Search results for: computational methods
16692 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digitally form-found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal-bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building the computer simulations, which often requires complicated digital fabrication workflows. AR HWDs, by contrast, have been used to build complex digital forms from bricks, wood, plastic, and steel without digital fabrication devices. These projects rely, instead, on the tacit knowledge and motor schemas of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient motor schemas for complex spatial reasoning. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving – computational, human, and material – may not only develop efficient structural form but also offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. Each module was modeled as a woven matrix of one-inch-diameter cords, and each woven matrix was transmitted to a holographic engine running on the HWDs. Craftspersons wearing the HWDs then wove wet cementitious cords within a simple falsework frame to match the minimal-bending form displayed in front of them. Once the woven components cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal-bending form, as the structure was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved by the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research – a workshop testing human interaction with a physics-engine simulation of string networks, and research on the use of HWDs to capture hand gestures in weaving – seeks to develop further interactivity with rope and cord towards a bi-directional workflow within full-scale building environments.
Keywords: augmented reality, cementitious composites, computational form finding, textile structures
Procedia PDF Downloads 176
16691 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
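As a loose illustration of the large-sample validation and defensive programming the abstract describes, the following Python sketch tests a stochastic sampler in isolation (the paper's own examples use R; the negative binomial model, parameter names, and tolerances here are assumptions, not taken from the study):

```python
import numpy as np

def sample_secondary_infections(r0, k, n, rng):
    """Stochastic component: draw secondary-infection counts from a
    negative binomial with mean r0 and dispersion k (illustrative model)."""
    if r0 <= 0 or k <= 0 or n <= 0:
        raise ValueError("r0, k and n must be positive")  # defensive programming
    p = k / (k + r0)
    return rng.negative_binomial(k, p, size=n)

def test_large_sample_mean():
    """Validate the stochastic element with a large sample: the empirical
    mean should be close to the theoretical mean r0."""
    rng = np.random.default_rng(42)
    draws = sample_secondary_infections(r0=2.5, k=0.3, n=200_000, rng=rng)
    assert abs(draws.mean() - 2.5) < 0.05

def test_invalid_input_rejected():
    """Special case: a non-positive parameter must be rejected cleanly."""
    rng = np.random.default_rng(0)
    try:
        sample_secondary_infections(r0=-1.0, k=0.3, n=10, rng=rng)
    except ValueError:
        return
    raise AssertionError("negative r0 should raise ValueError")
```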
Procedia PDF Downloads 10
16690 Hybrid Finite Element Analysis of Expansion Joints for Piping Systems in Aircraft Engine External Configurations and Nuclear Power Plants
Authors: Dong Wook Lee
Abstract:
This paper presents a method to analyze the stiffness of an expansion joint with structural support using a hybrid method combining computational and analytical approaches. Many expansion joints found in the tubes and ducts of mechanical structures are designed to absorb thermal expansion mismatch between structural members and to deal with misalignments introduced during the assembly/manufacturing processes. One of the important design perspectives is the system's vibrational characteristics. We calculate the stiffness as a characterization parameter for structural joint systems using combined Finite Element Analysis (FEA) and analytical methods. We apply the methods to two sample applications: external configurations of aircraft engines and nuclear power plant structures.
Keywords: expansion joint, expansion joint stiffness, finite element analysis, nuclear power plants, aircraft engine external configurations
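A minimal sketch of how an FEA-derived joint stiffness and an analytically estimated support stiffness might be combined into a single characterization parameter; the series arrangement and the numerical values are assumptions for illustration, not the paper's data:

```python
def series_stiffness(*k):
    """Equivalent stiffness of springs acting in series: 1/k_eq = sum(1/k_i)."""
    return 1.0 / sum(1.0 / ki for ki in k)

def parallel_stiffness(*k):
    """Equivalent stiffness of springs acting in parallel: k_eq = sum(k_i)."""
    return sum(k)

# Illustrative only: bellows stiffness from FEA combined with an
# analytically estimated structural-support stiffness.
k_bellows_fea = 1.2e5       # N/m, assumed FEA result
k_support_analytic = 8.0e5  # N/m, assumed analytical estimate
print(series_stiffness(k_bellows_fea, k_support_analytic))
```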
Procedia PDF Downloads 111
16689 A Novel Approach to 3D Thrust Vectoring CFD via Mesh Morphing
Authors: Umut Yıldız, Berkin Kurtuluş, Yunus Emre Muslubaş
Abstract:
Thrust vectoring, especially in military aviation, is widely used to improve maneuverability in already agile aircraft. As the concept is fairly new and cost-intensive to design and test, computational methods are useful in easing the preliminary design process. Computational Fluid Dynamics (CFD) can be utilized in many forms to simulate nozzle flow, and various CFD studies exist for both 2D mechanical and 3D injection-based thrust vectoring; 3D mechanical thrust vectoring analyses, however, are still lacking in variety, and the freely available test data are constrained to limited pitch angles and geometries. In this study, based on a test case provided by NASA, both steady and unsteady 3D CFD simulations are conducted to examine the aerodynamic performance of a mechanical thrust vectoring nozzle model and to validate the numerical model used. Steady analyses are performed to verify the flow characteristics of the nozzle at pitch angles of 0, 10 and 20 degrees, and the results are compared with experimental data. The pressure data obtained on the inner surface of the nozzle at each specified pitch angle and under different flow conditions with pressure ratios of 1.5, 2 and 4, as well as at azimuthal angles of 0, 45, 90, 135, and 180 degrees, exhibited a high level of agreement with the corresponding experimental results. To validate the CFD model, the insights from the steady analyses are utilized, followed by unsteady analyses covering a wide range of pitch angles from 0 to 20 degrees. Throughout the simulations, a mesh morphing method using a carefully calculated mathematical shape deformation model, which reproduces the vectored nozzle shape exactly at each point of its travel, is employed to dynamically alter the divergent part of the nozzle over time within this pitch angle range. The mesh-morphed vectored nozzle shapes were compared with the drawings provided by NASA, ensuring a complete match. This computational approach allowed the creation of a comprehensive database of results without the need to generate separate solution domains; the database contains results at every 0.01° increment of nozzle pitch angle. The unsteady analyses, generated using the morphing method, are found to be in excellent agreement with experimental data, further confirming the accuracy of the CFD model.
Keywords: thrust vectoring, computational fluid dynamics, 3D mesh morphing, mathematical shape deformation model
Procedia PDF Downloads 84
16688 Curvature-Based Methods for Automatic Coarse and Fine Registration in Dimensional Metrology
Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani
Abstract:
Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability and completeness. The obtained data are aligned and fused into a common coordinate system by a registration technique involving coarse and fine registration. Standardized iterative methods have been established for fine registration, such as Iterative Closest Point (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite the significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce registration error using curvature parameters. A specific distance considering curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the curvature features calculated beforehand at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
Keywords: discrete curvature, RANSAC transformation, Hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computed tomography
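A minimal numpy sketch of the kind of combined distance criterion described above, mixing Euclidean distance with a curvature-similarity term for correspondence searching; the curvature measure and the weighting are assumptions, not the authors' exact formulation:

```python
import numpy as np

def combined_distance(p, q, kp, kq, w_curv=0.5):
    """Correspondence metric mixing the Euclidean distance between points p and q
    with a term penalising dissimilar discrete curvatures kp and kq.
    The weight w_curv is an assumed tuning parameter."""
    return np.linalg.norm(p - q) + w_curv * abs(kp - kq)

def closest_correspondence(p, kp, target_pts, target_curv, w_curv=0.5):
    """Index of the target point minimising the combined distance to (p, kp)."""
    d = np.linalg.norm(target_pts - p, axis=1) + w_curv * np.abs(target_curv - kp)
    return int(np.argmin(d))
```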
Procedia PDF Downloads 424
16687 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm
Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri
Abstract:
This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction
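A compact sketch of the IRKA fixed-point iteration for a SISO system already reduced to standard state-space form (E = I), i.e. the strictly proper part only; the shift initialisation and the convergence test are assumptions, and the descriptor-specific handling of the polynomial part described above is not shown:

```python
import numpy as np

def irka_siso(A, b, c, r, max_iter=100, tol=1e-8):
    """Reduce dx/dt = A x + b u, y = c x to order r by IRKA (sketch only)."""
    n = A.shape[0]
    I = np.eye(n)
    sigma = np.logspace(-1, 2, r).astype(complex)  # assumed initial shifts
    for _ in range(max_iter):
        V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve((s * I - A).conj().T, c.conj()) for s in sigma])
        M = W.conj().T @ V
        Ar = np.linalg.solve(M, W.conj().T @ A @ V)
        new_sigma = -np.linalg.eigvals(Ar)           # mirror the reduced poles
        if np.max(np.abs(np.sort_complex(new_sigma) - np.sort_complex(sigma))) < tol:
            sigma = new_sigma
            break
        sigma = new_sigma
    br = np.linalg.solve(M, W.conj().T @ b)
    cr = c @ V
    return Ar, br, cr
```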
Procedia PDF Downloads 33
16686 Clustering-Based Computational Workload Minimization in Ontology Matching
Authors: Mansir Abubakar, Hazlina Hamdan, Norwati Mustapha, Teh Noranis Mohd Aris
Abstract:
In order to build a matching pattern for each class correspondence of an ontology, a set of attribute correspondences across two corresponding classes must be specified by clustering. Clustering reduces the size of the potential attribute correspondences considered in the matching activity, which significantly reduces the computational workload; otherwise, all attributes of a class would have to be compared with all attributes of the corresponding class. Most existing ontology matching approaches lack scalable attribute discovery methods, such as cluster-based attribute searching, which makes the ontology matching activity computationally expensive. It is therefore vital in ontology matching to design a scalable element or attribute correspondence discovery method that reduces the size of potential element correspondences during mapping, thereby reducing the computational workload of the matching process as a whole. The objectives of this work are 1) to design a clustering method for discovering similar attribute correspondences and relationships between ontologies, and 2) to discover element correspondences by classifying the elements of each class based on their value features using the K-medoids clustering technique. Discovering attribute correspondences is highly required for comparing instances when matching two ontologies. During the matching process, any two instances across two different data sets should be compared on their attribute values, so that they can be regarded as the same or not. Intuitively, any two instances that come from classes across which there is a class correspondence are likely to be identical to each other. Besides, any two instances that hold more similar attribute values are more likely to be matched than ones with less similar attribute values. Most of the time, similar attribute values exist in the two instances across which there is an attribute correspondence. This work presents how to classify the attributes of each class with K-medoids clustering and then map the clustered groups by their statistical value features. We also show how to map the attributes of a clustered group to the attributes of the mapped clustered group, generating a set of potential attribute correspondences that is applied to generate a matching pattern. The K-medoids clustering phase largely reduces the number of non-corresponding attribute pairs used for comparing instances, as only attribute pairs whose coverage probability reaches 100% and attributes above the specified threshold are considered potential attributes for matching. Using clustering reduces the size of the potential element correspondences to be considered during the mapping activity, which in turn reduces the computational workload significantly; otherwise, all elements of a class in the source ontology would have to be compared with all elements of the corresponding classes in the target ontology. K-medoids can ably cluster the attributes of each class, so that a proportion of non-corresponding attribute pairs is not considered when constructing the matching pattern.
Keywords: attribute correspondence, clustering, computational workload, k-medoids clustering, ontology matching
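A small, self-contained K-medoids routine (PAM-style alternation) clustering class attributes by simple value features, in the spirit of the method above; the example features and the omitted coverage-probability filter are illustrative assumptions:

```python
import numpy as np

def k_medoids(X, k, max_iter=100, seed=0):
    """Cluster rows of feature matrix X into k groups around actual data
    points (medoids). Returns (medoid_indices, labels)."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(max_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            within = D[np.ix_(members, members)].sum(axis=1)
            new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Illustrative attribute features, e.g. (mean string length, numeric ratio)
attrs = np.array([[4.0, 0.1], [5.0, 0.0], [12.0, 0.9], [11.0, 1.0]])
print(k_medoids(attrs, k=2))
```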
Procedia PDF Downloads 248
16685 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to contain an average of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, using haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
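A minimal sketch of one of the three haplotype definitions above, grouping ordered SNPs into blocks of a fixed SNP count (around 20 per block, the size the study found optimal); the SNP identifiers are illustrative, and the length-based and LD-based k-medoids variants are not shown:

```python
def haplotype_blocks_by_snp_count(snp_ids, snps_per_block=20):
    """Group ordered SNP identifiers into consecutive blocks of a fixed size."""
    return [snp_ids[i:i + snps_per_block]
            for i in range(0, len(snp_ids), snps_per_block)]

# Illustrative: SNPs on one chromosome, already ordered by physical position
snps = [f"SNP_{i}" for i in range(1, 101)]
blocks = haplotype_blocks_by_snp_count(snps, snps_per_block=20)
print(len(blocks), blocks[0][:3])
```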
Procedia PDF Downloads 141
16684 The Development of an Automated Computational Workflow to Prioritize Potential Resistance Variants in HIV Integrase Subtype C
Authors: Keaghan Brown
Abstract:
The prioritization of drug resistance mutations impacting protein folding or protein-drug and protein-DNA interactions within macromolecular systems is critical to the success of treatment regimens. With a continual increase in computational tools to assess these impacts, scalability and reproducibility have become essential components of computational analysis and experimental research. Here we introduce a bioinformatics pipeline that combines several structural analysis tools in a simplified workflow, optimizing the available computational hardware and software to automatically ease the flow of data transformations. Utilizing pre-established software tools, it was possible to develop a pipeline with a set of pre-defined functions that automate mutation introduction into the HIV-1 Integrase protein structure, calculate the gain and loss of polar interactions, and calculate the change in protein folding energy. Additionally, an automated molecular dynamics analysis was implemented, which reduces the constant need for user input and output management. The resulting pipeline, Automated Mutation Introduction and Analysis (AMIA), is an open-source set of scripts designed to introduce and analyse the effects of mutations on the static protein structure as well as on the multi-conformational states obtained from molecular dynamics simulations. The workflow allows the user to visualize all outputs in a user-friendly manner, thereby enabling the prioritization of variant systems for experimental validation.
Keywords: automated workflow, variant prioritization, drug resistance, HIV Integrase
Procedia PDF Downloads 77
16683 Documenting the 15th Century Prints with RTI
Authors: Peter Fornaro, Lothar Schmitt
Abstract:
The Digital Humanities Lab and the Institute of Art History at the University of Basel are collaborating in the SNSF research project ‘Digital Materiality’. Its goal is to develop and enhance existing methods for the digital reproduction of cultural heritage objects in order to support art historical research. One part of the project focuses on the visualization of a small eye-catching group of early prints that are noteworthy for their subtle reliefs and glossy surfaces. Additionally, this group of objects – known as ‘paste prints’ – is characterized by its fragile state of preservation. Because of the brittle substances that were used for their production, most paste prints are heavily damaged and thus very hard to examine. These specific material properties make a photographic reproduction extremely difficult. To obtain better results we are working with Reflectance Transformation Imaging (RTI), a computational photographic method that is already used in archaeological and cultural heritage research. This technique allows documenting how three-dimensional surfaces respond to changing lighting situations. Our first results show that RTI can capture the material properties of paste prints and their current state of preservation more accurately than conventional photographs, although there are limitations with glossy surfaces because the mathematical models that are included in RTI are kept simple in order to keep the software robust and easy to use. To improve the method, we are currently developing tools for a more detailed analysis and simulation of the reflectance behavior. An enhanced analytical model for the representation and visualization of gloss will increase the significance of digital representations of cultural heritage objects. For collaborative efforts, we are working on a web-based viewer application for RTI images based on WebGL in order to make acquired data accessible to a broader international research community. At the ICDH Conference, we would like to present unpublished results of our work and discuss the implications of our concept for art history, computational photography and heritage science.
Keywords: art history, computational photography, paste prints, reflectance transformation imaging
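RTI viewers commonly rely on a per-pixel polynomial texture map (PTM) fitted to the captured luminances as a function of light direction; the sketch below shows such a least-squares fit under assumed light directions. It illustrates the simple reflectance model the abstract mentions, not the project's enhanced gloss model:

```python
import numpy as np

def fit_ptm_pixel(light_dirs, luminances):
    """Least-squares fit of the 6-term PTM model
    L ≈ a0*lu² + a1*lv² + a2*lu*lv + a3*lu + a4*lv + a5
    for one pixel, given unit light directions (lu, lv) and measured luminances."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    B = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(B, luminances, rcond=None)
    return coeffs

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted model for a new, interactively chosen light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```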
Procedia PDF Downloads 276
16682 Integrating Computational Thinking into Classroom Practice – A Case Study
Authors: Diane Vassallo, Leonard Busuttil
Abstract:
Recent educational developments have seen increasing attention attributed to Computational Thinking (CT) and its integration into primary and secondary school curricula. CT is more than simply being able to use technology; it encompasses fundamental Computer Science concepts which are deemed very important in developing the correct mindset for our future digital citizens. The case study presented in this article explores the journey of a Maltese secondary school teacher in his efforts to plan, develop and integrate CT within the context of a local classroom. The teacher participant was recruited from the Malta EU Code Week summer school, a pilot initiative that stemmed from the EU Code Week Team's Train the Trainer program. The qualitative methodology involved interviews with the participant teacher as well as an analysis of the artefacts created by the students during the lessons. The results shed light on the numerous challenges and obstacles that the teacher encountered in his integration of CT, and portray some brilliant examples of good practice which can substantially inform further research and practice around the integration of CT in the classroom.
Keywords: computational thinking, digital citizens, digital literacy, technology integration
Procedia PDF Downloads 154
16681 Numerical Investigation of Cavitation on Different Venturi Shapes by Computational Fluid Dynamics
Authors: Sedat Yayla, Mehmet Oruc, Shakhwan Yaseen
Abstract:
Cavitation phenomena can severely impair machine parts such as pumps, propellers and impellers, or other devices, when the pressure in the fluid falls below the liquid's saturation pressure. To evaluate the influence of cavitation, two-dimensional computational fluid dynamics (CFD) venturi models were applied in this research with a variety of inlet pressure values, throat lengths and vapor fluid contents. Three different vapor contents (0%, 5%, 10%), five inlet pressures (2, 4, 6, 8 and 10 atm) and two venturi models were employed at different throat lengths (5, 10, 15 and 20 mm) to discover the impact of each parameter on the cavitation number. It is found that inlet pressure and vapor fluid content are positively correlated with the cavitation number. Furthermore, the velocity remains almost constant at inlet pressures of 6, 8 and 10 atm, whereas increasing the length of the throat results in a substantial escalation of the throat velocity at inlet pressures of 2 and 4 atm. Velocity and cavitation number are negatively correlated. The resulting cavitation numbers varied between 0.092 and 0.495, depending on the velocity values of the throat.
Keywords: cavitation number, computational fluid dynamics, mixture of fluid, two-phase flow, velocity of throat
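The cavitation numbers reported above follow the conventional definition based on a reference pressure, the vapor pressure, and the throat velocity; a sketch with assumed water properties (not values from the study):

```python
def cavitation_number(p_ref, p_vapor, rho, v_throat):
    """sigma = (p_ref - p_vapor) / (0.5 * rho * v_throat**2)"""
    return (p_ref - p_vapor) / (0.5 * rho * v_throat**2)

# Illustrative values only: water at ~20 °C and an assumed throat velocity
print(cavitation_number(p_ref=101_325.0, p_vapor=2_339.0, rho=998.0, v_throat=30.0))
```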
Procedia PDF Downloads 402
16680 Conjugate Heat Transfer Analysis of a Combustion Chamber using ANSYS Computational Fluid Dynamics to Estimate the Thermocouple Positioning in a Chamber Wall
Authors: Muzna Tariq, Ihtzaz Qamar
Abstract:
In most engineering cases, the working temperatures inside a combustion chamber are high enough to lie beyond the operational range of thermocouples. Furthermore, design and manufacturing limitations restrict the use of internal thermocouples in many applications. Heat transfer inside a combustion chamber is caused by the interaction of the post-combustion hot fluid with the chamber wall. Heat transfer that involves an interaction between a fluid and a solid is categorized as Conjugate Heat Transfer (CHT). Accordingly, a CHT analysis is performed using the ANSYS CFD tool to estimate theoretically precise thermocouple positions on the combustion chamber wall where excessive temperatures (beyond the thermocouple range) can be avoided. In accordance with these Computational Fluid Dynamics (CFD) results, a combustion chamber is designed, and a prototype is manufactured with multiple thermocouple ports positioned at the specified distances so that the temperature of the hot gases can be measured on the chamber wall where the temperatures do not exceed the thermocouple working range.
Keywords: computational fluid dynamics, conduction, conjugate heat transfer, convection, fluid flow, thermocouples
Procedia PDF Downloads 147
16679 Computationally Efficient Stacking Sequence Blending for Composite Structures with a Large Number of Design Regions Using Cellular Automata
Authors: Ellen Van Den Oord, Julien Marie Jan Ferdinand Van Campen
Abstract:
This article introduces a computationally efficient method for stacking sequence blending of composite structures. The computational efficiency makes the presented method especially interesting for composite structures with a large number of design regions. Optimization of composite structures with an unequal load distribution may lead to locally optimized thicknesses and ply orientations that are incompatible with one another. Blending constraints can be enforced to achieve structural continuity. In the literature, many methods can be found that implement structural continuity by means of stacking sequence blending in one way or another. The complexity of the problem makes the blending of a structure with a large number of adjacent design regions, and thus stacking sequences, prohibitive. In this work, the local stacking sequence optimization is preconditioned using a method from the literature that couples the mechanical behavior of the laminate, in the form of lamination parameters, to blending constraints, yielding near-optimal, easy-to-blend designs. The preconditioned design is then fed to a cellular-automata scheme developed by the authors. The method is applied to the benchmark 18-panel horseshoe blending problem to demonstrate its performance. The computational efficiency of the proposed method makes it especially suited for composite structures with a large number of design regions.
Keywords: composite, blending, optimization, lamination parameters
Procedia PDF Downloads 229
16678 Computational Fluid Dynamics Analysis of Convergent–Divergent Nozzle and Comparison against Theoretical and Experimental Results
Authors: Stewart A. Keir, Faik A. Hamad
Abstract:
This study uses both analytical and experimental methods to examine the accuracy of Computational Fluid Dynamics (CFD) models that can then be used for more complex analyses, accurately representing more elaborate flow phenomena such as internal shockwaves and boundary layers. The geometry used in the analytical study and the CFD model is taken from the experimental rig. The analytical study is undertaken using isentropic and adiabatic relationships, and from its output a 'shockwave location tool' is created. The results from the analytical study are then used to optimize the redesign of the experimental rig for more favorable placement of pressure taps and to gain a much better representation of the shockwaves occurring in the divergent section of the nozzle. The CFD model is then optimized through the selection of different parameters, e.g. turbulence models (Spalart-Allmaras, Realizable k-epsilon and Standard k-omega), in order to develop an accurate, robust model. The results from the CFD model can then be compared directly to the experimental and analytical results in order to gauge the accuracy of each method of analysis. The CFD model is used to visualize the variation of parameters such as velocity/Mach number, pressure and turbulence across the shock, and the CFD results are used to investigate the interaction between the shock wave and the boundary layer. The validated model can then be used to modify nozzle designs, which may offer better performance and ease of manufacture and may present feasible improvements to existing high-speed flow applications.
Keywords: CFD, nozzle, fluent, gas dynamics, shock-wave
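A sketch of the textbook isentropic relations on which a shockwave-location tool of this kind typically rests (the area-Mach relation and the total-to-static pressure ratio); this is standard gas dynamics, not the authors' actual code, and the numerical inversion shown is an assumed implementation:

```python
import numpy as np

def area_ratio(M, gamma=1.4):
    """Isentropic area-Mach relation A/A* for a given Mach number."""
    return (1.0 / M) * ((2.0 / (gamma + 1.0)) *
            (1.0 + 0.5 * (gamma - 1.0) * M**2)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def p0_over_p(M, gamma=1.4):
    """Isentropic total-to-static pressure ratio."""
    return (1.0 + 0.5 * (gamma - 1.0) * M**2) ** (gamma / (gamma - 1.0))

def mach_from_area_ratio(ar, supersonic=True, gamma=1.4):
    """Invert A/A* by bisection on the chosen (subsonic or supersonic) branch."""
    lo, hi = (1.0, 50.0) if supersonic else (1e-6, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (area_ratio(mid, gamma) - ar > 0) == supersonic:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Example: supersonic Mach number and static-to-total pressure ratio at A/A* = 2
M = mach_from_area_ratio(2.0)
print(M, 1.0 / p0_over_p(M))
```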
Procedia PDF Downloads 234
16677 Optimal Scheduling of Trains in Complex National Scale Railway Networks
Authors: Sanat Ramesh, Tarun Dutt, Abhilasha Aswal, Anushka Chandrababu, G. N. Srinivasa Prasanna
Abstract:
Optimal schedule generation for a large national railway network operating thousands of passenger trains over tens of thousands of kilometers of track is a grand computational challenge in itself. We present heuristics based on a Mixed Integer Program (MIP) formulation for local optimization. These methods provide flexibility in scheduling new trains with varying speeds and delays and improve the utilization of infrastructure. We propose methods that provide a robust solution, with hundreds of trains being scheduled over a portion of the railway network without significant increases in delay. We also provide techniques to validate the nominal schedules thus generated against globally correlated variations in travel times, thereby enabling us to detect conflicts arising due to delays. Our validation, which assumes only the support of the arrival and departure time distributions, takes on the order of a few minutes for a portion of the network and is computationally efficient enough to handle the entire network.
Keywords: mixed integer programming, optimization, railway network, train scheduling
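A toy sketch of the kind of local MIP involved, written with PuLP: two trains share a single-track block, an ordering decision and a headway are enforced with a big-M constraint, and total delay against a preferred timetable is minimized. The variables, times, and headway value are assumptions, not the paper's formulation:

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

prob = LpProblem("toy_headway_scheduling", LpMinimize)

preferred = {"T1": 10.0, "T2": 12.0}        # preferred entry times (minutes), assumed
headway, big_m = 5.0, 1000.0

t = {k: LpVariable(f"entry_{k}", lowBound=v) for k, v in preferred.items()}
order = LpVariable("T1_before_T2", cat=LpBinary)

# Headway on the shared block, enforced in whichever order the solver picks
prob += t["T2"] - t["T1"] >= headway - big_m * (1 - order)
prob += t["T1"] - t["T2"] >= headway - big_m * order

# Minimise total delay against the preferred timetable
prob += lpSum(t[k] - preferred[k] for k in preferred)

prob.solve()
print({k: v.value() for k, v in t.items()})
```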
Procedia PDF Downloads 159
16676 Julia-Based Computational Tool for Composite System Reliability Assessment
Authors: Josif Figueroa, Kush Bubbar, Greg Young-Morris
Abstract:
The reliability evaluation of composite generation and bulk transmission systems is crucial for ensuring a reliable supply of electrical energy to significant system load points. However, evaluating adequacy indices using probabilistic methods such as sequential Monte Carlo simulation can be computationally expensive. Despite this, it is necessary when time-varying and interdependent resources, such as renewables and energy storage systems, are involved. Recent advances in solving power network optimization problems and in parallel computing have improved runtime performance while maintaining solution accuracy. This work introduces CompositeSystems, an open-source composite system reliability evaluation tool developed in Julia™, to address the current deficiencies of commercial and non-commercial tools. Its design, validation, and effectiveness are presented, including an analysis of two different formulations of the Optimal Power Flow problem. The simulations demonstrate excellent agreement with existing published studies while improving replicability and reproducibility. Overall, the proposed tool can provide valuable insights into the performance of transmission systems, making it an important addition to the existing toolbox for power system planning.
Keywords: open-source software, composite system reliability, optimization methods, Monte Carlo methods, optimal power flow
Procedia PDF Downloads 75
16675 Quantification of Aerodynamic Variables Using Analytical Technique and Computational Fluid Dynamics
Authors: Adil Loya, Kamran Maqsood, Muhammad Duraid
Abstract:
Aerodynamic stability coefficients need to be known before any unmanned aircraft flight is performed. This requires expertise in the aerodynamics and stability control of the aircraft. Efficacious performance of an aircraft requires that a well-defined flight path and the aerodynamics be established beforehand. This paper presents a study of the aerodynamics of an unmanned aerial vehicle (UAV) during flight conditions. The research comprises comparative studies of different flight aerodynamic parameters, obtained using two open-source analytical software programs. These software packages are DATCOM and XFLR5, which help in depicting the flight aerodynamic variables. Computational fluid dynamics (CFD) was also used to perform the aerodynamic analysis, for which Star-CCM+ was used. The output trends of the study demonstrate close agreement between the two software programs and the CFD results. It can be seen that the coefficient of lift (CL) obtained from DATCOM and XFLR5 is similar to the CL of the CFD simulation. In a similar manner, other potential aerodynamic stability parameters obtained from the analytical software are in good agreement with the CFD results.
Keywords: XFLR5, DATCOM, computational fluid dynamics, unmanned aerial vehicle
Procedia PDF Downloads 298
16674 A Computational Study of the Effect of Intake Design on Volumetric Efficiency for Best Performance in Motorsport
Authors: Dominic Wentworth-Linton, Shian Gao
Abstract:
This project investigated the effect of velocity stacks on the intakes of internal combustion engines for motorsport applications. The intake systems in motorsport are predominantly fuel-injected, with a plate mounted for the stacks. Using Computational Fluid Dynamics software, the relationship between stack length and power and torque delivery across the engine's rev range was investigated, and the results were used to choose the best option for the intended motorsport discipline. The test results are expected to vary with engine geometry and its natural manufacturer characteristics. The test was also relevant in bridging computational data and real-world testing, as the results show flow, pressure and velocity readings while the behaviour of the engine is inferred from the nature of each test. The results of the data analysis were then tested in a real-life simulation on a dynamometer to prove the effect of stack length on power and torque delivery, which helps determine the most suitable stack for the Vauxhall engine for rallying in the Caribbean.
Keywords: CFD simulation, internal combustion engine, intake system, dynamometer test
Procedia PDF Downloads 284
16673 Malaria Parasite Detection Using Deep Learning Methods
Authors: Kaustubh Chakradeo, Michael Delves, Sofya Titarenko
Abstract:
Malaria is a serious disease which affects hundreds of millions of people around the world each year. If not treated in time, it can be fatal. Despite recent developments in malaria diagnostics, microscopy remains the most common method of detecting malaria. Unfortunately, the accuracy of microscopic diagnosis depends on the skill of the microscopist and limits the throughput of malaria diagnosis. With the development of Artificial Intelligence tools, and Deep Learning techniques in particular, it is possible to lower the cost while achieving an overall higher accuracy. In this paper, we present a VGG-based model and compare it with previously developed models for identifying infected cells. Our model surpasses most previously developed models in a range of accuracy metrics. The model has the advantage of being constructed from a relatively small number of layers, which reduces the required computing resources and computational time. Moreover, we test our model on two types of datasets and argue that the currently developed deep-learning-based methods cannot efficiently distinguish between infected and contaminated cells; a more precise study of suspicious regions is required.
Keywords: convolution neural network, deep learning, malaria, thin blood smears
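For illustration, a much smaller VGG-style binary classifier in PyTorch for cell-image patches; the layer sizes, input resolution, and class count are assumptions for a sketch, not the authors' architecture:

```python
import torch
import torch.nn as nn

class SmallVGG(nn.Module):
    """A reduced VGG-style stack of conv-conv-pool blocks plus a small
    classifier head; input assumed to be 3x64x64 cell patches."""
    def __init__(self, num_classes=2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2))
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 8 * 8, 256), nn.ReLU(inplace=True),
            nn.Dropout(0.5), nn.Linear(256, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallVGG()
logits = model(torch.randn(4, 3, 64, 64))  # infected vs. uninfected logits
print(logits.shape)                         # torch.Size([4, 2])
```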
Procedia PDF Downloads 131
16672 Technology in the Calculation of People Health Level: Design of a Computational Tool
Authors: Sara Herrero Jaén, José María Santamaría García, María Lourdes Jiménez Rodríguez, Jorge Luis Gómez González, Adriana Cercas Duque, Alexandra González Aguna
Abstract:
Background: The concept of health has evolved throughout history, and a person's health level is determined by their own perception. It is a dynamic process over time, so variations can be seen from one moment to the next. Knowing the health of the patients under one's care therefore facilitates decision-making in the provision of care. Objective: To design a technological tool that calculates people's health level sequentially over time. Material and Methods: Deductive methodology through text analysis, extraction and logical knowledge formalization, and education with an expert group. Study period: September 2015 to the present. Results: A computational tool for use by health personnel has been designed. It comprises 11 variables, each of which can be given a value from 1 to 5, with 1 being the minimum and 5 the maximum value. By adding the results of the 11 variables we obtain a magnitude at a given time: the person's health level. The health calculator thus represents a person's health level at a point in time, and establishing temporal cuts is useful for determining the evolution of the individual over time. Conclusion: Information and Communication Technologies (ICT) enable training and assistance in various disciplinary areas, and their relevance in the field of health is particularly noteworthy. Based on the formalization of health, care acts can be directed towards some of the propositional elements of the concept above, and these care acts will modify the person's health level. The health calculator allows the prioritization and prediction of different health care strategies in hospital units.
Keywords: calculator, care, eHealth, health
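A minimal sketch of the scoring logic described: 11 variables, each scored 1 to 5, summed into a health-level magnitude, with timestamped records enabling temporal cuts. The variable values and dates are hypothetical, not the authors' formalization:

```python
from datetime import date

VARIABLES = 11  # each scored from 1 (minimum) to 5 (maximum)

def health_level(scores):
    """Sum the 11 variable scores into a single health-level magnitude (11-55)."""
    if len(scores) != VARIABLES or not all(1 <= s <= 5 for s in scores):
        raise ValueError("exactly 11 scores in the range 1-5 are required")
    return sum(scores)

# Temporal cuts: one record per assessment date, showing the evolution over time
history = {
    date(2016, 3, 1): health_level([3] * 11),
    date(2016, 6, 1): health_level([4, 4, 3, 5, 3, 4, 4, 3, 4, 5, 4]),
}
print(history)
```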
Procedia PDF Downloads 265
16671 Computational Study of Composite Films
Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova
Abstract:
Composite and nanocomposite films represent a class of promising materials and are often the object of study due to their mechanical, electrical and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into the metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here, followed by characterization of the prepared composite structures by image analysis of their sections or projections. However, an analysis of various morphological methods must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. -The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of the conducting paths in them, in dependence on the technology of composite films.
Keywords: composite films, computer modelling, image analysis, nanocomposite films
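A sketch of hard-sphere generation of a simulated metal/dielectric film by random sequential addition in 2D, together with the resulting filling factor; the box size, particle radius, and target filling factor are illustrative assumptions:

```python
import numpy as np

def generate_hard_spheres(box=100.0, radius=2.0, target_fill=0.3,
                          max_tries=200_000, seed=1):
    """Randomly place non-overlapping discs (the metal component) inside a
    square dielectric matrix until the target filling factor is reached."""
    rng = np.random.default_rng(seed)
    centers = []
    area_particle = np.pi * radius**2
    fill = 0.0
    for _ in range(max_tries):
        if fill >= target_fill:
            break
        c = rng.uniform(radius, box - radius, size=2)
        if all(np.linalg.norm(c - p) >= 2 * radius for p in centers):
            centers.append(c)
            fill = len(centers) * area_particle / box**2
    return np.array(centers), fill

centers, fill = generate_hard_spheres()
print(len(centers), round(fill, 3))  # number of metal particles and filling factor
```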
Procedia PDF Downloads 393
16670 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we take a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature. A traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
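A rough sketch of how per-feature KPIs of this kind could be computed; the exact definitions of PoCE and MDG are not given in the abstract, so the rank-agreement and mean-gap formulations below, and the example arrays, are assumptions for illustration only:

```python
import numpy as np

def poce_per_feature(reference_ranks, explained_ranks):
    """Percentage of Correct Explanations: per feature, the share of instances
    where the explainer reproduces the reference importance rank.
    Inputs: (n_instances, n_features) arrays of ranks."""
    return (reference_ranks == explained_ranks).mean(axis=0) * 100.0

def mdg_per_feature(reference_contrib, explained_contrib):
    """Mean Discriminatory Gap: mean absolute gap between reference and
    explained feature contributions, per feature."""
    return np.abs(reference_contrib - explained_contrib).mean(axis=0)

# Illustrative reference (intrinsic model) ranks vs. SHAP/LIME ranks
ref_rank = np.array([[1, 2, 3], [1, 2, 3], [2, 1, 3]])
exp_rank = np.array([[1, 2, 3], [2, 1, 3], [2, 1, 3]])
print(poce_per_feature(ref_rank, exp_rank))
```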
Procedia PDF Downloads 95
16669 Numerical Simulation of a Point Absorber Wave Energy Converter Using OpenFOAM in Indian Scenario
Authors: Pooja Verma, Sumana Ghosh
Abstract:
There is a growing need worldwide for alternative ways of generating power. The reasons can be attributed to the limited resources of fossil fuels, environmental pollution, the increasing cost of conventional fuels, and the low efficiency of energy conversion in existing systems. In this context, one of the potential alternatives for power generation is wave energy. However, it is difficult to estimate the amount of electrical energy generated in irregular sea conditions by experimental and/or analytical methods. Therefore, in this work, a numerical wave tank is developed using the computational fluid dynamics software OpenFOAM, in which the waves2Foam utility is used to carry out the simulations. The computational domain is a tank of dimensions 5 m × 1.5 m × 1 m with a floating object of dimensions 0.5 m × 0.2 m × 0.2 m. Regular waves are generated at the inlet of the wave tank according to Stokes second-order theory. The main objective of the present study is to validate the numerical model against existing experimental data; the model shows good agreement with existing experimental data for floater displacement. The model is then exploited to estimate the energy extraction due to the movement of such a point absorber in real sea conditions. Scaled-down wave properties such as wave height and wavelength are used as input parameters, and seasonal variations are also considered.
Keywords: OpenFOAM, numerical wave tank, regular waves, floating object, point absorber
Procedia PDF Downloads 353
16668 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, we price American options under jump diffusion models using efficient time-dependent numerical methods. Several techniques are integrated to reduce the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction decomposition technique is applied to the rational approximation schemes to overcome the complexity of inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to demonstrate the accuracy and efficiency of the proposed method.
Keywords: integral differential equations, jump-diffusion model, American options, rational approximation
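The O(M²) to O(M log M) reduction relies on the dense jump matrix having Toeplitz/circulant structure on a uniform grid, so its product with a vector can be taken in the Fourier domain; a minimal sketch for the circulant case (the Toeplitz-to-circulant embedding used in practice is omitted, and the matrix here is illustrative):

```python
import numpy as np

def circulant_matvec(first_col, x):
    """Multiply the circulant matrix defined by its first column with x
    in O(M log M) using the FFT, instead of forming the M x M matrix."""
    return np.real(np.fft.ifft(np.fft.fft(first_col) * np.fft.fft(x)))

# Check against the explicit O(M^2) product on a small example
M = 8
c = np.random.default_rng(0).standard_normal(M)
x = np.random.default_rng(1).standard_normal(M)
C = np.column_stack([np.roll(c, k) for k in range(M)])  # explicit circulant matrix
print(np.allclose(C @ x, circulant_matvec(c, x)))        # True
```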
Procedia PDF Downloads 122
16666 Investigation of the Capability of RELAP5 to Solve Complex Fuel Geometry
Authors: D. Abdelrazek, M. NaguibAly, A. A. Badawi, Asmaa G. Abo Elnour, A. A. El-Kafas
Abstract:
This work was developed within IAEA Coordinated Research Program 1496, “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal-hydraulic computational methods and tools for operation and safety analysis of research reactors.” The study investigates the capability of the code RELAP5/Mod3.4 to handle complex fuel geometry. Its results are compared to those of PARET, a code commonly used in the thermal-hydraulic analysis of research reactors and belonging to the MTR-PC group. The WWR-SM reactor at the Institute of Nuclear Physics (INP) in the Republic of Uzbekistan is simulated using both PARET and RELAP5 at steady state, and the results from the two codes are compared. The RELAP5 code succeeded in solving the complex fuel geometry, whereas the PARET code needed some additional calculations to obtain the final result. Although the final results from PARET are more accurate, the small differences between the two sets of results make the RELAP5 code recommendable in the case of complex fuel assemblies.
Keywords: complex fuel geometry, PARET, RELAP5, WWR-SM reactor
Procedia PDF Downloads 333
16665 Compressible Lattice Boltzmann Method for Turbulent Jet Flow Simulations
Authors: K. Noah, F.-S. Lien
Abstract:
In Computational Fluid Dynamics (CFD), there is a variety of numerical methods, some of which depend on macroscopic model representations. These models can be solved by finite-volume, finite-element or finite-difference methods, whereas other approaches rely on a microscopic description. The lattice Boltzmann method (LBM), by contrast, is considered a mesoscopic particle method, with its scale lying between the macroscopic and microscopic scales. The LBM works well for solving incompressible flow problems, but certain limitations arise when solving compressible flows, particularly at high Mach numbers. An improved lattice Boltzmann model for compressible flow problems is presented in this research study. A higher-order Taylor series expansion of the Maxwell equilibrium distribution function is used to overcome the limitations of the LBM when solving high-Mach-number flows. Large eddy simulation (LES) is implemented in the LBM to simulate turbulent jet flows. The results have been validated with available experimental data for turbulent compressible free jet flow at subsonic speeds.
Keywords: compressible lattice Boltzmann method, multiple relaxation times, large eddy simulation, turbulent jet flows
Procedia PDF Downloads 274
16664 Using Computational Fluid Dynamics to Model and Design a Preventative Application for Strong Wind
Authors: Ming-Hwi Yao, Su-Szu Yang
Abstract:
Typhoons are one of the major types of disasters that affect Taiwan each year and cause severe damage to agriculture. Indeed, the damage exacted during a typical typhoon season can reach $1 billion and accounts for nearly 75% of yearly agricultural losses. However, there is no consensus on how to reduce the damage caused by the strong winds and heavy precipitation engendered by typhoons. One suggestion is the use of windbreak nets, which are a low-cost and easy-to-use disaster mitigation strategy for crop production. In the present study, we conducted an evaluation to determine the optimal conditions of a windbreak net by using a computational fluid dynamics (CFD) model. This model may be used as a reference for crop protection. The results showed that the CFD simulation validated windbreak nets of different mesh sizes and heights in the experimental area; thus, CFD is an efficient tool for evaluating the effectiveness of windbreak nets. Specifically, the effective wind protection length and height were found to be 6 and 1.3 times the length and height of the windbreak net, respectively. During a real typhoon, maximum wind gusts of 18 m s⁻¹ can be reduced to 4 m s⁻¹ by using a windbreak net with a 70% blocking rate. In short, windbreak nets are significantly effective in protecting typhoon-affected areas.
Keywords: computational fluid dynamics, disaster, typhoon, windbreak net
Procedia PDF Downloads 192
16663 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models
Authors: Yoonsuh Jung
Abstract:
As DNA microarray data contain relatively small sample sizes compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidate values of the tuning parameter first and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show, on real and simulated data sets, that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search
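A minimal sketch of the candidate-averaging idea: keep the few tuning parameter values with the smallest cross-validated scores and average them with performance-based weights. The exponential weighting scheme, the number of candidates, and the toy score curve below are assumptions, not the authors' exact formulation:

```python
import numpy as np

def averaged_tuning_parameter(lambdas, cv_scores, n_candidates=5):
    """Average the best few candidate penalty parameters, weighted so that
    smaller CV scores receive larger weights (illustrative weighting)."""
    lambdas = np.asarray(lambdas, dtype=float)
    cv_scores = np.asarray(cv_scores, dtype=float)
    best = np.argsort(cv_scores)[:n_candidates]
    w = np.exp(-(cv_scores[best] - cv_scores[best].min()))
    w /= w.sum()
    return float(np.sum(w * lambdas[best]))

grid = np.logspace(-3, 1, 50)   # candidate penalty parameters
scores = (np.log10(grid) + 1.2) ** 2 + 0.05 * np.random.default_rng(0).random(50)
print(averaged_tuning_parameter(grid, scores))
```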
Procedia PDF Downloads 416
16662 Towards a Computational Model of Consciousness: Global Abstraction Workspace
Authors: Halim Djerroud, Arab Ali Cherif
Abstract:
We assume that conscious functions are implemented automatically; in other words, consciousness, as well as the non-conscious aspects of human thought, planning, and perception, is produced by biologically adaptive algorithms. We propose that the mechanisms of consciousness can be produced using adaptive algorithms similar to those executed by these mechanisms. In this paper, we propose a computational model of consciousness, the "Global Abstraction Workspace", which is an internal environmental model perceived as a multi-agent system. This system is able to evolve and to generate new data and processes, as well as actions in the environment.
Keywords: artificial consciousness, cognitive architecture, global abstraction workspace, multi-agent system
Procedia PDF Downloads 341