Search results for: computational fluid mechanics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3906

2376 Computational Aerodynamic Shape Optimisation Using a Concept of Control Nodes and Modified Cuckoo Search

Authors: D. S. Naumann, B. J. Evans, O. Hassan

Abstract:

This paper outlines the development of an automated aerodynamic optimisation algorithm using a novel method of parameterising a computational mesh by employing user-defined control nodes. The shape boundary movement is coupled to the movement of the control nodes via a quasi-1D linear deformation. Additionally, a second-order smoothing step has been integrated to act on the boundary during the mesh movement, based on the change in its second derivative. This allows for both linear and non-linear shape transformations, dependent on the preference of the user. The domain mesh movement is then coupled to the shape boundary movement via a Delaunay graph mapping. A Modified Cuckoo Search (MCS) algorithm is used for optimisation within the prescribed design space defined by the allowed range of control node displacement. A finite volume compressible Navier-Stokes solver is used for aerodynamic modelling to predict aerodynamic design fitness. The resulting coupled algorithm is applied to a range of test cases in two dimensions, including the design of a subsonic, transonic and supersonic intake, and the optimisation approach is compared with more conventional optimisation strategies. Ultimately, the algorithm is tested on a three-dimensional wing optimisation case.
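
The abstract does not give the MCS update rules; as a rough illustration of the cuckoo-search family it builds on, the sketch below optimizes a vector of control-node displacements with Lévy-flight steps against a placeholder fitness function (the real fitness would come from the Navier-Stokes solver; bounds, population size and step scale are assumed values):

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

# Rough cuckoo-search sketch over control-node displacements. The fitness
# function is a cheap analytic placeholder for the CFD-based aerodynamic
# fitness.
def fitness(x):                       # lower is better (e.g., a drag proxy)
    return float(np.sum((x - 0.3) ** 2))

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

n_nests, dim, lo, hi, pa = 15, 4, -1.0, 1.0, 0.25
nests = rng.uniform(lo, hi, (n_nests, dim))    # candidate node displacements
for _ in range(200):
    best = nests[np.argmin([fitness(n) for n in nests])]
    for i in range(n_nests):
        trial = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), lo, hi)
        if fitness(trial) < fitness(nests[i]):
            nests[i] = trial
    # Abandon a fraction pa of the worst nests and rebuild them randomly.
    worst = np.argsort([fitness(n) for n in nests])[-int(pa * n_nests):]
    nests[worst] = rng.uniform(lo, hi, (len(worst), dim))

best = nests[np.argmin([fitness(n) for n in nests])]
print("best displacements:", best, "fitness:", fitness(best))
```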

Keywords: mesh movement, aerodynamic shape optimization, cuckoo search, shape parameterisation

Procedia PDF Downloads 336
2375 Information Theoretic Approach for Beamforming in Wireless Communications

Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif

Abstract:

Beamforming is a signal processing technique extensively utilized in wireless communications and radars for desired signal intensification and interference signal minimization through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array, to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, Mutual Information (MI) extrema are evaluated through an energy-constrained objective function, which is based on a priori information of the interference source and the desired array factor. Signal to Interference plus Noise Ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time and spatial selectivity in an energy-constrained optimization problem. The employed method yields lower computational complexity, which is demonstrated through comparative analysis with conventional methods. MI-based beamforming offers enhancement of signal integrity in a degraded environment while reducing computational intricacy and correlating key performance indicators.
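
As an illustration of the quantities involved, the following sketch evaluates output SINR for a weight vector on a uniform linear array; the MVDR-style weights shown are one of the conventional benchmarks such MI-based schemes are compared against, and all array parameters are assumed values:

```python
import numpy as np

# Minimal sketch: output SINR of a weight vector on a uniform linear
# array (ULA). Steering angles, element count and noise power are
# assumed values, not those of the paper.
def steering(theta_deg, n=8, d=0.5):
    # ULA steering vector for element spacing d (in wavelengths).
    k = 2 * np.pi * d * np.arange(n)
    return np.exp(1j * k * np.sin(np.deg2rad(theta_deg)))

a_s = steering(10.0)                 # desired signal direction
a_i = steering(-35.0)                # interference direction
noise_pow, int_pow = 0.1, 1.0

# Interference-plus-noise covariance and MVDR-style weights (one
# conventional benchmark the MI-based method would be compared against).
R = int_pow * np.outer(a_i, a_i.conj()) + noise_pow * np.eye(8)
w = np.linalg.solve(R, a_s)
w /= (w.conj() @ a_s)                # distortionless response toward a_s

sinr = np.abs(w.conj() @ a_s) ** 2 / np.real(w.conj() @ R @ w)
print(f"output SINR: {10 * np.log10(sinr):.1f} dB")
```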

Keywords: beamforming, interference, mutual information, wireless communications

Procedia PDF Downloads 278
2374 Mechanical Properties of Biological Tissues

Authors: Young June Yoon

Abstract:

We will present four different topics in estimating the mechanical properties of biological tissues. First, we elucidate the viscoelastic behavior of collagen molecules, whose diameter is a couple of nanometers. By using molecular dynamics simulation, we observed the viscoelastic behavior at different pulling velocities. Second, the protein layer, the so-called 'sheath' in the enamel microstructure, reduces the stress concentration in enamel minerals. We examined this result by using the finite element method. Third, the anisotropic elastic constants of dentin are estimated by micromechanical analysis, and the estimated results are close to the experimentally measured data. Last, a new formulation between the fabric tensor and the wave velocity is established for the calcaneus by employing poroelasticity. This formulation can be readily used for future experiments.

Keywords: tissues, mechanics, mechanical properties, wave propagation

Procedia PDF Downloads 368
2373 Numerical Analysis of a Strainer Using Porous Media Technique

Authors: Ji-Hoon Byeon, Kwon-Hee Lee

Abstract:

A strainer filter serves to block the inflow of impurities while mixed fluid enters or exits the piping. The filter of the strainer has a perforated structure, so a pressure drop and a velocity change necessarily occur when the mixed fluid passes through it. It is possible to predict the pressure drop and velocity change of the strainer by a numerical analysis that resolves all the perforations. However, if the size of the perforated plate exceeds a certain size, such an analysis becomes difficult to perform, and its accuracy cannot always be guaranteed. In this study, we tried to predict the pressure drop and velocity change by using the porous media technique to obtain an equivalent resistance without actually modeling the perforation shape of the strainer. ANSYS CFX, a commercial software, is used to perform the numerical analysis. The analysis procedure is as follows. Firstly, the unit pattern of the perforated plate is modeled, and the pressure drop is analyzed at varying velocities, with symmetry conditions on the wall surfaces. Secondly, since the pressure drop is a quadratic function of the (unknown) velocity, the viscous resistance and the inertial resistance of the perforated plate are obtained from the relationship between pressure and velocity. Thirdly, the calculated resistance values are substituted into a flat plate implemented as a two-dimensional porous medium, and the accuracy is verified by comparing the pressure drop and the velocity change. Fourthly, the pressure drop and velocity change in the whole strainer are analyzed by applying the resistance values obtained for the perforated plate to the actual whole-strainer model. Using the porous media technique, it is found that the pressure drop and velocity change can be predicted in a relatively short time without modeling the overall shape of the filter. Acknowledgements: This work was supported by the Valve Center from the Regional Innovation Center (RIC) Program of the Ministry of Trade, Industry & Energy (MOTIE).
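
The second step of the procedure can be sketched as follows: fit the quadratic pressure-velocity relation to the unit-pattern results and convert the coefficients into the viscous and inertial resistances of a Darcy-Forchheimer-type porous model. The sample data and plate thickness below are placeholders, not values from the study:

```python
import numpy as np

# Fit dp = A*v^2 + B*v to unit-pattern CFD results, then convert A and B
# to the inertial and viscous resistance coefficients of the porous model.
v = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # inlet velocities, m/s
dp = np.array([14.0, 52.0, 196.0, 760.0, 2990.0])  # pressure drops, Pa

A, B = np.linalg.lstsq(np.column_stack([v**2, v]), dp, rcond=None)[0]

mu, rho, thickness = 1.8e-5, 1.2, 0.003            # air; plate 3 mm (assumed)
# Darcy-Forchheimer form dp/L = (mu/K)*v + 0.5*rho*C2*v^2:
C2 = 2.0 * A / (rho * thickness)                   # inertial resistance, 1/m
inv_K = B / (mu * thickness)                       # viscous resistance 1/K, 1/m^2
print(f"C2 = {C2:.3g} 1/m, 1/K = {inv_K:.3g} 1/m^2")
```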

Keywords: strainer, porous media, CFD, numerical analysis

Procedia PDF Downloads 367
2372 Experimental Investigation on Tensile Durability of Glass Fiber Reinforced Polymer (GFRP) Rebar Embedded in High Performance Concrete

Authors: Yuan Yue, Wen-Wei Wang

Abstract:

The objective of this research is to comprehensively evaluate the impact of alkaline environments on the durability of Glass Fiber Reinforced Polymer (GFRP) reinforcements in concrete structures and further explore their potential value within the construction industry. Specifically, we investigate the effects of two widely used high-performance concrete (HPC) materials on the durability of GFRP bars embedded within them under varying temperature conditions. A total of 279 GFRP bar specimens were manufactured for microscopic and mechanical performance tests. Among them, 270 specimens were used to test the residual tensile strength after 120 days of immersion, while 9 specimens were utilized for microscopic testing to analyze degradation damage. SEM techniques were employed to examine the microstructure of the GFRP and the cover concrete. Unidirectional tensile strength experiments were conducted to determine the remaining tensile strength after corrosion. The experimental variables consisted of four types of concrete (engineered cementitious composite (ECC), ultra-high-performance concrete (UHPC), and two types of ordinary concrete with different compressive strengths) as well as three acceleration temperatures (20, 40, and 60°C). The experimental results demonstrate that HPC offers superior protection for GFRP bars compared to ordinary concrete. The two types of HPC enhance durability through different mechanisms: one by reducing the pH of the concrete pore fluid and the other by decreasing permeability. For instance, ECC improves the durability of embedded GFRP by lowering the pH of the pore fluid: after 120 days of immersion at 60°C under accelerated conditions, ECC (pH = 11.5) retained 68.99% of the strength, while PC1 (pH = 13.5) retained 54.88%. UHPC, on the other hand, enhances GFRP durability through the reduced porosity and greater compactness of the protective layer. Due to its fillers, UHPC typically exhibits lower porosity, higher density, and greater resistance to permeation than PC2, which has a similar pore-fluid pH; this results in different degrees of durability for GFRP bars embedded in UHPC and PC2 after 120 days of immersion at 60°C, with residual strengths of 66.32% and 60.89%, respectively. Furthermore, SEM analysis revealed no noticeable evidence of fiber deterioration in any examined specimen, suggesting that uneven stress distribution resulting from interface segregation and matrix damage, rather than fiber corrosion, is the primary cause of tensile strength reduction in GFRP. Moreover, long-term prediction models were utilized to calculate residual strength over time for reinforcement embedded in HPC under high-temperature, high-humidity conditions, demonstrating that approximately 75% of the initial strength is retained after 100 years of service.
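
The abstract does not specify the long-term prediction model; one common Arrhenius-type approach, sketched here with placeholder retention values (not the paper's data), fits a degradation rate at each accelerated-aging temperature and extrapolates to service conditions:

```python
import numpy as np

# Hedged sketch of an Arrhenius-type long-term prediction: estimate a
# degradation rate at each aging temperature, extrapolate the rate to a
# service temperature, then predict retention at 100 years.
T = np.array([20.0, 40.0, 60.0]) + 273.15     # aging temperatures, K
ret_120d = np.array([0.90, 0.80, 0.69])       # strength retention after 120 d

t_aging = 120.0 / 365.0                       # years
k = -np.log(ret_120d) / t_aging               # assume Y(t) = exp(-k t)

# Arrhenius: ln k = ln k0 - Ea/(R T)  ->  linear in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
T_service = 25.0 + 273.15
k_service = np.exp(intercept + slope / T_service)
print(f"predicted retention after 100 years: {np.exp(-k_service * 100):.2f}")
```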

Keywords: GFRP bars, HPC, degeneration, durability, residual tensile strength

Procedia PDF Downloads 55
2371 Computational Team Dynamics in Student New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Thus, the engineering curriculum in many schools, some collaboratively with business schools, has brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions. Significant grade weight is placed on the semester-long teamwork so that it is taken seriously by students. As the students work in teams and go through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team, go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user's needs. They also need to be very efficient in their teamwork as they work through the various stages of development of these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. The simultaneous requirement that teams be creative and at the same time converge and work together imposes different types of tensions on their interactions. These ideational and sometimes relational tensions and conflicts are inevitable, and effective teams must manage these dynamics to remain resilient and yet creative. This research paper provides a computational analysis of the teams' communication, which is reflective of the team dynamics, and, through a superimposition of latent semantic analysis with social network analysis, provides a computational methodology for arriving at visual interaction patterns. These team interaction patterns have clear correlations to the team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. 23 student NPD teams over 2 years of a course on Managing NPD, with a blend of engineering and business school students, are considered, and the results are presented. The analysis is also correlated with the teams' detailed and tailored individual and group feedback and self-reflection and evaluation questionnaires.
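
A minimal sketch of the superimposition idea, with invented messages and an assumed similarity threshold: project the team's messages into a latent semantic space, then build and inspect an interaction graph:

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical team messages; edge threshold is an assumption.
messages = {
    "ana":   "prototype sensor housing needs a new 3d print",
    "ben":   "i can redo the housing print and test the sensor fit",
    "chris": "market survey says users want a smaller device",
    "dana":  "smaller device means we rethink the sensor housing",
}

names = list(messages)
X = TfidfVectorizer().fit_transform(messages.values())
Z = TruncatedSVD(n_components=2).fit_transform(X)   # latent semantic space
S = cosine_similarity(Z)

# Build an interaction graph: connect members whose messages are
# semantically close, then inspect its structure.
G = nx.Graph()
G.add_nodes_from(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if S[i, j] > 0.5:
            G.add_edge(names[i], names[j], weight=round(float(S[i, j]), 2))

print(G.edges(data=True))
print("degree centrality:", nx.degree_centrality(G))
```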

Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams

Procedia PDF Downloads 114
2370 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

Evolutionary processes are not linear. Long periods of quiet and slow development turn to rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory explanation for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates summarize the proposed hypothesis: biological evolution as a natural origin and development of life is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and of the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. Greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume reaches its limit, and b) the biosphere's computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. It logically resolves many puzzling problems of current evolutionary theory: speciation, as a result of GM purposeful design; the evolutionary development vector, as a need for growing global intelligence; punctuated equilibrium and the Cambrian explosion, happening when the two conditions a) and b) above are met; and mass extinctions, happening when more intelligent species should replace outdated creatures.

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
2369 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition

Authors: M. Beusink, E. W. C. Coenen

Abstract:

The increased application of novel structural materials, such as high-grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first-order and second-order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to appropriately deal with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently at all considered scales. Although in many multiscale studies a planar condition has been employed, its impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies which are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
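
The macroscale plane-stress idea can be illustrated with a small sketch: starting from a full 3D homogenized stiffness (here a placeholder isotropic tensor standing in for an RVE-homogenized one), the plane-stress condition is enforced by statically condensing out the components that must carry zero out-of-plane stress:

```python
import numpy as np

# Build a placeholder isotropic 6x6 stiffness in Voigt notation
# (order 11, 22, 33, 23, 13, 12; engineering shear strains).
E, nu = 10e9, 0.3
lam = E * nu / ((1 + nu) * (1 - 2 * nu))
mu = E / (2 * (1 + nu))
C = np.diag([2 * mu] * 3 + [mu] * 3).astype(float)
C[:3, :3] += lam

# In-plane set a = (11, 22, 12); out-of-plane set b = (33, 23, 13)
# carries the zero-stress condition.
a, b = [0, 1, 5], [2, 3, 4]
Caa, Cab = C[np.ix_(a, a)], C[np.ix_(a, b)]
Cba, Cbb = C[np.ix_(b, a)], C[np.ix_(b, b)]

# Static condensation: sigma_b = 0  =>  eps_b = -Cbb^-1 Cba eps_a.
C_ps = Caa - Cab @ np.linalg.solve(Cbb, Cba)
print(C_ps / 1e9)   # GPa; recovers the classical plane-stress stiffness
```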

Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures

Procedia PDF Downloads 233
2368 Modeling the Human Harbor: An Equity Project in New York City, New York USA

Authors: Lauren B. Birney

Abstract:

The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the "Human Harbor" and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students' computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership's successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation's largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out of School Time (OST) programs. The BOP-CCERS Partnership is poised to develop a more refined focus on integrating computer science across the STEM domains; teaching industry-aligned computational methods and tools; and explicitly preparing students from the city's most under-resourced and underrepresented communities for upwardly mobile careers in NYC's ever-expanding "digital economy," in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project objectives include the following: 1. Computational Thinking (CT) Integration: integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and Science, Technology, Engineering, and Mathematics (STEM) industry professionals, alongside collaborative analysis and data collection. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner. New York City Department of Education (NYCDOE), Computer Science for All (CS4ALL) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development and working in tandem with the partnership. The BOP-CCERS Model™ also validates that, with appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation's largest urban public school system. This is evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts.
In all settings of the proposed Phase IV initiative, the primary beneficiary group will be NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. In particular, BOP-CCERS Phase IV will explicitly prepare these students for skilled positions within New York City's expanding digital economy, computer science, computational information systems, and innovative technology sectors.

Keywords: computer science, data science, equity, diversity and inclusion, STEM education

Procedia PDF Downloads 58
2367 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to enable beamforming, a major component of massive MIMO, through the transmission and reception of pilot sequences. All these benefits are only possible if the channel state information, i.e. the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE that has also been proposed in many research works. We have optimized these methods using a genetic algorithm to minimize the mean squared error and find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA works well on existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.
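
As a toy illustration of the GA mechanics (not the paper's exact operators), the sketch below mutates candidate channel matrices and selects on pilot-domain MSE, converging back toward the LS optimum of that objective; dimensions, noise level and GA settings are assumed values:

```python
import numpy as np

rng = np.random.default_rng(0)

nt, nr, npil = 4, 4, 16               # tx/rx antennas, pilot length
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
X = rng.normal(size=(nt, npil)) + 1j * rng.normal(size=(nt, npil))
N = 0.1 * (rng.normal(size=(nr, npil)) + 1j * rng.normal(size=(nr, npil)))
Y = H @ X + N                         # received pilots: Rayleigh channel + AWGN

H_ls = Y @ np.linalg.pinv(X)          # conventional least-squares estimate

def fit(Hc):                          # fitness: pilot-domain mean squared error
    return float(np.mean(np.abs(Y - Hc @ X) ** 2))

def mutant(parent, scale=0.02):
    return parent + scale * (rng.normal(size=(nr, nt)) +
                             1j * rng.normal(size=(nr, nt)))

pop = [mutant(H_ls, scale=0.2) for _ in range(20)]   # perturbed initial population
for _ in range(50):                   # elitist selection plus mutation
    best = min(pop, key=fit)
    pop = [best] + [mutant(best) for _ in range(19)]

best = min(pop, key=fit)
print("GA pilot MSE:", fit(best), " LS pilot MSE:", fit(H_ls))
```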

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 190
2366 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 97
2365 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation

Authors: Diandra Cristache

Abstract:

The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purpose of research. This paper explores the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. More than just providing a realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions, in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to respond to specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps). The computational process and the digital outputs are thoroughly illustrated and discussed. Finally, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, the other way round, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.

Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues

Procedia PDF Downloads 139
2364 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain

Authors: Engy S. El-Kayal, Mohamed M. S. Arafa

Abstract:

There are various causes of ankle pain (AP), both traumatic and non-traumatic. Various imaging techniques are available for the assessment of AP. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost and relatively limited availability of MRI systems, as well as the relatively long duration of the examination, are all considered disadvantages of MRI. Therefore, there is a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. HRU has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low-cost and readily available. US can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle. Results of both techniques were compared to surgical and arthroscopic findings. All patients were examined according to a defined protocol that includes imaging of tendon tears or tendinitis, muscle tears, masses or fluid collections, ligament sprains or tears, inflammation or fluid effusion within the joint or bursa, bone and cartilage lesions, erosions and osteophytes. Analysis of the results showed that the mean age of the patients was 38 years. The study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting the causes of AP was 85%, while the accuracy of MRI was 87.5%. In conclusion, HRU and MRI are two complementary tools of investigation, with the former used as a primary tool of investigation and the latter used to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.

Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)

Procedia PDF Downloads 188
2363 Computational System for the Monitoring Ecosystem of the Endangered White Fish (Chirostoma estor estor) in the Patzcuaro Lake, Mexico

Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez

Abstract:

White fish (Chirostoma estor estor) is an endemic species that inhabits Patzcuaro Lake, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. It has recently undergone an immense depopulation due to overfishing, contamination and eutrophication of the lake water, which may result in the extinction of this important species. This work proposes a new computational model for monitoring and assessment of critical environmental parameters of the white fish ecosystem. Using an Analytic Hierarchy Process, a mathematical model is built that assigns weights to each environmental parameter depending on its importance to water quality in the ecosystem. An advanced system for the monitoring, analysis and control of water quality is then built using the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition level of the water quality in the Chirostoma estor ecosystem (excellent, good, regular or poor), allowing effective decision making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth of this species, where the water quality tends toward regular levels. This system emerges as a suitable tool for water management, where future laws for white fish fishery regulation can lead to a reduction of the mortality rate in the early stages of development of the species, which represent the most critical phase. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and its inhabitants, since white fish is an important food and source of income for the region, but the species is endangered.
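
A minimal sketch of the AHP step, with invented comparison judgments and sub-scores: weights are taken from the principal eigenvector of a pairwise comparison matrix and combined into the global score:

```python
import numpy as np

# AHP sketch: derive parameter weights from a pairwise comparison matrix
# (Saaty scale) and combine them into a global water-quality score. The
# comparison judgments and per-parameter scores are illustrative only.
params = ["temperature", "pH", "dissolved oxygen"]
# A[i, j]: importance of parameter i relative to parameter j.
A = np.array([[1.0, 1/2, 1/3],
              [2.0, 1.0, 1/2],
              [3.0, 2.0, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                       # principal eigenvector -> weights

scores = np.array([0.8, 0.6, 0.5])    # per-parameter quality scores in [0, 1]
global_score = float(w @ scores)

levels = [(0.75, "excellent"), (0.5, "good"), (0.25, "regular"), (0.0, "poor")]
label = next(name for cut, name in levels if global_score >= cut)
print(f"global score {global_score:.2f} -> {label}")
```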

Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish

Procedia PDF Downloads 322
2362 High Thrust Upper Stage Solar Hydrogen Rocket Design

Authors: Maged Assem Soliman Mossallam

Abstract:

The conversion of a solar thruster model to an upper-stage hydrogen rocket is considered. Solar thrusters are usually categorized as low-to-moderate-thrust, high-specific-impulse systems. The current study proposes a different concept for such systems by increasing the thrust, which enables their use as upper-stage rockets and for future launching purposes. A computational model of the thruster subsystems is discussed. The first module relies on a ray-tracing technique to determine the solar power intercepted by the hydrogen combustion chamber. The cavity receiver is modeled using a finite volume technique. The final module passes the heated hydrogen properties to the nozzle using a quasi-one-dimensional simulation. The probability of shock wave formation inside the nozzle is almost eliminated, as the outlet pressure in the space environment tends to zero. The computational model relates the high-thrust hydrogen rocket conversion to the design parameters and operating conditions of the thruster. Three different designs for solar thruster systems are discussed. The first is a low-thrust, high-specific-impulse design that produces about 10 Newtons of thrust, the second produces about 250 Newtons, and the third about 1000 Newtons.
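
The quasi-one-dimensional nozzle module rests on the isentropic area-Mach relation; the sketch below solves it for the supersonic exit Mach number under an assumed area ratio and gas properties (not the paper's design values):

```python
import numpy as np
from scipy.optimize import brentq

g = 1.4                                   # ratio of specific heats (assumed)

def area_ratio(M):
    # A/A* for isentropic quasi-1D flow.
    return (1.0 / M) * ((2 / (g + 1)) * (1 + 0.5 * (g - 1) * M**2)) ** \
           ((g + 1) / (2 * (g - 1)))

AR = 25.0                                 # nozzle exit-to-throat area ratio
M_exit = brentq(lambda M: area_ratio(M) - AR, 1.001, 50.0)  # supersonic branch

T_ratio = 1.0 / (1 + 0.5 * (g - 1) * M_exit**2)             # T/T0
p_ratio = T_ratio ** (g / (g - 1))                          # p/p0
print(f"M_exit = {M_exit:.2f}, T/T0 = {T_ratio:.3f}, p/p0 = {p_ratio:.2e}")
```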

Keywords: space propulsion, hydrogen rocket, thrust, specific impulse

Procedia PDF Downloads 164
2361 Numerical Investigation of Multiphase Flow in Pipelines

Authors: Gozel Judakova, Markus Bause

Abstract:

We present and analyze reliable numerical techniques for simulating complex flow and transport phenomena related to natural gas transportation in pipelines. Such problems are of high interest in the fields of petroleum and environmental engineering. Modeling and understanding natural gas flow and transformation processes during transportation is important for the sake of physical realism and for the design and operation of pipeline systems. In our approach, a two-fluid flow model based on a system of coupled hyperbolic conservation laws is considered for describing natural gas flow undergoing hydratization. The accurate numerical approximation of two-phase gas flow remains a subject of strong interest in the scientific community. Such hyperbolic problems are characterized by solutions with steep gradients or discontinuities, and their approximation by standard finite element techniques typically gives rise to spurious oscillations and numerical artefacts. Recently, stabilized and discontinuous Galerkin finite element techniques have attracted researchers' interest. They are highly adapted to the hyperbolic nature of our two-phase flow model. In the presentation, a streamline upwind Petrov-Galerkin approach and a discontinuous Galerkin finite element method for the numerical approximation of our flow model of two coupled systems of Euler equations are presented. The efficiency and reliability of stabilized continuous and discontinuous finite element methods for the approximation are then carefully analyzed, and the potential of either class of numerical schemes is investigated. In particular, standard benchmark problems of two-phase flow, such as the shock tube problem, are used for the comparative numerical study.
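
For orientation, the shock tube benchmark mentioned above can be reproduced with a very simple first-order finite-volume scheme (Rusanov flux) for the single-phase 1D Euler equations; this is only a reference solution for the benchmark, not the SUPG or discontinuous Galerkin schemes analyzed in the paper:

```python
import numpy as np

gamma = 1.4
N = 400
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]

# Sod initial data: (rho, u, p) = (1, 0, 1) left, (0.125, 0, 0.1) right.
rho = np.where(x < 0.5, 1.0, 0.125)
u = np.zeros(N)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, u * (E + p)])

t, t_end, cfl = 0.0, 0.2, 0.45
while t < t_end:
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    c = np.sqrt(gamma * p / rho)
    dt = min(cfl * dx / np.max(np.abs(u) + c), t_end - t)
    F = flux(U)
    # Rusanov (local Lax-Friedrichs) numerical flux at cell interfaces.
    s = np.maximum((np.abs(u) + c)[:-1], (np.abs(u) + c)[1:])
    Fnum = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * s * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fnum[:, 1:] - Fnum[:, :-1])
    t += dt

print("density at x = 0.5:", U[0, N // 2])
```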

Keywords: discontinuous Galerkin method, Euler system, inviscid two-fluid model, streamline upwind Petrov-Galerkin method, two-phase flow

Procedia PDF Downloads 327
2360 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources

Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha

Abstract:

Training a language model for a minority language is a challenging task. The lack of available corpora with which to train and fine-tune state-of-the-art language models remains a challenge in the area of Natural Language Processing (NLP). Moreover, the need for high computational resources and bulk data limits the attainment of this task. In this paper, we present the following contributions: (1) we introduce and use a translation pair set of Tagalog and English (TL-EN) in pre-training a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking pre-trained semantic textual similarity binary task (STSB) models on both TL-EN and STS dataset pairs; and (3) we reduce the size of the model to offset the need for high computational resources. Based on our results, the models that were pre-trained on translation pairs and STS pairs perform well on the STSB task. Also, reducing the model to a smaller dimension has no negative effect on performance; rather, it yields a notable increase in the similarity scores. Moreover, models that were pre-trained on a similar dataset have a tremendous effect on the model's performance scores.
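
The evaluation idea, checking that dimension reduction does not hurt pair similarity scores, can be sketched as follows; TF-IDF vectors with truncated SVD stand in for the transformer embeddings, and the sentence pairs are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical sentence pairs for an STS binary task (1 = similar, 0 = not).
pairs = [
    ("The cat sits on the mat.", "A cat is sitting on a mat.", 1),
    ("He plays the guitar.", "The stock market fell today.", 0),
]

sentences = [s for a, b, _ in pairs for s in (a, b)]
X = TfidfVectorizer().fit_transform(sentences)

def pair_scores(M):
    # Cosine similarity between the two sentences of each pair.
    return [float(cosine_similarity(M[2*i:2*i+1], M[2*i+1:2*i+2])[0, 0])
            for i in range(len(pairs))]

print("full-dimension scores:", pair_scores(X))

# Dimension reduction: project the vectors onto a small latent space and
# compare the resulting similarity scores against the full-dimension ones.
X_small = TruncatedSVD(n_components=2).fit_transform(X)
print("reduced-dimension scores:", pair_scores(X_small))
```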

Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models

Procedia PDF Downloads 209
2359 Conventional and Computational Investigation of the Synthesized Organotin(IV) Complexes Derived from o-Vanillin and 3-Nitro-o-Phenylenediamine

Authors: Harminder Kaur, Manpreet Kaur, Akanksha Kapila, Reenu

Abstract:

A Schiff base with the general formula H₂L was derived from the condensation of o-vanillin and 3-nitro-o-phenylenediamine. This Schiff base was used for the synthesis of organotin(IV) complexes with the general formula R₂SnL [R = phenyl or n-octyl] using equimolar quantities. Elemental analysis, UV-Vis, FTIR, and multinuclear NMR spectroscopy (¹H, ¹³C, and ¹¹⁹Sn) were carried out to characterize the synthesized complexes. These complexes were coloured and soluble in polar solvents. Computational studies have been performed to obtain details of the geometry and electronic structure of the ligand as well as the complexes. The geometries of the ligand and complexes were optimized at the Density Functional Theory level with B3LYP/6-311G(d,p) and B3LYP/MPW1PW91, respectively, followed by vibrational frequency analysis using Gaussian 09. The observed ¹¹⁹Sn NMR chemical shifts of one of the synthesized complexes indicated tetrahedral geometry around the tin atom, which is also confirmed by DFT. The HOMO-LUMO energy distribution was calculated. FTIR, ¹H NMR and ¹³C NMR spectra were also obtained theoretically using DFT. Further, IRC calculations were employed to determine the transition state of the reaction and to obtain theoretical information about the reaction pathway. Moreover, molecular docking studies can be explored to assess the anticancer activity of the newly synthesized organotin(IV) complexes.

Keywords: DFT, molecular docking, organotin(IV) complexes, o-vanillin, 3-nitro-o-phenylenediamine

Procedia PDF Downloads 159
2358 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the characteristics of the square watermelon and its production limitations. The performance of the mechanical force gauge and the product itself are also described. There are three main gauge designs: a. hydraulic gauge, b. strain gauge, and c. mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure, and thus the force, exerted by the melon. However, considering its inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leakage into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate the pressure from the force measured directly by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; however, because the working range of the strain gauge does not conform to watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has advantages including: the ability to measure forces and pressures on the mold surface during melon growth; the ability to display the peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the required physical conditions of melon growth; high air-conditioning capability; the ability to permit sunlight to reach the melon rind (no yellowing of the skin or quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual-check capability of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; low costs; and so forth.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 272
2357 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires numerous years of clinical testing, as well as substantial funding, for a single drug to earn FDA approval. Even for drugs that make it this far in the process, there is a very slim chance of receiving FDA approval, creating detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug's lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results expressed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of the time and resources currently restricting drug accessibility.
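
A minimal sketch of one plausible pipeline of this kind, using RDKit's Crippen logP as the lipophilicity descriptor and a random forest trained on placeholder affinity data (not the paper's model or data):

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: SMILES strings with invented binding
# affinities (e.g., pKd). Values are placeholders, not real measurements.
smiles = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
affinity = [4.2, 5.1, 6.3, 4.8]

def featurize(smi):
    mol = Chem.MolFromSmiles(smi)
    # Lipophilicity (Crippen logP) alongside a few simple descriptors.
    return [Descriptors.MolLogP(mol), Descriptors.MolWt(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol)]

X = np.array([featurize(s) for s in smiles])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, affinity)

candidate = "CC(C)Cc1ccc(cc1)C(C)C(=O)O"  # ibuprofen, as an example query
print("predicted affinity:", model.predict([featurize(candidate)])[0])
```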

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 108
2356 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking

Authors: Peter U. Eze, P. Udaya, Robin J. Evans

Abstract:

Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one application where the embedded patient record needs to be extracted with accuracy and the integrity of the medical image verified. In this research paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of all watermarked sub-blocks with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates from p by more than a small value, ε. The major features of the new scheme include: (1) improved watermark detection accuracy for high-pixel-depth medical images, reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.
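
A numerical sketch of the embedding and detection rules described above, with assumed key values; the sign of the correlation (±p) is taken, as an assumption, to carry the watermark bit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Secret key components (assumed values for illustration):
n = 64                                 # sub-block size (8x8, flattened)
W = rng.choice([-1.0, 1.0], size=n)    # spreading code
p = 2.0                                # target constant correlation
eps = 0.05                             # tamper tolerance

def embed_bit(block, bit):
    """Shift the block along W so its correlation with W becomes +p or -p."""
    target = p if bit == 1 else -p
    c0 = np.mean(block * W)
    return block + (target - c0) * W   # mean(W*W) == 1 for a +/-1 code

def detect(block):
    """Recover the bit and flag tampering if |corr| deviates from p."""
    c = np.mean(block * W)
    bit = 1 if c > 0 else 0
    tampered = abs(abs(c) - p) > eps
    return bit, tampered

block = rng.normal(128.0, 20.0, size=n)   # stand-in for image pixels
wm = embed_bit(block, 1)
print(detect(wm))                      # (1, False): bit recovered, intact
wm[:16] = 0.0                          # simulate a tampered region
print(detect(wm))                      # correlation shifts: sub-block flagged
```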

Keywords: Constant Correlation, Medical Image, Spread Spectrum, Tamper Detection, Watermarking

Procedia PDF Downloads 192
2355 Effects of Front Porch and Loft on Indoor Ventilation in the Renewal of Beijing Courtyard

Authors: Zhongzhong Zeng, Zichen Liang

Abstract:

In recent years, Beijing courtyards have faced the problem of renewal and renovation, and their residents face problems such as small house areas, large household sizes, and old and dangerous houses. Among the many renovation methods, the authors note two common practices: enclosing the front porch to expand the floor area and adding a loft. Residents and architects, however, have given little consideration to indoor ventilation performance before beginning such remodeling, even though ventilation is crucial to the indoor environmental quality of a home. The aim of this article is to explore the positive or negative impacts of front porch and loft structures on indoor ventilation in the courtyard. The major method utilized in this study is comparative analysis: the authors create four alternative house models, with the presence of a front porch and of a loft as the two variables, and examine indoor ventilation using the CFD (Computational Fluid Dynamics) technique. Analysis of the sectional airflow and of the contours at a height of 1.5 m shows that the loft, to a certain extent, disrupts the airflow organization of the building and makes the high windows on the rear wall less effective. Enclosing the front porch as part of the house has no significant effect on ventilation, but enclosing the front porch and adding a loft at the same time should be avoided in building renovations. The findings of this study led to the following recommendations: strive to preserve the courtyard building's original architectural design and adjust only the inappropriate elements or constructions. Ventilation in the loft is inadequate, and the inhabitants typically use the loft as a living area; this may make the building rely more on air conditioning in the summer, which would raise energy demand. The front porch serves as a transition place as well as a source of shade, weather protection and indoor ventilation. In conclusion, the examination of interior environments in future studies should concentrate on cross-disciplinary, multi-angle and multi-level research topics.

Keywords: Beijing courtyard renewal, CFD, indoor environment, ventilation analysis

Procedia PDF Downloads 80
2354 Numerical Simulation of the Fractional Flow Reserve in the Coronary Artery with Serial Stenoses of Varying Configuration

Authors: Mariia Timofeeva, Andrew Ooi, Eric K. W. Poon, Peter Barlis

Abstract:

Atherosclerotic plaque build-up, commonly known as stenosis, limits blood flow and hence the oxygen and nutrient supply to the heart muscle. Thus, assessment of its severity is of great interest to health professionals. Numerically simulated fractional flow reserve (FFR) has proved to correlate well with invasively measured FFR, which is used for the physiological assessment of the severity of coronary stenosis. Atherosclerosis may affect the diseased artery at several locations, causing serial stenoses, a complicated subset of coronary artery disease that requires careful treatment planning. However, the hemodynamics of sequential serial stenoses in coronary arteries has not been extensively studied. The hemodynamics of serial stenoses is complex because the stenoses in the series interact and affect the flow through each other. To address this, serial stenoses in a 3.4 mm left anterior descending (LAD) artery are examined in this study. Two diameter stenoses (DS) are considered, 30 and 50 percent of the reference diameter. The serial stenosis configurations are divided into three groups based on the order of the stenoses in the series, the spacing between them, and the deviation of the stenoses' symmetry (eccentricity). A patient-specific pulsatile waveform is used in the simulations. Blood flow within the stenotic artery is assumed to be laminar, Newtonian, and incompressible. Results for the FFR are reported. Based on the simulation results, a larger pressure drop (a smaller FFR value) is expected when the percentage of the second stenosis in the series is larger. Varying the distance between the stenoses affects the location of the maximum pressure drop, while the minimal FFR in the artery remains unchanged. Eccentric serial stenoses are characterized by a noticeably larger pressure decrease through the stenoses and by the development of chaotic flow downstream of the stenoses. The largest pressure drop (about 4% different from the axisymmetric case) is obtained for serial stenoses in which both stenoses are highly eccentric, with centerlines deflected to opposite sides of the LAD. In conclusion, varying the configuration of sequential serial stenoses results in a different distribution of FFR through the LAD. The results presented in this study provide insight into the clinical assessment of the severity of coronary serial stenoses, which is shown to depend on the relative position of the stenoses and the deviation of the stenoses' symmetry.
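
For reference, FFR is computed from the simulated pressure fields as the cycle-averaged ratio of distal to proximal pressure; the sketch below uses synthetic pressure traces as placeholders for solver output:

```python
import numpy as np

# FFR = mean distal pressure (Pd) / mean proximal (aortic) pressure (Pa)
# over the cardiac cycle. The traces below are synthetic placeholders.
t = np.linspace(0.0, 1.0, 1000)            # one cardiac cycle, s
Pa = 93.0 + 15.0 * np.sin(2 * np.pi * t)   # proximal pressure, mmHg
Pd = 0.78 * Pa - 2.0 * np.sin(2 * np.pi * t) ** 2  # pressure distal to stenoses

ffr = np.trapz(Pd, t) / np.trapz(Pa, t)    # cycle-averaged ratio
print(f"FFR = {ffr:.2f}")                  # values below ~0.80 are commonly
                                           # treated as hemodynamically significant
```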

Keywords: computational fluid dynamics, coronary artery, fractional flow reserve, serial stenoses

Procedia PDF Downloads 182
2353 Multiple Organ Manifestation in Neonatal Lupus Erythematous: Report of Two Cases

Authors: A. Lubis, R. Widayanti, Z. Hikmah, A. Endaryanto, A. Harsono, A. Harianto, R. Etika, D. K. Handayani, M. Sampurna

Abstract:

Neonatal lupus erythematous (NLE) is a rare disease marked by clinical characteristics and specific maternal autoantibodies. Cutaneous, cardiac, hepatic, and hematological manifestations may occur, affecting a single organ or several. In the cases reported here, both babies were premature, of low birth weight (LBW), small for gestational age (SGA) and born through caesarean section to mothers with systemic lupus erythematous (SLE). In the first case, we found a baby girl with dyspnea and grunting. A chest X-ray showed respiratory distress syndrome (RDS) grade I, and echocardiography showed a small atrial septal defect (ASD) and ventricular septal defect (VSD). She also developed anemia, thrombocytopenia, elevated C-reactive protein, hypoalbuminemia, increased coagulation factors, hyperbilirubinemia, and a blood culture positive for Klebsiella pneumoniae. Anti-Ro/SSA and anti-nRNP/Sm were positive. Intravenous fluid, an antibiotic, and transfusions of blood, thrombocyte concentrate, and fresh frozen plasma were given. The second baby, a male, presented with necrotic tissue on the left ear and skin rashes, erythematous maculae, atrophic scarring, and hyperpigmentation of various sizes over his whole body, along with facial haemorrhage. He also suffered from thrombocytopenia, mildly elevated transaminase enzymes and hyperbilirubinemia; anti-Ro/SSA was positive. Intravenous fluid, methylprednisolone, intravenous immunoglobulin (IVIG), and blood and thrombocyte concentrate transfusions were given. Two cases of neonatal lupus erythematous have been presented. Diagnosis is based on the clinical presentation and maternal autoantibodies in the neonate. Organ involvement in NLE can occur as a single manifestation or as multiple manifestations.

Keywords: neonatal lupus erythematous, maternal autoantibody, clinical characteristic, multiple organ manifestation

Procedia PDF Downloads 421
2352 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade

Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim

Abstract:

Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the façade with performance in mind can improve the performance of the building and lower its maintenance cost. In general, however, optimizing two or more performance factors confronts the limits of time and computational tools. The optimization phase typically repeats a series of processes, generating alternatives and analyzing them, until the desired performance is achieved. In particular, as geometric complexity or precision increases, the computational resources and time required to find the desired performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristics learned through experimentation and experience can reduce wasted resources. This study proposes and verifies a method that applies machine learning to the design geometry and quantitative performance in order to optimize the double envelope of a building composed of perforated panels. The proposed method achieves the required performance with fewer resources, supplementing the existing approach, which cannot efficiently analyze the complex shape of the perforated panel.

Keywords: building envelope, machine learning, perforated metal, multi-factor optimization, façade

Procedia PDF Downloads 222
2351 Application of GA Optimization in Analysis of Variable Stiffness Composites

Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani

Abstract:

Variable angle tow (VAT) describes fibres that are curvilinearly steered within a composite lamina, which significantly enlarges the stiffness tailoring freedom of VAT composite laminates. Composite structures with curvilinear fibres have been shown to improve buckling load-carrying capability in contrast with straight-fibre laminates. However, the optimal design and analysis of VAT face high computational effort due to the increasing number of variables. In this article, an efficient optimization approach is used in combination with the 1D Carrera Unified Formulation (CUF) to investigate the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on the LE-based CUF models, which use Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load is considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, in comparison with the results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach combined with the optimization demonstrates the robustness and computational efficiency of the proposed methodology.

Keywords: beam structures, layerwise, optimization, variable stiffness

Procedia PDF Downloads 141
2350 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

The current work shows the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research explores unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological and creative contexts, expanding epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy clustering is then applied to the artificially created GAN data to define narrative transitions built on membership indices. This process allowed the creation of simple and complex spaces with expressive capabilities, with position and light intensity as the parameters that guide the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools such as those proposed in this work are well suited to redefine collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found GANs and fuzzy logic to be ideal tools for developing new computational models based on interaction, learning, emotion, and imagination that expand the traditional algorithmic model of computation.
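
A minimal NumPy sketch of the fuzzy c-means step can make the "membership index" idea concrete. Everything here is a hypothetical stand-in: synthetic (position, intensity) samples take the place of the GAN-generated lighting states, and the cluster count and fuzzifier are assumed values.

```python
# Fuzzy c-means on (position, intensity) samples standing in for GAN output.
# The resulting membership matrix U assigns each lighting state partial
# membership in several clusters; transitions can be cued on those indices.
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((500, 2))        # columns: normalised position, light intensity
C, M, ITERS = 4, 2.0, 100       # clusters, fuzzifier, iterations (assumed)

U = rng.dirichlet(np.ones(C), size=len(X))      # random fuzzy memberships
for _ in range(ITERS):
    W = U ** M
    centers = (W.T @ X) / W.sum(axis=0)[:, None]            # weighted means
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
    U = 1.0 / (d ** (2 / (M - 1))
               * np.sum(d ** (-2 / (M - 1)), axis=1, keepdims=True))

print("cluster centers:\n", centers)
print("memberships of first sample:", U[0])
```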

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 142
2349 Nelder-Mead Parametric Optimization of Elastic Metamaterials with Artificial Neural Network Surrogate Model

Authors: Jiaqi Dong, Qing-Hua Qin, Yi Xiao

Abstract:

Some of the most fundamental challenges in optimizing elastic metamaterials (EMMs) can be attributed to the high consumption of computational power resulting from finite element analysis (FEA) simulations, which renders the optimization process inefficient. Furthermore, due to the inherent mesh dependence of FEA, minuscule geometry features, which often emerge during the later stages of optimization, induce very fine elements, resulting in enormously high time consumption, particularly when repeated solutions are needed to compute the objective function. In this study, a surrogate modelling algorithm is developed to reduce computational time in the structural optimization of EMMs. The surrogate model is constructed on a multilayer feedforward artificial neural network (ANN) architecture, trained with eigenfrequency data prepopulated from FEA simulations and optimized through regime selection with a genetic algorithm (GA) to improve its accuracy in predicting the location and width of the primary elastic band gap. With the optimized ANN surrogate at the core, a Nelder-Mead (NM) algorithm is established and its performance inspected in comparison to the FEA solution. The ANN-NM model shows remarkable accuracy in predicting the band-gap width and reduces time consumption by 47%.
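
The surrogate-plus-search coupling can be sketched in a few lines. This is not the authors' implementation: the training data, network size, and the stand-in band_gap_width function are placeholders for their FEA eigenfrequency pipeline, shown only to make the ANN-NM pattern concrete.

```python
# ANN surrogate + Nelder-Mead search. The NM iterations query the cheap
# surrogate instead of re-meshing and re-solving an FEA model at each step.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

def band_gap_width(x):
    # Stand-in for FEA: band-gap width as a function of two geometry
    # parameters (purely illustrative).
    return np.exp(-((x[..., 0] - 0.6) ** 2 + (x[..., 1] - 0.3) ** 2) / 0.05)

X = rng.random((1000, 2))                       # sampled geometries
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
ann.fit(X, band_gap_width(X))                   # train surrogate offline

# Nelder-Mead maximises the surrogate prediction (minimise its negative).
res = minimize(lambda x: -ann.predict(x.reshape(1, -1))[0],
               x0=[0.5, 0.5], method="Nelder-Mead")
print("optimal geometry:", res.x, "predicted gap width:", -res.fun)
```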

Keywords: artificial neural network, machine learning, mechanical metamaterials, Nelder-Mead optimization

Procedia PDF Downloads 126
2348 Cosmic Dust as Dark Matter

Authors: Thomas Prevenslik

Abstract:

Weakly Interacting Massive Particle (WIMP) experiments suggesting that dark matter does not exist are consistent with the argument that the long-standing galaxy rotation problem may be resolved without dark matter, provided the redshift measurements giving the higher-than-expected galaxy velocities are corrected for the redshift occurring in cosmic dust. Because of the ubiquity of cosmic dust, all velocity measurements in astronomy based on redshift are most likely overstated; e.g., an accelerating expansion of the Universe need not exist if the data showing supernovae brighter than expected from the redshift/distance relation are corrected for the redshift in dust. Extensions of redshift corrections for cosmic dust to other historical astronomical observations are briefly discussed.
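
The quantitative core of the correction can be illustrated with the standard rule that redshifts along a line of sight compose multiplicatively; this is an illustration of the claim's arithmetic, not the author's derivation, and the dust term is the paper's hypothesis.

```latex
% If part of the observed redshift arises in intervening dust, the
% cosmological part (and hence the inferred recession velocity) is
% smaller than assumed:
\[
(1 + z_{\mathrm{obs}}) = (1 + z_{\mathrm{cosmo}})(1 + z_{\mathrm{dust}})
\quad\Longrightarrow\quad
z_{\mathrm{cosmo}} = \frac{1 + z_{\mathrm{obs}}}{1 + z_{\mathrm{dust}}} - 1,
\qquad v \approx c\, z_{\mathrm{cosmo}} \ \ (z \ll 1).
\]
```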

Keywords: alternative theories, cosmic dust redshift, doppler effect, quantum mechanics, quantum electrodynamics

Procedia PDF Downloads 296
2347 Iterative Dynamic Programming for 4D Flight Trajectory Optimization

Authors: Kawser Ahmed, K. Bousson, Milca F. Coelho

Abstract:

4D flight trajectory optimization is one of the key ingredients for improving flight efficiency and enhancing air traffic capacity in current air traffic management (ATM). The present paper explores iterative dynamic programming (IDP) as a numerical optimization method for 4D flight trajectory optimization. IDP is an iterative version of the dynamic programming (DP) method. Owing to its numerical framework, DP is well suited to nonlinear discrete dynamic systems, and the 4D waypoint representation of a flight trajectory resembles discretization by a grid system; thus DP is a natural method for 4D flight trajectory optimization. However, the computational time and space complexity demanded by DP are enormous, because of the immense number of grid points required to find the optimum, which prevents the use of DP in many practical high-dimensional problems. IDP, on the other hand, has shown potential to handle high-dimensional optimal control problems successfully even with a small number of grid points at each stage, which reduces the computational effort compared with the traditional DP approach. Although IDP has been applied successfully to chemical engineering problems, it has yet to be validated on 4D flight trajectory optimization problems. In this paper, IDP is successfully used to generate a minimum-length 4D optimal trajectory that avoids any obstacle in its path, such as a no-fly zone, or residential areas when flying at low altitude to reduce noise pollution.
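
The region-contraction idea behind IDP can be sketched on a 2D slice of the 4D problem: run DP on a coarse candidate grid per waypoint stage, then re-center and shrink the grids around the best path and repeat. This is a hedged illustration under assumed parameters, with a circular region standing in for the no-fly zone, not the paper's formulation.

```python
# IDP sketch: backward DP over small per-stage waypoint grids, followed by
# contraction of the search region around the incumbent best path.
import numpy as np

START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 0.0])
OBST_C, OBST_R = np.array([5.0, 0.0]), 1.5      # circular no-fly zone
STAGES, N_CAND = 8, 7                           # waypoint stages, grid pts/stage

def cost(a, b):
    # Segment length plus a large penalty if the midpoint enters the zone.
    mid = 0.5 * (a + b)
    penalty = 1e6 if np.linalg.norm(mid - OBST_C) < OBST_R else 0.0
    return np.linalg.norm(b - a) + penalty

xs = np.linspace(START[0], GOAL[0], STAGES + 2)[1:-1]   # fixed x-stations
centers, span = np.zeros(STAGES), 6.0
for it in range(6):                              # IDP region-contraction loop
    grids = [np.stack([np.full(N_CAND, x),
                       c + np.linspace(-span, span, N_CAND)], axis=1)
             for x, c in zip(xs, centers)]
    value = np.array([cost(p, GOAL) for p in grids[-1]])    # backward DP
    choice = []
    for k in range(STAGES - 2, -1, -1):
        step = np.array([[cost(p, q) for q in grids[k + 1]] for p in grids[k]])
        choice.append(np.argmin(step + value, axis=1))
        value = np.min(step + value, axis=1)
    first = np.argmin([cost(START, p) + v for p, v in zip(grids[0], value)])
    idx, path = first, [grids[0][first]]         # recover incumbent path
    for k, ch in enumerate(reversed(choice)):
        idx = ch[idx]
        path.append(grids[k + 1][idx])
    centers = np.array([p[1] for p in path])     # re-center grids on it
    span *= 0.7                                  # contract the search region
print("waypoints (x, y):")
print(np.round(np.vstack([START, path, GOAL]), 2))
```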

Keywords: 4D waypoint navigation, iterative dynamic programming, obstacle avoidance, trajectory optimization

Procedia PDF Downloads 160