Search results for: computational electromagnetic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2492

1832 Development of DEMO-FNS Hybrid Facility and Its Integration in Russian Nuclear Fuel Cycle

Authors: Yury S. Shpanskiy, Boris V. Kuteev

Abstract:

Development of a fusion-fission hybrid facility based on the superconducting conventional tokamak DEMO-FNS has been underway in Russia since 2013. The main design goal is to demonstrate the technical feasibility and outline the prospects of industrial hybrid technologies providing the production of neutrons, fuel nuclides, tritium, high-temperature heat, electricity and subcritical transmutation in Fusion-Fission Hybrid Systems. The facility should operate in a steady-state mode at a fusion power of 40 MW and a fission power of 400 MW. The major tokamak parameters are the following: major radius R=3.2 m, minor radius a=1.0 m, elongation 2.1, triangularity 0.5. The design provides a neutron wall loading of ~0.2 MW/m², a lifetime neutron fluence of ~2 MWa/m², with the surface area of the active cores and tritium breeding blanket ~100 m². Core plasma modelling showed that the neutron yield of ~10¹⁹ n/s is maximal if the tritium/deuterium density ratio is 1.5-2.3. The design of the electromagnetic system (EMS) defined its basic parameters, accounting for coil strength and stability, and identified the most problematic nodes in the toroidal field coils and the central solenoid. The EMS generates the toroidal, poloidal and correcting magnetic fields necessary for plasma shaping and confinement inside the vacuum vessel. The EMS consists of eighteen superconducting toroidal field coils, eight poloidal field coils, five sections of a central solenoid, correction coils, and in-vessel coils for vertical plasma control. Supporting structures, the thermal shield, and the cryostat maintain its operation. The EMS operates with pulse durations of up to 5000 hours at plasma currents of up to 5 MA. The vacuum vessel (VV) is an all-welded two-layer toroidal shell placed inside the EMS. The free space between the vessel shells is filled with water and boron steel plates, which form the neutron protection of the EMS. The VV volume is 265 m³, and its mass with manifolds is 1800 tons. The nuclear blanket of the DEMO-FNS facility was designed to provide the functions of minor actinide transmutation, tritium production and enrichment of spent nuclear fuel. Vertical reloading of the subcritical active cores with minor actinides (MA) was chosen as the prospective option. Analysis of the device neutronics and the hybrid blanket thermal-hydraulic characteristics has been performed for the system with functions covering transmutation of minor actinides, production of tritium and enrichment of spent nuclear fuel. A study of the role of FNS facilities in the Russian closed nuclear fuel cycle was performed. It showed that over ~100 years of operation, three FNS facilities with a fission power of 3 GW, controlled by a fusion neutron source with a power of 40 MW, can burn 98 tons of minor actinides, and 198 tons of Pu-239 can be produced for the startup loading of 20 fast reactors. Instead of Pu-239, up to 25 kg of tritium per year may be produced for the startup of fusion reactors by using blocks with lithium orthosilicate instead of fissile breeder blankets.

Keywords: fusion-fission hybrid system, conventional tokamak, superconducting electromagnetic system, two-layer vacuum vessel, subcritical active cores, nuclear fuel cycle

Procedia PDF Downloads 147
1831 Magnet Position Variation of the Electromagnetic Actuation System in a Torsional Scanner

Authors: Loke Kean Koay, Mani Maran Ratnam

Abstract:

A mechanically-resonant torsional spring scanner was developed in a recent study. Various methods were developed to improve the angular displacement of the scanner while maintaining the scanner frequency. However, the effects of the rotor magnet's radial position on the scanner characteristics were not well investigated. In this study, the relationships between the magnet position and scanner characteristics such as natural frequency, angular displacement and stress level were studied. A finite element model was created, and an average deviation of 3.18% was found between the simulation and experimental results, qualifying the simulation results as a guide for further investigations. Three magnet positions on the transverse oscillating suspended plate were investigated by finite element analysis (FEA), and one of the positions was selected as the design position. The magnet position farthest from the twist axis of the mirror was selected since it attains the minimum stress level while exceeding the minimum critical flicker frequency and delivering the targeted angular displacement to the scanner.

Keywords: torsional scanner, design optimization, computer-aided design, magnet position variation

Procedia PDF Downloads 366
1830 Sensitivity of the Estimated Output Energy of the Induction Motor to both the Asymmetry Supply Voltage and the Machine Parameters

Authors: Eyhab El-Kharashi, Maher El-Dessouki

Abstract:

The paper is dedicated to precise assessment of the induction motor output energy during unbalanced operation. For many years, and until now, the complex voltage unbalance factor (CVUF) has been used alone to assess the output energy of the induction motor, although this output energy under asymmetric supply voltage depends not only on the degree of voltage unbalance but also on the machine parameters. The paper illustrates the variation of the two unbalance factors, the complex voltage unbalance factor (CVUF) and the impedance unbalance factor (IUF), with the positive sequence voltage component, revealing the degree and manner of unbalance in the supply voltage. From this point of view, the paper introduces the current unbalance factor (CUF) to reflect the output energy during unbalanced operation exactly. The paper proceeds to illustrate the importance of using this factor in multi-machine systems for precise prediction of the output energy during unbalanced operation. The use of the proposed unbalance factor (CUF) avoids the accumulation of error due to more than one machine in the system, which is expected if only the complex voltage unbalance factor (CVUF) is used.
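
As context for the unbalance factors discussed above, the sketch below (an illustration only, not the authors' derivation) shows how a complex unbalance factor can be computed from three phasors via the Fortescue symmetrical-component transform; applied to the phase voltages it gives the CVUF, and applied to the phase currents it gives the CUF.

```python
import numpy as np

def unbalance_factor(pa, pb, pc):
    """Complex unbalance factor (negative over positive sequence) of three phasors.

    Pass phase voltages for the CVUF or phase currents for the CUF.
    """
    a = np.exp(2j * np.pi / 3)                # Fortescue operator
    pos = (pa + a * pb + a**2 * pc) / 3       # positive-sequence component
    neg = (pa + a**2 * pb + a * pc) / 3       # negative-sequence component
    return neg / pos

# Example with a slightly unbalanced supply (placeholder magnitudes in volts)
Va = 400 * np.exp(0j)
Vb = 380 * np.exp(-2j * np.pi / 3)
Vc = 410 * np.exp(2j * np.pi / 3)
cvuf = unbalance_factor(Va, Vb, Vc)
print(f"CVUF = {abs(cvuf):.4f} at {np.degrees(np.angle(cvuf)):.1f} degrees")
```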

Keywords: induction motor, electromagnetic torque, voltage unbalance, energy conversion

Procedia PDF Downloads 557
1829 Aerodynamic Design of a Light Long Range Blended Wing Body Unmanned Vehicle

Authors: Halison da Silva Pereira, Ciro Sobrinho Campolina Martins, Vitor Mainenti Leal Lopes

Abstract:

Long range performance is a goal of aircraft configuration optimization. The Blended Wing Body (BWB) is presented in much of the literature as the most aerodynamically efficient design for a fixed-wing aircraft. Because of its high weight-to-thrust ratio, the BWB is the ideal configuration for many Unmanned Aerial Vehicle (UAV) missions in geomatics applications. In this work, a BWB aerodynamic design for a typical light geomatics payload is presented. Aerodynamic non-dimensional coefficients are predicted using low Reynolds number computational techniques (a 3D panel method), and wing parameters such as aspect ratio, taper ratio, wing twist and sweep are optimized for high cruise performance and flight quality. The methodology of this work summarizes tailless aircraft wing design and applies it, with appropriate computational schemes, to a light UAV subjected to low Reynolds number flows; this leads to conclusions such as the higher performance and flight quality of thicker airfoils in the airframe body and the benefits of using aerodynamic twist rather than geometric twist alone.
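
For context, a standard textbook relation (not taken from the abstract) shows why the planform parameters chosen in such an optimization drive cruise efficiency through the induced drag term:

```latex
C_D = C_{D,0} + \frac{C_L^{2}}{\pi e\,AR},
\qquad
\left(\frac{L}{D}\right)_{\max} = \frac{1}{2}\sqrt{\frac{\pi e\,AR}{C_{D,0}}}
```

where C_D,0 is the zero-lift drag coefficient, e the Oswald span-efficiency factor and AR the wing aspect ratio; a higher aspect ratio (and span efficiency) lowers induced drag and raises the maximum lift-to-drag ratio that governs range.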

Keywords: blended wing body, low Reynolds number, panel method, UAV

Procedia PDF Downloads 586
1828 Computational Aerodynamic Shape Optimisation Using a Concept of Control Nodes and Modified Cuckoo Search

Authors: D. S. Naumann, B. J. Evans, O. Hassan

Abstract:

This paper outlines the development of an automated aerodynamic optimisation algorithm using a novel method of parameterising a computational mesh by employing user-defined control nodes. The shape boundary movement is coupled to the movement of the novel control nodes via a quasi-1D linear deformation. Additionally, a second-order smoothing step has been integrated to act on the boundary during the mesh movement based on the change in its second derivative. This allows for both linear and non-linear shape transformations depending on the preference of the user. The domain mesh movement is then coupled to the shape boundary movement via a Delaunay graph mapping. A Modified Cuckoo Search (MCS) algorithm is used for optimisation within the prescribed design space defined by the allowed range of control node displacement. A finite volume compressible Navier-Stokes solver is used for aerodynamic modelling to predict aerodynamic design fitness. The resulting coupled algorithm is applied to a range of test cases in two dimensions, including the design of a subsonic, transonic and supersonic intake, and the optimisation approach is compared with more conventional optimisation strategies. Ultimately, the algorithm is tested on a three-dimensional wing optimisation case.
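
To illustrate the optimiser family referred to above, here is a minimal sketch of a basic Cuckoo Search step with Mantegna-style Lévy flights; the Modified Cuckoo Search adds refinements (e.g. an adaptive step size and information exchange between top nests) that are not reproduced here, and the objective and bounds are placeholders standing in for the CFD fitness and the control-node displacement range.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step using Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, bounds, n_nests=15, n_iter=200, pa=0.25, alpha=0.01):
    lo, hi = bounds[:, 0], bounds[:, 1]
    nests = lo + (hi - lo) * np.random.rand(n_nests, len(lo))
    fitness = np.array([objective(x) for x in nests])
    for _ in range(n_iter):
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            # Global random walk via a Levy flight scaled by the distance to the best nest
            trial = np.clip(nests[i] + alpha * levy_step(len(lo)) * (nests[i] - best), lo, hi)
            f = objective(trial)
            if f < fitness[i]:
                nests[i], fitness[i] = trial, f
        # Abandon a fraction pa of the worst nests and rebuild them randomly
        n_abandon = int(pa * n_nests)
        worst = np.argsort(fitness)[-n_abandon:]
        nests[worst] = lo + (hi - lo) * np.random.rand(n_abandon, len(lo))
        fitness[worst] = [objective(x) for x in nests[worst]]
    return nests[np.argmin(fitness)], fitness.min()

# Toy usage with a placeholder objective (sphere function) in place of the CFD fitness
best_x, best_f = cuckoo_search(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 4))
```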

Keywords: mesh movement, aerodynamic shape optimization, cuckoo search, shape parameterisation

Procedia PDF Downloads 337
1827 Information Theoretic Approach for Beamforming in Wireless Communications

Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif

Abstract:

Beamforming is a signal processing technique extensively utilized in wireless communications and radar for desired-signal intensification and interference minimization through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array, to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, Mutual Information (MI) extrema are evaluated through an energy-constrained objective function, which is based on a priori information about the interference source and the desired array factor. Signal to Interference plus Noise Ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time and spatial selectivity in an energy-constrained optimization problem. The employed method yields lower computational complexity, which is demonstrated through comparative analysis with conventional methods. MI-based beamforming offers enhancement of signal integrity in degraded environments while reducing computational intricacy and correlating key performance indicators.
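
For orientation only, here is a minimal sketch of a conventional weight-vector computation for a uniform linear array (an MVDR-style baseline, not the mutual-information scheme proposed in the abstract), showing how a steering vector, an interference-plus-noise covariance and the resulting SINR fit together; the geometry and powers are placeholders.

```python
import numpy as np

def steering_vector(theta_deg, n_elem, d_over_lambda=0.5):
    """Steering vector of a uniform linear array for a plane wave from theta."""
    n = np.arange(n_elem)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(np.radians(theta_deg)))

n_elem = 8
a_sig = steering_vector(10.0, n_elem)      # desired signal direction
a_int = steering_vector(-40.0, n_elem)     # interference direction
noise_power = 0.1

# Interference-plus-noise covariance and MVDR-style weights w = R^-1 a / (a^H R^-1 a)
R = np.outer(a_int, a_int.conj()) + noise_power * np.eye(n_elem)
R_inv_a = np.linalg.solve(R, a_sig)
w = R_inv_a / (a_sig.conj() @ R_inv_a)

# Output SINR for a unit-power desired signal
sinr = np.abs(w.conj() @ a_sig) ** 2 / np.real(w.conj() @ R @ w)
print(f"Output SINR: {10 * np.log10(sinr):.1f} dB")
```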

Keywords: beamforming, interference, mutual information, wireless communications

Procedia PDF Downloads 280
1826 Computational Fluid Dynamics Simulation to Study the Effect of Ambient Temperature on the Ventilation in a Metro Tunnel

Authors: Yousef Almutairi, Yajue Wu

Abstract:

Various large-scale trends have characterized the current century thus far, including increasing shifts towards urbanization and greater movement. It is predicted that there will be 9.3 billion people on Earth in 2050 and that over two-thirds of this population will be city dwellers. Moreover, in larger cities worldwide, mass transportation systems, including underground systems, have grown to account for the majority of travel in those settings. Underground networks are, however, vulnerable to fires, endangering travellers' safety, with various examples of fire outbreaks in this setting. This study aims to increase knowledge of the impacts of extreme climatic conditions on fires, including the role of the high ambient temperatures experienced in Middle Eastern countries and specifically in Saudi Arabia. This is an element that is not always included when assessments of fire safety are made (considering visibility, temperatures, and flows of smoke). This paper focuses on a tunnel within Riyadh's underground system as a case study and includes simulations based on computational fluid dynamics using ANSYS Fluent, which investigate the impact of various ventilation systems while identifying smoke density, speed, pressure and temperature within the tunnel.

Keywords: fire, subway tunnel, CFD, mechanical ventilation, smoke, temperature, harsh weather

Procedia PDF Downloads 132
1825 Computational Team Dynamics in Student New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

Teamwork is an extremely effective pedagogical tool in engineering education. New Product Development (NPD) has been an effective strategy for companies to streamline and bring innovative products and solutions to customers. Thus, the engineering curricula of many schools, some in collaboration with business schools, have brought NPD into the curriculum at the graduate level. Teamwork is invariably used during instruction, where students work in teams to come up with new products and solutions. A significant portion of the grade is placed on the semester-long teamwork so that it is taken seriously by students. As the students work in teams and go through this process to develop new product prototypes, their effectiveness and learning depend to a great extent on how they function as a team, go through the creative process, come together, and work towards the common goal. A core attribute of a successful NPD team is its creativity and innovation. The team needs to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the users' needs. They also need to be very efficient in their teamwork as they work through the various stages of development of these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. The simultaneous requirement for teams to be creative and at the same time converge and work together imposes different types of tension on their interactions. These ideational and sometimes relational tensions and conflicts are inevitable. Effective teams have to deal with these team dynamics and manage them so as to be resilient and yet creative. This research paper provides a computational analysis of the teams' communication, which is reflective of the team dynamics, and through a superimposition of latent semantic analysis with social network analysis, provides a computational methodology for arriving at visual interaction patterns. These team interaction patterns have clear correlations to the team dynamics and provide insights into the functioning, and thus the effectiveness, of the teams. 23 student NPD teams over 2 years of a course on Managing NPD, with a blend of engineering and business school students, are considered, and the results are presented. The analysis is also correlated with the teams' detailed and tailored individual and group feedback and self-reflection and evaluation questionnaires.
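
A minimal sketch of the kind of pipeline described above, superimposing latent semantic analysis on a social-network layer; the messages are invented placeholders, not the study's corpus, and the similarity threshold is arbitrary.

```python
import networkx as nx
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical team messages, one string per team member (placeholder data)
messages = [
    "brainstormed new sensor housing concepts and sketched three ideas",
    "pushed back on the housing concept, worried about manufacturing cost",
    "built the proof of concept prototype and tested the sensor mount",
    "scheduled user interviews and summarized the customer needs",
]

# Latent semantic analysis: TF-IDF followed by truncated SVD
tfidf = TfidfVectorizer(stop_words="english").fit_transform(messages)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Social-network layer: connect members whose messages are semantically similar
sim = cosine_similarity(lsa)
G = nx.Graph()
G.add_nodes_from(range(len(messages)))
for i in range(len(messages)):
    for j in range(i + 1, len(messages)):
        if sim[i, j] > 0.3:          # arbitrary similarity threshold
            G.add_edge(i, j, weight=sim[i, j])

print(nx.degree_centrality(G))       # simple interaction-pattern indicator
```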

Keywords: team dynamics, social network analysis, team interaction patterns, new product development teamwork, NPD teams

Procedia PDF Downloads 116
1824 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes

Authors: Igor A. Krichtafovitch

Abstract:

The evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the 3 previously existing phyla. Contrary to common belief, natural selection or survival of the fittest cannot account for the dominant evolutionary vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of the evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but an accelerated growth of the computational complexity of living organisms. The following postulates may summarize the proposed hypothesis: biological evolution, as a natural origin and development of life, is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere. Random mutations trigger this software, as stipulated by Darwinian evolution theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of more intellectually advanced organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerating evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume comes to its limit, and b) the biosphere's computational complexity reaches the critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems of the current state of evolutionary theory, such as speciation (as a result of GM purposeful design), the vector of evolutionary development (as a need for growing global intelligence), punctuated equilibrium (happening when the two conditions a) and b) above are met), the Cambrian explosion, and mass extinctions (happening when more intelligent species replace outdated creatures).

Keywords: supercomputer, biological evolution, Darwinism, speciation

Procedia PDF Downloads 164
1823 Simulation of Reflection Loss for Carbon and Nickel-Carbon Thin Films

Authors: M. Emami, R. Tarighi, R. Goodarzi

Abstract:

Maximal radar wave absorption cannot be achieved by shaping alone. We have to focus on the parameters of the absorbing materials, such as permittivity, permeability, and thickness, so that the best absorption for a given requirement can be achieved. The real and imaginary parts of the relative complex permittivity (εr' and εr") and permeability (µr' and µr") were obtained by simulation. The microwave absorbing properties of carbon and Ni(C) are simulated in this study with MATLAB software; the simulation covered the frequency range between 2 and 12 GHz for carbon black (C) and carbon-coated nickel (Ni(C)) with different thicknesses. In effect, we plot the reflection loss (RL) of C and Ni(C) versus frequency. We compared their absorption for a 3-mm thickness and predicted it for other thicknesses using electromagnetic wave transmission theory. The results showed that the reflection loss position shifts to lower frequency with increasing thickness. We found that, in all cases, using nanocomposites as the absorber does not give better results than pure nanoparticles. The frequency where absorption is maximum can determine the best choice between nanocomposites and pure nanoparticles. Also, we could find an optimal thickness for long-wavelength absorption in order to utilize these materials in protective shields and coatings.
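
A minimal sketch of the single-layer, metal-backed absorber model from transmission-line theory that such simulations typically implement (written here in Python rather than MATLAB; the permittivity and permeability values are placeholders, not the measured C or Ni(C) data):

```python
import numpy as np

c = 3e8  # speed of light, m/s

def reflection_loss(freq_hz, eps_r, mu_r, d_m):
    """Reflection loss (dB) of a single-layer absorber backed by a metal plate."""
    # Normalized input impedance of the metal-backed layer
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(
        1j * 2 * np.pi * freq_hz * d_m / c * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

freqs = np.linspace(2e9, 12e9, 500)
eps_r = 10 - 2j          # placeholder complex permittivity
mu_r = 1.2 - 0.3j        # placeholder complex permeability
rl = reflection_loss(freqs, eps_r, mu_r, d_m=3e-3)
print(f"Minimum RL: {rl.min():.1f} dB at {freqs[np.argmin(rl)] / 1e9:.2f} GHz")
```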

Keywords: absorbing, carbon, carbon nickel, frequency, thicknesses

Procedia PDF Downloads 186
1822 Generalized Mathematical Description and Simulation of Grid-Tied Thyristor Converters

Authors: V. S. Klimash, Ye Min Thu

Abstract:

Thyristor rectifiers, grid-tied inverters, and AC voltage regulators are widely used in industry and in electrified transport; they have a lot in common both in the power circuit and in the control system. They have a common mathematical structure and common switching processes. At the same time, rectifier and inverter units and thyristor AC voltage regulators are considered separately, both theoretically and practically. They are written about in different books as completely different devices. The aim of this work is to combine them into one class based on the unity of the equations describing their electromagnetic processes, and then to show this unity on a mathematical model and an experimental setup. Based on this research path from the mathematics to the product, a conclusion is made about a methodology for the rapid conduct of research and experimental design work, preparation for production, and serial production of converters with a unified structure. In recent years, there has been a transition from thyristor circuits to transistor circuits in modular design. Using the example of thyristor rectifiers and AC voltage regulators, we can conclude that there is a unity of mathematical structures among grid-tied thyristor converters.
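
As a simple illustration of the shared structure the abstract points to, the standard textbook relations below (not the generalized model itself) show that a controlled rectifier and an AC voltage regulator both reshape the same supply waveform through the thyristor firing angle α:

```python
import numpy as np

def bridge_rectifier_vdc(v_ll_rms, alpha_rad):
    """Average DC output of a three-phase fully controlled thyristor bridge."""
    return (3 * np.sqrt(2) / np.pi) * v_ll_rms * np.cos(alpha_rad)

def ac_regulator_vrms(v_rms, alpha_rad):
    """RMS output of a single-phase AC voltage regulator with resistive load."""
    return v_rms * np.sqrt((np.pi - alpha_rad + np.sin(2 * alpha_rad) / 2) / np.pi)

# Both outputs are governed by the same firing-angle control variable
for alpha_deg in (0, 30, 60, 90):
    a = np.radians(alpha_deg)
    print(alpha_deg, round(bridge_rectifier_vdc(400, a), 1), round(ac_regulator_vrms(230, a), 1))
```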

Keywords: direct current, alternating current, rectifier, AC voltage regulator, generalized mathematical model

Procedia PDF Downloads 250
1821 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition

Authors: M. Beusink, E. W. C. Coenen

Abstract:

The increased application of novel structural materials, such as high grade asphalt, concrete and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g. embedded fibers, voids and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g. cases involving first order and second order continua, thin shells and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVE), which model the relevant microstructural details in a confined volume. Imposed through kinematical constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine scale phenomena such as microstructural damage and its evolution. It has been reported by several authors that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to appropriately deal with this concern. For the specific case of a planar assumption for the analyzed structure, e.g. plane strain, axisymmetric or plane stress, this assumption needs to be addressed consistently across all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated. This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies which are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces are equal to zero. These strategies are assessed through a numerical study of a thin walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
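
For reference, the first-order homogenization averages underlying this discussion can be written as follows (a generic statement of the scheme, not the paper's specific formulation):

```latex
\bar{\boldsymbol{\varepsilon}} = \frac{1}{|V|}\int_{V}\boldsymbol{\varepsilon}\,\mathrm{d}V,
\qquad
\bar{\boldsymbol{\sigma}} = \frac{1}{|V|}\int_{V}\boldsymbol{\sigma}\,\mathrm{d}V,
\qquad
\text{plane stress:}\;\; \bar{\sigma}_{13}=\bar{\sigma}_{23}=\bar{\sigma}_{33}=0
```

The three strategies then differ in where the zero out-of-plane stress requirement is enforced: pointwise at the microscale, in a generalized sense at the RVE level, or only on the macroscopic average as written above.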

Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures

Procedia PDF Downloads 233
1820 Radial Fuel Injection Computational Fluid Dynamics Model for a Compression Ignition Two-Stroke Opposed Piston Engine

Authors: Tytus Tulwin, Rafal Sochaczewski, Ksenia Siadkowska

Abstract:

Designing a new engine requires a large number of different cases to be considered, especially different injector parameters and combustion chamber geometries. This is essential when developing an engine with an unconventional build: compression ignition, two-stroke operation with direct side injection. Computational Fluid Dynamics modelling allows those different conditions to be tested and the best conditions with correct combustion to be sought. This research presents the combustion results for different injector and combustion chamber cases. The shape of the combustion chamber is different from that of conventional engines, as it requires side injection. This completely changes the optimal shape for the given conditions compared to the standard automotive heart-shaped combustion chamber. Because the injection is not symmetrical, there is a strong influence of cylinder swirl and piston motion on the injected fuel stream. The results present the fuel injection phenomena, allowing the right injection parameters to be predicted for maximum combustion efficiency and minimum piston heat loads. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: CFD, combustion, injection, opposed piston

Procedia PDF Downloads 273
1819 Modeling the Human Harbor: An Equity Project in New York City, New York USA

Authors: Lauren B. Birney

Abstract:

The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the “Human Harbor” and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students’ computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership’s successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation’s largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out of School Time (OST) programs. The BOP-CCERS partnership is poised to develop a more refined focus on integrating computer science across the STEM domains; teaching industry-aligned computational methods and tools; and explicitly preparing students from the city’s most under-resourced and underrepresented communities for upwardly mobile careers in NYC’s ever-expanding “digital economy,” in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project objectives include the following: 1. Computational Thinking (CT) Integration: integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and Science, Technology, Engineering, and Mathematics (STEM) industry professionals; collaborative analysis and data collection were also performed. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner. New York City Department of Education (NYCDOE) Computer Science for All (CS4ALL) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development and working in tandem with the partnership. The BOP-CCERS Model™ also validates that, with appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation’s largest urban public school system. This is evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts.
In all settings of the proposed Phase IV initiative, the primary beneficiary group will be underrepresented NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. In particular, BOP-CCERS Phase IV will explicitly prepare underrepresented students for skilled positions within New York City’s expanding digital economy, computer science, computational information systems, and innovative technology sectors.

Keywords: computer science, data science, equity, diversity and inclusion, STEM education

Procedia PDF Downloads 58
1818 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with this congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD has been used to enable beamforming, which is a major part of massive MIMO, and to obtain the best improvement in transmitting and receiving pilot sequences. All these benefits are only possible if the channel state information, i.e. the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE, also proposed in many research works. We have optimized these methods using a genetic algorithm to minimize the mean squared error and to find the best channel matrix from the existing algorithms with less computational complexity. Our simulation results have shown that the GA works well on the existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides an optimal result within a few iterations in terms of MSE with respect to SNR and computational complexity.
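
A minimal sketch (placeholder dimensions, no genetic algorithm) of the pilot-based LS estimate and a per-tap LMMSE shrinkage whose mean squared error such an optimization seeks to reduce; unit channel power and uncorrelated taps are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots, snr_db = 64, 10
snr = 10 ** (snr_db / 10)

# Rayleigh channel taps, unit-modulus pilots and AWGN at the chosen SNR
h = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2)
x = np.exp(1j * 2 * np.pi * rng.random(n_pilots))
noise = (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots)) / np.sqrt(2 * snr)
y = h * x + noise

h_ls = y / x                              # least-squares estimate
h_lmmse = (snr / (snr + 1)) * h_ls        # per-tap LMMSE shrinkage (unit channel power)

for name, est in (("LS", h_ls), ("LMMSE", h_lmmse)):
    print(name, "MSE:", np.mean(np.abs(est - h) ** 2))
```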

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 191
1817 Multiscale Process Modeling of Ceramic Matrix Composites

Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya

Abstract:

Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gasses, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models to link molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed with this research can play a key role in the development of future CMCs for high-temperature and high-strength applications. By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.

Keywords: digital engineering, finite elements, manufacturing, molecular dynamics

Procedia PDF Downloads 98
1816 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation

Authors: Diandra Cristache

Abstract:

The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purpose of research. This paper explores the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on a close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. More than just providing realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions, in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to respond to specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps). The computational process and the digital outputs are thoroughly illustrated and discussed. Eventually, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, the other way round, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.

Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues

Procedia PDF Downloads 140
1815 Computational Fluid Dynamics Simulation Study of Flow near Moving Wall of Various Surface Types Using Moving Mesh Method

Authors: Khizir Mohd Ismail, Yu Jun Lim, Tshun Howe Yong

Abstract:

The study of flow behavior in an enclosed volume using Computational Fluid Dynamics (CFD) has been around for decades. However, due to limited knowledge of adaptive grid methods, flow in an enclosed volume near a moving wall is less explored with CFD. A CFD simulation of flow in an enclosed volume near a moving wall was demonstrated and studied by introducing a moving mesh method and was modeled with an Unsteady Reynolds-Averaged Navier-Stokes (URANS) approach. A static enclosed volume with a controlled opening size in the bottom was positioned against a moving, translational wall with sliding mesh features. Controlled variables such as the wall characteristics (smooth, creviced, and corrugated), the distance between the enclosed volume and the wall, and the speed of the moving wall relative to the enclosed chamber were varied to understand how the flow behaves and reacts between these two geometries. These model simulations were validated against experimental results, and confidence in the results was established when the simulation showed good agreement with the experimental data. This study provided better insight into the flow behavior in an enclosed volume when various wall types in motion were introduced at various distances from it, and it creates a potential opportunity for applications that involve adaptive grid methods in CFD.

Keywords: moving wall, adaptive grid methods, CFD, moving mesh method

Procedia PDF Downloads 147
1814 Reflectance Imaging Spectroscopy Data (Hyperspectral) for Mineral Mapping in the Orientale Basin Region on the Moon Surface

Authors: V. Sivakumar, R. Neelakantan

Abstract:

Mineral mapping on the Moon's surface provides clues to understanding the origin, evolution, stratigraphy and geological history of the Moon. Recently, reflectance imaging spectroscopy has played a significant role in identifying minerals on planetary surfaces in the visible to NIR region of the electromagnetic spectrum. The Moon Mineralogy Mapper (M3) onboard Chandrayaan-1 provides unprecedented spectral data of the lunar surface for studying the Moon. Here we used the M3 sensor data (hyperspectral imaging spectroscopy) for analysing the mineralogy of the Orientale basin region on the Moon's surface. Reflectance spectra were sampled from different locations of the basin, and the continuum was removed using the ENvironment for Visualizing Images (ENVI) software. Reflectance spectra of unknown mineral composition were compared with known Reflectance Experiment Laboratory (RELAB) spectra to discriminate mineralogy. Minerals like olivine, Low-Ca Pyroxene (LCP), High-Ca Pyroxene (HCP) and plagioclase were identified. In addition to these minerals, an unusual type of spectral signature was identified, which indicates a probable Fe-Mg-spinel lithology in the basin region.
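
A minimal sketch of the convex-hull continuum removal step mentioned above, i.e. the kind of operation performed in ENVI before comparing band depths; the reflectance values below are placeholders, not M3 data:

```python
import numpy as np

def continuum_removed(wavelength, reflectance):
    """Divide a reflectance spectrum by its upper convex hull (continuum)."""
    pts = list(zip(wavelength, reflectance))
    hull = []
    for p in pts:  # build the upper hull with a monotone-chain sweep
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()      # last hull point lies below the chord, drop it
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelength, hx, hy)
    return reflectance / continuum

# Placeholder spectrum with an absorption feature near 1000 nm
wl = np.linspace(700, 1500, 200)
refl = 0.3 + 0.0001 * (wl - 700) - 0.08 * np.exp(-((wl - 1000) / 60) ** 2)
cr = continuum_removed(wl, refl)   # values <= 1; band depth = 1 - cr.min()
```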

Keywords: chandrayaan-1, moon mineralogy mapper, mineral, mare orientale, moon

Procedia PDF Downloads 396
1813 The Effects of Different Parameters of Wood Floating Debris on Scour Rate Around Bridge Piers

Authors: Muhanad Al-Jubouri

Abstract:

Local scour is the most important of the several types of scour impacting bridge performance and safety. Even though scour is widespread at bridges, especially during flood seasons, experimental tests cannot be applied to many standard highway bridges. A computational fluid dynamics numerical model was used to solve the problem of calculating local scouring and deposition for non-cohesive silt and clear-water conditions near single and double cylindrical piers with the effect of floating debris. FLOW-3D software is employed with the RNG turbulence model, and the Nilsson bed-load transfer equation and a fine mesh size are considered. The numerical findings for single cylindrical piers correspond fairly well with the physical model's results. Furthermore, a parameter-effectiveness study investigates the range of outcomes based on user inputs such as the bed-load equation, mesh cell size, and turbulence model, and the final numerical predictions are compared to experimental data. When the findings are compared, the error for the deepest point of the scour is equivalent to 3.8% for the single-pier example.

Keywords: local scouring, non-cohesive, clear water, computational fluid dynamics, turbulence model, bed-load equation, debris

Procedia PDF Downloads 69
1812 Numerical and Experimental Analysis of Temperature Distribution and Electric Field in a Natural Rubber Glove during Microwave Heating

Authors: U. Narumitbowonkul, P. Keangin, P. Rattanadecho

Abstract:

Both numerical and experimental investigations of the temperature distribution and electric field in a natural rubber glove (NRG) during microwave heating are presented. A three-dimensional model of the NRG and the microwave oven is considered in this work. The influences of position, heating time and rotation angle of the NRG on the temperature distribution and electric field are presented in detail. The coupled equations of electromagnetic wave propagation and heat transfer are solved using the finite element method (FEM). The numerical model is validated against an experimental study at a frequency of 2.45 GHz. The results show that the numerical results closely match the experimental results. Furthermore, it is found that the temperature and electric field increase with increasing heating time. The hot spot zone appears in the NRG at the tip of the middle finger, while the maximum temperature occurs for a rotation angle of the NRG of 60 degrees. This investigation provides the essential aspects for a fundamental understanding of heat transport in NRG processing using microwave energy in industry.
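
The coupled equations referred to above take the standard form for microwave heating of a lossy dielectric (a generic statement of the governing equations, not the paper's exact boundary-value problem):

```latex
\rho c_{p}\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + Q,
\qquad
Q = \tfrac{1}{2}\,\omega\,\varepsilon_{0}\,\varepsilon_{r}''\,\lvert\mathbf{E}\rvert^{2}
```

where ω is the angular frequency, ε₀ the vacuum permittivity, εr″ the dielectric loss factor of the rubber, and E the electric field obtained from the electromagnetic wave equation at 2.45 GHz; the volumetric heat source Q couples the two problems.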

Keywords: electric field, finite element method, microwave energy, natural rubber glove

Procedia PDF Downloads 263
1811 Computational System for the Monitoring Ecosystem of the Endangered White Fish (Chirostoma estor estor) in the Patzcuaro Lake, Mexico

Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez

Abstract:

White fish (Chirostoma estor estor) is an endemic species that inhabits Lake Patzcuaro, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. It has undergone an immense depopulation due to overfishing, contamination and eutrophication of the lake water, which may result in the extinction of this important species. This work proposes a new computational model for monitoring and assessment of the critical environmental parameters of the white fish ecosystem. Using an Analytic Hierarchy Process, a mathematical model is built that assigns weights to each environmental parameter depending on its importance to water quality in the ecosystem. Then, an advanced system for the monitoring, analysis and control of water quality is developed in the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition of the water quality in the Chirostoma estor ecosystem (excellent, good, regular or poor), allowing effective decision making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth rates of this species, with the water quality tending to regular levels. This system emerges as a suitable tool for water management, where future laws for white fish fishery regulation should result in a reduction of the mortality rate in the early stages of development of the species, which represent the most critical phase. This can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and of its inhabitants, since white fish is an important food and source of income for the region while the species is endangered.
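
A minimal sketch of the AHP weighting and scoring idea described above; the pairwise judgments and parameter scores are invented placeholders, not the values used in the reported system:

```python
import numpy as np

# Pairwise comparison of temperature, pH, dissolved oxygen (placeholder judgments)
A = np.array([[1.0, 2.0, 1/3],
              [1/2, 1.0, 1/4],
              [3.0, 4.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # AHP weights from the principal eigenvector

ci = (eigvals.real[k] - len(A)) / (len(A) - 1) # consistency index
cr = ci / 0.58                                 # random index for n = 3 matrices
print("weights:", weights.round(3), "consistency ratio:", round(cr, 3))

# Global water-quality score: weighted sum of normalized parameter scores (0-1)
scores = np.array([0.8, 0.9, 0.6])             # placeholder in-situ scores
print("global score:", float(weights @ scores))
```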

Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish

Procedia PDF Downloads 325
1810 High Thrust Upper Stage Solar Hydrogen Rocket Design

Authors: Maged Assem Soliman Mossallam

Abstract:

The conversion of a solar thruster model to an upper-stage hydrogen rocket is considered. The usual categorization of solar thrusters limits their capabilities to low- and moderate-thrust systems with high specific impulse. The current study proposes a different concept for such systems by increasing the thrust, which enables their use as upper-stage rockets and for future launch purposes. A computational model for the thruster subsystems is discussed. The first module relies on a ray tracing technique to determine the solar power intercepted by the hydrogen combustion chamber. The cavity receiver is modeled using a finite volume technique. The final module imports the heated hydrogen properties into the nozzle using a quasi-one-dimensional simulation. The probability of shock wave formation inside the nozzle is almost eliminated, as the outlet pressure in the space environment tends to zero. The computational model relates the high-thrust hydrogen rocket conversion to the design parameters and operating conditions of the thruster. Three different designs for solar thruster systems are discussed. The first design is a low-thrust, high-specific-impulse design that produces about 10 Newtons of thrust. The second one's output thrust is about 250 Newtons, and the third design produces about 1000 Newtons.
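
As a rough illustration of the quasi-one-dimensional nozzle step, the sketch below applies ideal-gas isentropic relations with placeholder chamber conditions (not the study's actual thruster data):

```python
import numpy as np

g0, Ru = 9.80665, 8.314          # m/s^2, J/(mol K)
gamma, M_h2 = 1.4, 2.016e-3      # placeholder ratio of specific heats, molar mass of H2
R = Ru / M_h2                    # specific gas constant, J/(kg K)

T_c, p_c = 2500.0, 5e5           # chamber temperature (K) and pressure (Pa), placeholders
p_e, p_a = 1e3, 0.0              # exit pressure and ambient (vacuum) pressure, Pa
mdot, A_e = 0.05, 0.02           # propellant mass flow (kg/s) and exit area (m^2), placeholders

# Isentropic exit velocity for expansion from p_c to p_e
v_e = np.sqrt(2 * gamma / (gamma - 1) * R * T_c * (1 - (p_e / p_c) ** ((gamma - 1) / gamma)))

thrust = mdot * v_e + (p_e - p_a) * A_e      # momentum plus pressure thrust
isp = thrust / (mdot * g0)                   # specific impulse, s
print(f"v_e = {v_e:.0f} m/s, thrust = {thrust:.0f} N, Isp = {isp:.0f} s")
```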

Keywords: space propulsion, hydrogen rocket, thrust, specific impulse

Procedia PDF Downloads 166
1809 Evaluation and Compression of Different Language Transformer Models for Semantic Textual Similarity Binary Task Using Minority Language Resources

Authors: Ma. Gracia Corazon Cayanan, Kai Yuen Cheong, Li Sha

Abstract:

Training a language model for a minority language has been a challenging task. The lack of available corpora to train and fine-tune state-of-the-art language models is still a challenge in the area of Natural Language Processing (NLP). Moreover, the need for high computational resources and bulk data limits the attainment of this task. In this paper, we present the following contributions: (1) we introduce and use a translation pair set of Tagalog and English (TL-EN) in pre-training a language model for a minority language resource; (2) we fine-tune and evaluate top-ranking pre-trained semantic textual similarity binary task (STSB) models on both TL-EN and STS dataset pairs; (3) we then reduce the size of the model to offset the need for high computational resources. Based on our results, the models that were pre-trained on translation pairs and STS pairs can perform well on the STSB task. Also, reducing the model to a smaller dimension has no negative effect on the performance but rather yields a notable increase in the similarity scores. Moreover, models that were pre-trained on a similar dataset have a tremendous effect on the model's performance scores.

Keywords: semantic matching, semantic textual similarity binary task, low resource minority language, fine-tuning, dimension reduction, transformer models

Procedia PDF Downloads 211
1808 Incidence and Prevalence of Dry Eye Syndrome in Different Occupational Sector of Society

Authors: Vergeena Varghese, G. Gajalakshmi, Jayarajini Vasanth

Abstract:

The present study deals with the prevalence of dry eye and evaluates the environmental risk factors attributed to dry eye in different occupational sectors. 240 subjects above 20 years and below 45 years of age were screened for dry eye. A McMonnies dry eye questionnaire-based history and Schirmer's test were used to diagnose dry eye. For Schirmer's test, Whatman strips were used, with paracaine drops as an anesthetic. Subject demographics included age, sex, smoking, alcoholism, occupational history and working environment. Out of a total of 240 subjects, 52 subjects were positive for dry eye syndrome (21.7%). The highest prevalence of dry eye syndrome was in the software sector, with 14 of its 40 subjects affected (26.9% of all dry eye cases). In the construction sector, 12 of 40 subjects were affected (23.1% of cases), and in the agriculture sector 9 of 40 subjects (17.3% of cases). In the transport sector 7 of 40 subjects had dry eye (13.5% of cases), and in the industrial sector 6 of 40 subjects (11.5% of cases). In the normal sector, taken as the control group, 4 of 40 subjects had dry eye (7.7% of cases). We also found that the prevalence of dry eye in the left eye (OS) was higher than in the right eye (OD). Dry eye is a very common ocular condition, and the software sector showed the highest prevalence with 14 affected subjects, more than any other sector. There was a significant correlation between environmental and occupational factors and dry eye. Excessive exposure to sunlight, wind, high temperature, air pollution and electromagnetic radiation are the factors that affect the tear film and ocular surface, causing dry eye syndrome.

Keywords: DES – dry eye syndrome, McMonnies dry eye questionnaire, Schirmer's test, Whatman strip

Procedia PDF Downloads 468
1807 Design and Validation of a Darrieus Type Hydrokinetic Turbine for South African Irrigation Canals Experimentally and Computationally

Authors: Maritz Lourens Van Rensburg, Chantel Niebuhr

Abstract:

Utilizing all available renewable energy sources is an ever-growing necessity; this includes a newfound interest in hydrokinetic energy systems, which open the door to installations where conventional hydropower shows no potential. Optimization and high efficiencies are key in these installations. In this study, a vertical axis Darrieus hydrokinetic turbine is designed and constructed to address certain drawbacks experienced by axial-flow horizontal axis turbines in an irrigation channel. Many horizontal axis turbines have been well developed and optimized to have high efficiencies, but depending on the conditions experienced in an open channel, the performance of these turbines may be adversely affected. The study analyses how the designed vertical axis turbine addresses the problems experienced by a horizontal axis turbine while still achieving a satisfactory efficiency. To be able to optimize the vertical axis turbine, a computational fluid dynamics model was validated against the experimental results obtained from the power generated by a test turbine installation operating at various rotational speeds. It was found that an accurate, validated model can be obtained through validation against the generated power output.
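
A minimal sketch of the non-dimensional quantities usually used to compare such a turbine's measured power with the CFD prediction at each rotational speed; the operating point values are placeholders, not the study's measurements:

```python
import numpy as np

rho = 998.0                      # water density, kg/m^3

def power_coefficient(power_w, v_flow, swept_area):
    """Cp = P / (0.5 * rho * A * V^3): fraction of the kinetic flux converted."""
    return power_w / (0.5 * rho * swept_area * v_flow ** 3)

def tip_speed_ratio(omega_rad_s, radius_m, v_flow):
    """Lambda = omega * R / V: blade tip speed over free-stream velocity."""
    return omega_rad_s * radius_m / v_flow

# Placeholder operating point in an irrigation canal
v, area, radius = 1.2, 0.5, 0.4          # m/s, m^2, m
omega = 2 * np.pi * 60 / 60              # 60 rpm in rad/s
print("Cp =", round(power_coefficient(120.0, v, area), 3),
      "TSR =", round(tip_speed_ratio(omega, radius, v), 2))
```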

Keywords: hydrokinetic, Darrieus, computational fluid dynamics, vertical axis turbine

Procedia PDF Downloads 116
1806 A Novel Dual Band-pass filter Based On Coupling of Composite Right/Left Hand CPW and (CSRRs) Uses Ferrite Components

Authors: Mohammed Berka, Khaled Merit

Abstract:

Recent works on microwave filters show that the constituent materials of such filters are very important in their design and realization. Several solutions have been proposed to improve filtering quality. In this paper, we propose a new dual band-pass filter based on the coupling of a composite right/left-handed (CRLH) coplanar waveguide with complementary split ring resonators (CSRRs). The (CRLH) CPW is composed of two resonators, each of which has an interdigital capacitor (CID) and two short-circuited stubs parallel to the top ground plane. On the lower ground plane, we use defected ground structure (DGS) technology to engrave two CSRRs of different shapes and dimensions. Between the top ground plane and the substrate, we place a ferrite layer to control the electromagnetic coupling between the (CRLH) CPW and the CSRRs. The overall filter, which has coplanar access, exhibits dual band-pass behavior around the magnetic resonances of the CSRRs. Since there is no scientific or experimental result in the literature for this kind of complicated structure, it was necessary to perform simulations using the Ansoft HFSS designer.

Keywords: complementary split ring resonators, coplanar waveguide, ferrite, filter, stub

Procedia PDF Downloads 403
1805 Conventional and Computational Investigation of the Synthesized Organotin(IV) Complexes Derived from o-Vanillin and 3-Nitro-o-Phenylenediamine

Authors: Harminder Kaur, Manpreet Kaur, Akanksha Kapila, Reenu

Abstract:

A Schiff base with the general formula H₂L was derived from the condensation of o-vanillin and 3-nitro-o-phenylenediamine. This Schiff base was used for the synthesis of organotin(IV) complexes with the general formula R₂SnL [R = phenyl or n-octyl] using equimolar quantities. Elemental analysis, UV-Vis, FTIR, and multinuclear NMR spectroscopic techniques (¹H, ¹³C, and ¹¹⁹Sn) were carried out for the characterization of the synthesized complexes. These complexes were coloured and soluble in polar solvents. Computational studies have been performed to obtain the details of the geometry and electronic structures of the ligand as well as the complexes. The geometries of the ligand and complexes have been optimized at the Density Functional Theory level with B3LYP/6-311G(d,p) and B3LYP/MPW1PW91, respectively, followed by vibrational frequency analysis using Gaussian 09. The observed ¹¹⁹Sn NMR chemical shifts of one of the synthesized complexes indicated a tetrahedral geometry around the tin atom, which is also confirmed by DFT. The HOMO-LUMO energy distribution was calculated. FTIR, ¹H NMR and ¹³C NMR spectra were also obtained theoretically using DFT. Further, IRC calculations were employed to determine the transition state for the reaction and to obtain theoretical information about the reaction pathway. Moreover, molecular docking studies can be explored to assess the anticancer activity of the newly synthesized organotin(IV) complexes.

Keywords: DFT, molecular docking, organotin(IV) complexes, o-vanillin, 3-nitro-o-phenylenediamine

Procedia PDF Downloads 159
1804 Off-Line Detection of "Pannon Wheat" Milling Fractions by Near-Infrared Spectroscopic Methods

Authors: E. Izsó, M. Bartalné-Berceli, Sz. Gergely, A. Salgó

Abstract:

The aims of this investigation are to elaborate near-infrared methods for testing and recognition of the chemical components and quality of “Pannon wheat” allied (i.e. true to variety or variety identified) milling fractions, as well as to develop spectroscopic methods for following the milling processes and to evaluate the stability of the milling technology across different types of milling products and sampling times, respectively. These wheat categories were produced under industrial conditions, where samples were collected as a function of sampling time and of maximum or minimum yields. The changes in the main chemical components (such as starch, protein, lipid) and the physical properties of the fractions (particle size) were analysed by dispersive spectrophotometers using the visible (VIS) and near-infrared (NIR) regions of the electromagnetic radiation. Close correlations were obtained between the data of the spectroscopic measurement techniques processed by various chemometric methods (e.g. principal component analysis (PCA) and cluster analysis (CA)) and the operating conditions of the milling technology. It is obvious that NIR methods are able to detect deviations in the yield parameters and differences between sampling times for a wide variety of fractions. NIR technology can be used for the sensitive monitoring of milling technology.
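
A minimal sketch of the chemometric step described above, with synthetic spectra standing in for the actual VIS/NIR measurements of the milling fractions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for NIR spectra: rows = milling fraction samples, columns = wavelengths
wavelengths = np.linspace(1000, 2500, 300)
base = np.exp(-((wavelengths - 1940) / 120) ** 2)        # a broad water/starch-like band
spectra = np.vstack([
    base * (1 + 0.05 * group) + 0.01 * rng.standard_normal(wavelengths.size)
    for group in (0, 0, 0, 1, 1, 1, 2, 2, 2)             # three fraction types, three samples each
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(spectra))
print(scores.round(2))   # fractions of the same type should cluster together in PC space
```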

Keywords: near infrared spectroscopy, wheat categories, milling process, monitoring

Procedia PDF Downloads 406
1803 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires many years of clinical testing, as well as substantial funding, for a single drug to earn FDA approval. For drugs that even make it this far in the process, there is a very slim chance of receiving FDA approval, resulting in detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages Artificial Intelligence to predict the impact of a drug’s lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with very high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results showed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing every pharmaceutical company with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating significant amounts of time and resources that currently restrict drug accessibility.
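
A minimal sketch of the kind of supervised regression and R² evaluation described above; the descriptors and targets are randomly generated placeholders, not the study's dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Placeholder molecular descriptors and a lipophilicity-like target (logP surrogate)
X = rng.standard_normal((500, 10))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] ** 2 + 0.2 * rng.standard_normal(500)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("validation R^2:", round(r2_score(y_val, model.predict(X_val)), 3))
```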

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 111