Search results for: computational electromagnetic
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2494


1504 The Effect of Artificial Intelligence on Electric Machines and Welding

Authors: Mina Malak Zakaria Henin

Abstract:

The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, consisting of alternating and rotating fields. The rotating fields are generated in configurations ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns reveal different magnetic losses in the samples under test. Therefore, electric machines require special attention during the core loss calculation process to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to alternating fields, circular fields, and elliptical fields with axis ratios of 0.2, 0.4, 0.6, and 0.8. The measured data were applied to a 6/4 switched reluctance motor at three frequencies of interest to industry: 60 Hz, 400 Hz, and 1 kHz. The results reveal the high margin of error that can arise during loss calculations if the flux pattern issue is overlooked. The error in different components of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine geometry, which has a major effect on the flux pattern, in order to decrease the magnetic losses in machine cores.
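To illustrate how loss data measured at discrete flux-locus axis ratios can be applied during a core loss calculation, a minimal sketch follows. All loss values and the linear interpolation scheme are illustrative assumptions, not the measured M36G29 data:

```python
import numpy as np

# Hypothetical measured core-loss densities (W/kg) for one electrical-steel
# sample at a fixed peak flux density and frequency. ratio = 0 is a purely
# alternating field; ratio = 1 is a circular rotating field. The numbers are
# placeholders, not the paper's measurements.
axis_ratios = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
loss_w_per_kg = np.array([1.10, 1.35, 1.62, 1.90, 2.15, 2.30])

def core_loss(ratio: float) -> float:
    """Interpolate the loss density for an elliptical flux locus with the
    given minor/major axis ratio."""
    return float(np.interp(ratio, axis_ratios, loss_w_per_kg))

# An element whose flux locus has an axis ratio of 0.5 falls between the
# measured 0.4 and 0.6 loci:
print(core_loss(0.5))  # → 1.76
```

In a finite element loss calculation, each element's locally computed axis ratio would index into such a table instead of assuming a purely alternating field everywhere.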

Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems), synchronous machine, vector control, multi-machine/multi-inverter, matrix inverter, railway traction, alternating core losses, finite element analysis, rotational core losses

Procedia PDF Downloads 28
1503 Design, Synthesis, and Catalytic Applications of Functionalized Metal Complexes and Nanomaterials for Selective Oxidation and Coupling Reactions

Authors: Roghaye Behroozi

Abstract:

The development of functionalized metal complexes and nanomaterials has gained significant attention due to their potential in catalyzing selective oxidation and coupling reactions. These catalysts play a crucial role in various industrial and pharmaceutical processes, enhancing the efficiency, selectivity, and sustainability of chemical reactions. This research aims to design and synthesize new functionalized metal complexes and nanomaterials to explore their catalytic applications in the selective oxidation of alcohols and in coupling reactions, focusing on improving yield, selectivity, and catalyst reusability. The study involves the synthesis of a nickel Schiff base complex stabilized within MCM-41 as a heterogeneous catalyst. A Schiff base ligand derived from glycine was used to create a tin (IV) metal complex characterized through spectroscopic techniques and computational analysis. Additionally, iron-based magnetic nanoparticles functionalized with melamine were synthesized for catalytic evaluation. Lastly, a palladium (IV) complex was prepared, and its oxidative stability was analyzed. The nickel Schiff base catalyst showed high selectivity in converting primary and secondary alcohols to aldehydes and ketones, with yields ranging from 73% to 90%. The tin (IV) complex demonstrated consistent structural and electronic properties, with agreement between experimental and computational data. The melamine-functionalized iron nanoparticles exhibited efficient catalytic activity in producing triazoles, with enhanced reaction speed and reusability. The palladium (IV) complex displayed remarkable stability and low reactivity towards C–C bond formation due to its symmetrical structure. The synthesized metal complexes and nanomaterials demonstrated significant potential as efficient, selective, and reusable catalysts for oxidation and coupling reactions.
These findings pave the way for developing environmentally friendly and cost-effective catalytic systems for industrial applications.

Keywords: catalysts, Schiff base complexes, metal-organic frameworks, oxidation reactions, nanoparticles, reusability

Procedia PDF Downloads 15
1502 Assessment of the Performance of the Sonoreactors Operated at Different Ultrasound Frequencies, to Remove Pollutants from Aqueous Media

Authors: Gabriela Rivadeneyra-Romero, Claudia del C. Gutierrez Torres, Sergio A. Martinez-Delgadillo, Victor X. Mendoza-Escamilla, Alejandro Alonzo-Garcia

Abstract:

Ultrasonic degradation is currently being used in sonochemical reactors to degrade pollutant compounds, such as emerging contaminants (e.g., pharmaceuticals, drugs, and personal care products), from aqueous media because of their possible ecological impacts on the environment. For this reason, it is important to develop appropriate water and wastewater treatments able to reduce pollution and increase reuse. Pollutants such as textile dyes, aromatic and phenolic compounds, chlorobenzene, bisphenol-A, carboxylic acids, and other organic pollutants can be removed from wastewaters by sonochemical oxidation. The removal of pollutants depends on the ultrasonic frequency used; however, few studies have addressed the behavior of the fluid inside sonoreactors operated at different ultrasonic frequencies. Based on the above, it is necessary to study the hydrodynamic behavior of the liquid generated by the ultrasonic irradiation in order to design efficient sonoreactors that reduce treatment times and costs. In this work, the hydrodynamic behavior of the fluid in sonochemical reactors was studied at different frequencies (250 kHz, 500 kHz, and 1000 kHz). The performances of the sonoreactors at those frequencies were simulated using computational fluid dynamics (CFD). Because there is a large sound speed gradient between the piezoelectric transducer and the fluid, k-ε models were used. The piezoelectric transducer was defined as a vibrating surface in order to evaluate the effect of the different frequencies on the fluid in the sonochemical reactor. Structured hexahedral cells were used to mesh the computational liquid domain, and fine triangular cells were used to mesh the piezoelectric transducers. Unsteady-state conditions were used in the solver. The dissipation rate, flow field velocities, Reynolds stresses, and turbulent quantities were evaluated by CFD and 2D-PIV measurements.
Test results show that an increase in ultrasonic frequency does not necessarily improve pollutant degradation; moreover, the reactor geometry and power density are important factors that should be considered in sonochemical reactor design.

Keywords: CFD, reactor, ultrasound, wastewater

Procedia PDF Downloads 190
1501 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time-consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT).
The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs, the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy in multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn the high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on four different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO levels.
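The Δ-ML strategy described above, high fidelity ≈ low fidelity + learned correction, can be sketched with a toy linear correction map standing in for the GCN. All functional forms and data here are synthetic assumptions, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the Delta-ML setup: x is a molecular descriptor,
# E_lo a cheap low-fidelity energy, E_hi the expensive high-fidelity target.
x = rng.uniform(-1, 1, size=(200, 3))
E_lo = x @ np.array([1.0, -0.5, 0.3])
E_hi = E_lo + 0.2 * x[:, 0] - 0.1 * x[:, 1] + 0.05  # true correction is small

# Only a handful of expensive high-fidelity labels are assumed available.
idx = rng.choice(200, size=20, replace=False)

# Learn the correction Delta = E_hi - E_lo by linear least squares on the
# labelled subset (a GCN would replace this map in the paper's setting).
A = np.hstack([x[idx], np.ones((20, 1))])
coef, *_ = np.linalg.lstsq(A, (E_hi - E_lo)[idx], rcond=None)

# Predict high fidelity everywhere as low fidelity + learned correction.
A_all = np.hstack([x, np.ones((200, 1))])
E_pred = E_lo + A_all @ coef
print(np.max(np.abs(E_pred - E_hi)))  # near zero: correction recovered
```

Because the correction is much simpler than the full property map, far fewer high-fidelity labels suffice, which is the economy the abstract exploits.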

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 41
1500 Terahertz Glucose Sensors Based on Photonic Crystal Pillar Array

Authors: S. S. Sree Sanker, K. N. Madhusoodanan

Abstract:

Optical biosensors are a dominant alternative to traditional analytical methods because of their small size, simple design, and high sensitivity. Photonic sensing is one of the recent advancing technologies for biosensors. It measures the change in refractive index induced by the difference in molecular interactions due to the change in concentration of the analyte. Glucose is an aldose monosaccharide, which is a metabolic energy source in many organisms. Terahertz waves occupy the region between infrared and microwaves in the electromagnetic spectrum. Terahertz waves are expected to be applied to various types of sensors for detecting harmful substances in blood, cancer cells in skin, and microbacteria in vegetables. We have designed glucose sensors using silicon-based 1D and 2D photonic crystal pillar arrays in the terahertz frequency range. The 1D photonic crystal has rectangular pillars with height 100 µm, length 1600 µm, and width 50 µm. The array period of the crystal is 500 µm. The 2D photonic crystal has a 5×5 cylindrical pillar array with an array period of 75 µm. The height and diameter of the pillars are 160 µm and 100 µm, respectively. The two samples considered in the work are blood and glucose solution, labelled as sample 1 and sample 2, respectively. The proposed sensor detects the concentration of glucose in the samples from 0 to 100 mg/dL. For this, the crystal was irradiated with 0.3 to 3 THz waves. By analyzing the obtained S-parameters, the refractive index of the crystal corresponding to a particular concentration of glucose was measured using the parameter retrieval method. The refractive indices of the two crystals decreased gradually with the increase in the concentration of glucose in the sample. For the 1D photonic crystal, a gradual decrease in refractive index was observed at 1 THz; the 2D photonic crystal showed this behavior at 2 THz. The proposed sensor was simulated using CST Microwave Studio.
This will enable us to develop a model which can be used to characterize a glucose sensor. The present study is expected to contribute to blood glucose monitoring.
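The parameter retrieval step, recovering a refractive index from simulated S-parameters, can be sketched for a plain homogeneous slab using the standard retrieval relation cos(n·k₀·d) = (1 − S₁₁² + S₂₁²)/(2·S₂₁). The slab thickness, frequency, and index below are illustrative, not the pillar-array geometry of the paper:

```python
import cmath
import math

def slab_s_params(n, k0, d):
    """Forward model: S11/S21 of a nonmagnetic slab (index n, thickness d)
    in air at free-space wavenumber k0, normal incidence."""
    z = 1.0 / n                     # relative wave impedance of the medium
    gamma = (z - 1) / (z + 1)       # interface reflection coefficient
    t = cmath.exp(1j * n * k0 * d)  # phase factor across the slab
    denom = 1 - gamma**2 * t**2
    s11 = gamma * (1 - t**2) / denom
    s21 = (1 - gamma**2) * t / denom
    return s11, s21

def retrieve_index(s11, s21, k0, d):
    """Standard S-parameter retrieval, principal branch: valid while
    n*k0*d < pi, which holds for a sufficiently thin sample."""
    x = (1 - s11**2 + s21**2) / (2 * s21)
    return cmath.acos(x) / (k0 * d)

# Illustrative example: 1 THz wave, 50 um slab, index 1.8.
f, c, d = 1e12, 3e8, 50e-6
k0 = 2 * math.pi * f / c
s11, s21 = slab_s_params(1.8, k0, d)
print(retrieve_index(s11, s21, k0, d))  # real part ≈ 1.8
```

In the paper's setting, the S-parameters come from the full-wave CST simulation of the pillar array rather than from this analytic slab model.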

Keywords: CST microwave studio, glucose sensor, photonic crystal, terahertz waves

Procedia PDF Downloads 281
1499 Multi-Functional Metal Oxides as Gas Sensors, Photo-Catalysts and Bactericides

Authors: Koyar Rane

Abstract:

Nano- to submicron-size particles with a narrow particle size distribution of semiconducting TiO₂, ZnO, NiO, CuO, and Fe₂O₃ have been synthesized by a novel hydrazine method and tested for their gas sensing, photocatalytic, and bactericidal activities; the behavior was found to be enhanced when the oxides were in thin film form, obtained in a specially built spray pyrolysis reactor. The hydrazine method is novel in the sense that, for example, the UV absorption edge of the white pigment-grade, wide band gap (~3.2 eV) TiO₂ and ZnO shifted to the visible region, turning the particles yellowish and indicating a modification in the band structure. The absorption in the visible region makes these oxides visible-light-sensitive photocatalysts for degrading pollutants, especially organic dyes, which otherwise increase the chemical oxygen demand of drinking water, making the process feasible without the harsh, energetic UV radiation regime. Irradiation by electromagnetic radiation produces electron-hole pairs:

Semiconductor + hν → e⁻ + h⁺

The electron-hole pairs thus produced form reactive oxygen species (ROS) on the surface of the semiconductors:

O₂ (adsorbed) + e⁻ → O₂•⁻ (superoxide ion)
OH⁻ (surface) + h⁺ → •OH (hydroxyl radical)

The ROS attack organic material and micro-organisms. Our antibacterial studies indicate that the metal oxides control the biological oxygen demand (BOD) of drinking water which had been beyond the safe level normally found in the municipal supply. Metal oxides in thin film form show enhanced properties overall, and the films are reusable. The results of the photodegradation and antibactericidal studies are discussed. Gas sensing studies have also been done to establish the versatility of these multifunctional metal oxides.

Keywords: hydrazine method, visible light sensitive, photo-degradation of dyes, water/airborne pollutant

Procedia PDF Downloads 163
1498 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. The morning-glory spillway is one of the common spillways for discharging the overflow water from behind dams; these spillways are constructed in dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with the air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave the results most consistent with the experiments. As the jet gets closer to the end of the basin, the differences between the computational and experimental results increase. The lower profile of the water jet is less sensitive to the hydraulic conditions than the upper profile. The pressure tests also revealed that the numerical pressure values differ greatly from the experimental results at lower drop numbers. The characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values.
The preferred downstream channel length was 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 77
1497 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally must be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout a deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer, based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviations of the random filters. This allows the use of a large number of random filters at the cost of one scalar unknown per filter.
The computational cost in the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments, in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks within the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
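The central idea, fixed random convolution kernels of several sizes with only one trainable scalar per filter, can be sketched with a plain NumPy forward pass. The filter sizes, the gain initialization from each filter's standard deviation, and the toy input are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed (non-trainable) random kernels of several sizes; only one scalar gain
# per kernel would be learned. Sizes are illustrative.
kernel_sizes = [3, 5, 7]
kernels = [rng.standard_normal((k, k)) for k in kernel_sizes]
# Trainable per-filter scalars, initialized here from each filter's standard
# deviation (one reading of the abstract's association of responses with stds).
gains = np.array([np.std(k) for k in kernels])

def conv2d_valid(img, ker):
    """Plain 'valid' 2D correlation; enough for a forward-pass sketch."""
    kh, kw = ker.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

img = rng.standard_normal((16, 16))
responses = [g * conv2d_valid(img, k) for g, k in zip(gains, kernels)]

# Trainable parameters: one scalar per random filter, regardless of its size.
print(len(gains), [r.shape for r in responses])
```

Since only the scalar gains are updated during training, the back-propagation cost stays independent of the kernel sizes, as the abstract argues.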

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 290
1496 Low-Cost Wireless Power Transfer System for Smart Recycling Containers

Authors: Juan Luis Leal, Rafael Maestre, Ovidio López

Abstract:

As innovation progresses, more possibilities become available to increase the efficiency and reach of solutions for Smart Cities, most of which require the data provided by Internet of Things (IoT) devices and may even have higher power requirements, such as motors or actuators. A reliable power supply with the lowest maintenance is a requirement for the long-term success of these solutions. Energy harvesting, mainly solar, becomes the solution of choice in most cases, but only if there is enough power to be harvested, which may depend on the device location (e.g., outdoors vs. indoors). This is the case for smart waste containers with compaction systems, which have moderately high power requirements and may be installed in places with little sunlight for solar generation. It should be noted that waste is unloaded from the containers with cranes, so sudden and irregular movements may happen, making wired power unviable. In these cases, a wireless power supply may be a great alternative. This paper proposes a cost-effective two-coil resonant wireless power transfer (WPT) system and describes its implementation, which has been carried out within an R&D project and validated in real settings with smart containers. Experimental results prove that the developed system achieves wireless power transmission of up to 35 W in the range of 5 cm to 1 m with a peak efficiency of 78%. The circuit is operated at relatively low resonant frequencies, which, combined with enough wire-to-wire separation between the coil windings, reduces the losses caused by the proximity effect and therefore allows the use of common stranded wire instead of Litz wire without significantly reducing the efficiency. All these design considerations led to a final system that achieves high efficiency over the desired charging range, simplifying the energy supply for smart containers as well as other devices that may benefit from a cost-effective wireless charging system.
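For readers unfamiliar with two-coil resonant links, the quantities governing such a design can be sketched with the textbook figure-of-merit efficiency bound U = k·√(Q₁Q₂). The coil and coupling values below are illustrative placeholders, not the paper's measured parameters:

```python
import math

def resonant_frequency(l_h: float, c_f: float) -> float:
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of each coil tank."""
    return 1 / (2 * math.pi * math.sqrt(l_h * c_f))

def resonant_link_efficiency(k: float, q1: float, q2: float) -> float:
    """Maximum achievable efficiency of a two-coil resonant WPT link from
    the standard figure of merit U = k * sqrt(Q1 * Q2)."""
    u = k * math.sqrt(q1 * q2)
    return u**2 / (1 + math.sqrt(1 + u**2))**2

# Illustrative values only: a 24 uH / 470 nF tank and mid-range coupling.
print(resonant_frequency(24e-6, 470e-9))       # tank tuned near ~47 kHz
print(resonant_link_efficiency(0.15, 90, 90))  # efficiency at this coupling
```

The bound makes the design trade-off visible: as the coils separate and k drops, the achievable efficiency falls, which is why coil quality factors matter so much at the 1 m end of the charging range.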

Keywords: electromagnetic coupling, resonant wireless charging, smart recycling containers, wireless power transfer

Procedia PDF Downloads 93
1495 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea of using computing services as a commodity that can be delivered like other utilities (e.g., electricity and telephone) was floated, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. But security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers/intruders still manage to bypass the security of the cloud. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in smart grid cloud.

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 318
1494 Numerical Solutions of Generalized Burger-Fisher Equation by Modified Variational Iteration Method

Authors: M. O. Olayiwola

Abstract:

Numerical solutions of the generalized Burger-Fisher equation are obtained using a Modified Variational Iteration Method (MVIM) with minimal computational effort. The computed results with this technique have been compared with other results. The present method is seen to be a very reliable alternative to some existing techniques for such nonlinear problems.
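For reference, the generalized Burger-Fisher equation treated by such methods is commonly written as:

```latex
u_t + \alpha\, u^{\delta} u_x - u_{xx} = \beta\, u \left( 1 - u^{\delta} \right)
```

where α, β, and δ are parameters: with α = 0 and δ = 1 it reduces to Fisher's equation, and with β = 0 it reduces to a generalized Burgers equation, which is why the combined equation serves as a standard nonlinear benchmark.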

Keywords: burger-fisher, modified variational iteration method, lagrange multiplier, Taylor’s series, partial differential equation

Procedia PDF Downloads 430
1493 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The continuous-time Markov chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
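The ABC rejection idea, simulate from the model and keep parameter draws whose summary statistics land near the observed ones, can be sketched on a toy model with a cheap simulator. The exponential waiting-time model, uniform prior, and tolerance below are illustrative assumptions, not the Repressilator setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for an intractable CTMC: we can simulate data given a rate
# parameter theta, but pretend the likelihood is unavailable. The summary
# statistic is the sample mean of exponential waiting times.
def simulate(theta, n=50):
    return rng.exponential(1.0 / theta, size=n)

observed = simulate(2.0)  # "data" generated at true theta = 2
s_obs = observed.mean()

def abc_rejection(n_draws=20000, tol=0.02):
    """ABC rejection: draw theta from the prior, simulate, keep draws whose
    summary statistic falls within tol of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(0.1, 5.0)  # prior
        if abs(simulate(theta).mean() - s_obs) < tol:
            accepted.append(theta)
    return np.array(accepted)

post = abc_rejection()
print(len(post), post.mean())  # accepted draws concentrate near theta = 2
```

The computational expense the abstract mentions is visible even here: every proposed parameter costs one full model simulation, and most proposals are rejected.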

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
1492 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations bring new light on the cone behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
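The shear-thinning behavior assigned to the foam can be sketched with the Herschel-Bulkley law τ = τ₀ + K·γ̇ⁿ, whose apparent viscosity falls as the cone spins faster. The parameter values below are illustrative, not fitted foam data:

```python
def herschel_bulkley_stress(gamma_dot, tau0=10.0, k=5.0, n=0.4):
    """Shear stress tau = tau0 + K * gamma_dot**n for a yielded
    Herschel-Bulkley fluid (parameters are illustrative foam-like values)."""
    return tau0 + k * gamma_dot**n

def apparent_viscosity(gamma_dot, **params):
    """mu_app = tau / gamma_dot: decreases with shear rate when n < 1,
    i.e. the fluid is shear thinning."""
    return herschel_bulkley_stress(gamma_dot, **params) / gamma_dot

# Apparent viscosity drops steeply as the local shear rate rises:
for g in (1.0, 10.0, 100.0):
    print(g, apparent_viscosity(g))
```

This is why a rotating cone that imposes high local shear rates can thin the foam far more effectively than a Newtonian model would predict.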

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 205
1491 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System

Authors: Nicolas M. Beleski, Gustavo A. G. Lugo

Abstract:

Robotic agents are taking more and increasingly important roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown to be considerably complex and convoluted. This growth in complexity has led recent researchers to investigate forms to explain the AI behavior behind these systems in search for more trustworthy interactions. A current problem in explainable AI is the inner workings with the logic inference process and how to conduct a sensibility analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to ease the intentionality gap and to achieve that we should be able to infer over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired in human cognition is the role emotions play on our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios emotions can make a person wrongly assume the antecedent P for an observed consequent Q, and as a result, incorrectly assert that P is true. This form of cognitive dissonance where an unproven cause is taken as truth induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thoughts in order to apply levels of theory of mind to these artificial agents, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulation effect emotions have on the belief system and how it affects logic inference process and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development. 
The hypothesis to be tested involves two main points: how emotions, modeled as internal argument strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
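The core idea, an emotion acting as a multiplier on the strength of the abductive argument "Q was observed, P explains Q, so P", can be caricatured in a few lines. Everything here (rule, names, gain, threshold) is an illustrative assumption, not the authors' architecture:

```python
# Toy sketch: an emotion gain modulates argument strength; above a threshold the
# agent dissonantly adopts the unproven antecedent P of an observed consequent Q.
RULES = {("opponent_cooperates", "opponent_smiled"): 0.5}   # (P, Q): base strength

def infer(belief_base, emotion_gain, threshold=0.7):
    updated = dict(belief_base)
    for (p, q), strength in RULES.items():
        if updated.get(q):                       # consequent Q was observed
            if strength * emotion_gain >= threshold:
                updated[p] = True                # affirming the consequent
    return updated

calm = infer({"opponent_smiled": True}, emotion_gain=1.0)     # no dissonant belief
anxious = infer({"opponent_smiled": True}, emotion_gain=1.6)  # adopts unproven P
```

A sensitivity analysis would then vary the gain and trace which beliefs flip.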

Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind

Procedia PDF Downloads 132
1490 A Multistep Broyden’s-Type Method for Solving Systems of Nonlinear Equations

Authors: M. Y. Waziri, M. A. Aliyu

Abstract:

The paper proposes an approach to improving the performance of Broyden's method for solving systems of nonlinear equations. In this work, we consider information from the two preceding iterates, rather than a single preceding iterate, to update the Broyden matrix, thereby producing a better approximation of the Jacobian matrix at each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden's method.
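For reference, the classical single-step ("good") Broyden iteration that the proposed method builds on can be sketched as below; the multistep update itself reuses two preceding iterates and is not reproduced here. The test system and starting point are illustrative:

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    # forward-difference Jacobian used to seed the Broyden matrix
    n = len(x)
    J = np.empty((n, n))
    f0 = F(x)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (F(x + e) - f0) / h
    return J

def broyden(F, x0, tol=1e-10, maxit=100):
    # classical rank-one secant update: B += (df - B dx) dx^T / (dx^T dx)
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(maxit):
        dx = np.linalg.solve(B, -f)
        x = x + dx
        f_new = F(x)
        if np.linalg.norm(f_new) < tol:
            break
        df = f_new - f
        B += np.outer(df - B @ dx, dx) / (dx @ dx)
        f = f_new
    return x

# Example system: x^2 + y^2 = 2, x = y, with a root at (1, 1)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
root = broyden(F, [1.5, 1.2])
```

The advantage of the update is that no new Jacobian evaluations are needed after the seed.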

Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate

Procedia PDF Downloads 638
1489 Numerical Evolution Methods of Rational Form for Diffusion Equations

Authors: Said Algarni

Abstract:

The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative approach that involves rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
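As a concrete illustration (not the authors' modified scheme), the (1,1) Padé approximant of the matrix exponential, exp(dt*A) ~ (I - dt*A/2)^(-1) (I + dt*A/2), gives the A-stable Crank-Nicolson step for the semi-discretized diffusion equation u_t = D u_xx:

```python
import numpy as np

# 1-D diffusion on [0, 1] with zero Dirichlet ends, finite differences in space,
# (1,1) Pade (Crank-Nicolson) rational time stepping. Parameters are illustrative.
D, n, dt, steps = 1.0, 50, 1e-3, 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = D / h**2 * (np.diag(-2.0 * np.ones(n))
                + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
I = np.eye(n)
step = np.linalg.solve(I - dt / 2 * A, I + dt / 2 * A)   # rational propagator

u = np.sin(np.pi * x)              # eigenmode: decays like exp(-D*pi^2*t)
for _ in range(steps):
    u = step @ u

exact = np.exp(-D * np.pi**2 * dt * steps) * np.sin(np.pi * x)
err = np.max(np.abs(u - exact))
```

Higher-order diagonal Padé approximants generalize this while keeping A-stability.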

Keywords: Padé time stepping, finite difference, reaction diffusion equation, PDEs

Procedia PDF Downloads 299
1488 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control device with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impacts of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many papers have shown experimentally that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the granular particles behave identically in each divided area of the damper container, the contact force of the primary system with all particles can be taken as the product of the number of divided areas and the contact force of the primary system with the granular materials per divided area. This makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
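The spring-dashpot contact model at the heart of such DEM codes can be sketched for a single particle-wall impact. All parameters below are illustrative; the dashpot coefficient is chosen by the standard linear-model formula to hit a target coefficient of restitution:

```python
import math

# Linear spring-dashpot DEM contact: during overlap delta > 0 the normal force
# is F = k*delta + c*d(delta)/dt. For the linear model, the dashpot coefficient
# giving restitution e is c = -2*ln(e)*sqrt(m*k / (pi^2 + ln(e)^2)).
m, k = 1e-3, 1e4                       # particle mass [kg], stiffness [N/m]
e_target = 0.8
c = -2 * math.log(e_target) * math.sqrt(m * k / (math.pi**2 + math.log(e_target)**2))

# Semi-implicit Euler integration of one particle hitting a wall at x = 0.
x, v, dt = 0.0, -1.0, 1e-6             # incoming speed 1 m/s
while True:
    delta = max(-x, 0.0)               # overlap with the wall
    F = k * delta - c * v if delta > 0 else 0.0
    v += F / m * dt
    x += v * dt
    if x > 0 and v > 0:                # rebounded and left contact
        break
restitution = v / 1.0                  # should recover ~e_target
```

A full damper simulation repeats this force law for every particle-particle and particle-wall pair at every time step, which is exactly the cost the proposed area-based grouping reduces.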

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 453
1487 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel

Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler

Abstract:

Fuel cell vehicles have become the most competitive solution for the transportation sector in the hydrogen economy. The Type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing the overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the materials and manufacturing processes by which Type IV pressure vessels are made, design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength, due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup, and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Notably, the model of the composite overwrap is generated automatically via the Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, the different composite layups are simulated as axisymmetric models to reduce the computational complexity and calculation time. Finally, the results are evaluated and compared with respect to the ultimate tank strength. By automatically modeling, evaluating, and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical performance of the pressure vessel is highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and indicate the optimal one. Moreover, this automation can also be used to create a data bank of layups and their corresponding mechanical properties with only a few preliminary configuration steps for further case analysis; machine learning could then, for example, extract the optimum directly from the data pool without further simulation.
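A common analytical basis for the winding-angle step is geodesic (Clairaut) winding, sketched below with purely illustrative dimensions; the paper's exact formulas are not given, so this is only the textbook relation r*sin(alpha) = r_polar combined with fiber-volume conservation for the thickness build-up:

```python
import math

# Geodesic winding on a dome: along a surface of revolution the winding angle
# alpha at radius r satisfies r*sin(alpha) = r_polar (Clairaut), and fiber-volume
# conservation thickens the layer toward the polar opening:
#   t(r) ~ t_cyl * (R*cos(alpha_cyl)) / (r*cos(alpha))
R, r_polar, t_cyl = 0.20, 0.05, 0.001   # cylinder radius, polar opening, thickness [m]
alpha_cyl = math.asin(r_polar / R)      # winding angle on the cylindrical section

def winding(r):
    alpha = math.asin(min(r_polar / r, 1.0))
    t = t_cyl * (R * math.cos(alpha_cyl)) / (r * math.cos(alpha))
    return math.degrees(alpha), t

angle_cyl, t_at_cyl = winding(0.20)     # ~14.5 deg, thickness equal to t_cyl
angle_dome, t_dome = winding(0.07)      # steeper angle, thicker layer near the pole
```

Note the formula diverges at r = r_polar; real winding codes smooth the thickness in a band around the polar opening.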

Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process

Procedia PDF Downloads 135
1486 RANS Simulation of the LNG Ship Squat in Shallow Water

Authors: Mehdi Nakisa, Adi Maimun, Yasser M. Ahmed, Fatemeh Behrouzi

Abstract:

Squat is the reduction in under-keel clearance between a vessel at rest and underway, due to the increased flow of water past the moving body. The forward motion of the ship induces a relative velocity between the ship and the surrounding water, which causes a water-level depression into which the ship sinks. Ship squat is one of the crucial factors affecting the navigation of ships in restricted waters. This article investigates LNG ship squat, its effects on the flow streamlines around the ship hull, and the ship's behavior and motion, using computational fluid dynamics as implemented in Ansys Fluent.
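A widely quoted empirical estimate attributed to Barrass offers a quick sanity check on such CFD results: maximum squat of roughly Cb*Vk^2/100 metres in open shallow water (speed in knots), about double in a confined channel. The block coefficient and speed below are illustrative, not the paper's test case:

```python
# Barrass's simplified rule of thumb for maximum squat (metres):
#   open shallow water:  delta_max ~ Cb * Vk^2 / 100   (Vk in knots)
#   confined channel:    roughly twice that
def squat_max(Cb, Vk, confined=False):
    s = Cb * Vk**2 / 100.0
    return 2 * s if confined else s

lng_open = squat_max(Cb=0.78, Vk=12)                  # ~1.1 m bow sinkage
lng_channel = squat_max(Cb=0.78, Vk=12, confined=True)
```

Such rules ignore hull-form detail and blockage ratio, which is precisely what the RANS simulation resolves.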

Keywords: ship squat, CFD, confined water, ship mechanics

Procedia PDF Downloads 620
1485 Computational Analysis of Adaptable Winglets for Improved Morphing Aircraft Performance

Authors: Erdogan Kaygan, Alvin Gatto

Abstract:

An investigation of adaptable winglets for enhancing morphing aircraft performance is described in this paper. The concepts investigated consist of various winglet configurations fundamentally centered on a baseline swept wing. The impetus for the work was to identify and optimize winglets to enhance the aerodynamic efficiency of a morphing aircraft. All computations were performed with Athena Vortex Lattice modelling, with varying degrees of twist and cant angle considered. The results from this work indicate that if adaptable winglets were employed on aircraft, improvements in performance could be achieved.
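The mechanism by which a winglet pays off can be shown with the classical induced-drag relation CDi = CL^2/(pi*AR*e): the winglet effectively raises the span-efficiency factor e. The numbers below are illustrative, not results from the AVL study:

```python
import math

# Induced drag of a finite wing; a winglet acts roughly like extra span,
# raising the span-efficiency factor e. Values are illustrative only.
def induced_drag(CL, AR, e):
    return CL**2 / (math.pi * AR * e)

CL, AR = 0.6, 8.0
baseline = induced_drag(CL, AR, e=0.85)
with_winglet = induced_drag(CL, AR, e=0.95)
reduction = 1 - with_winglet / baseline      # ~10 % less induced drag
```

Twist and cant then trade this induced-drag benefit against the winglet's own profile drag, which is what the vortex-lattice sweep quantifies.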

Keywords: aircraft, drag, twist, winglet

Procedia PDF Downloads 584
1484 3D CFD Modelling of the Airflow and Heat Transfer in Cold Room Filled with Dates

Authors: Zina Ghiloufi, Tahar Khir

Abstract:

A transient three-dimensional computational fluid dynamics (CFD) model is developed to determine the velocity and temperature distributions at different positions in a cold room during the pre-cooling of dates. The turbulence model used is k-ω Shear Stress Transport (SST) with the standard wall function, with air as the working fluid. The numerical results show that the cooling rate is not uniform inside the room; the product in the middle of the room has a slower cooling rate. This cooling heterogeneity has a large effect on the energy consumption during cold storage.

Keywords: CFD, cold room, cooling rate, dates, numerical simulation, k-ω (SST)

Procedia PDF Downloads 235
1483 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. Probably the most interesting are the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of metal component inside, expressed as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, until finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, composite structures were studied with the methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films, using techniques based on hard-sphere or soft-sphere models as well as on atomistic modelling. Characterization of the prepared composite structures by image analysis of their sections or projections then follows. However, the various morphological methods must themselves be analysed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structure of the conducting paths in them, depending on the technology of the composite films.
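The hard-sphere generation step can be sketched as random sequential adsorption of disks in a 2-D film section; the radius, attempt count, and neglect of edge effects are simplifying assumptions, not the authors' setup:

```python
import math
import random

# Random sequential adsorption (hard-sphere) sketch of a 2-D composite section:
# metal disks of radius r are dropped at random positions in the unit square and
# rejected on overlap; the filling factor is the covered area fraction.
def generate_film(n_attempts, r=0.03, seed=1):
    random.seed(seed)
    disks = []
    for _ in range(n_attempts):
        x, y = random.random(), random.random()
        if all((x - a)**2 + (y - b)**2 >= (2 * r)**2 for a, b in disks):
            disks.append((x, y))
    filling = len(disks) * math.pi * r**2       # ignores edge effects
    return disks, filling

disks, filling = generate_film(2000)
```

Image analysis and kinetic Monte Carlo transport would then run on structures like `disks`; for disks, RSA saturates near a jamming coverage of about 0.55.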

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 393
1482 Time Domain Dielectric Relaxation Microwave Spectroscopy

Authors: A. C. Kumbharkhane

Abstract:

Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a technique for observing the time-dependent response of a sample after the application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; the TDRS technique therefore covers an extensive range of dynamical processes, with corresponding frequencies from 10^-4 Hz to 10^12 Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established in our laboratory a TDR technique that provides the dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying dielectric relaxation behaviour in liquid systems to understand the role of the hydrogen bond. Intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied, and the theoretical model explaining the experimental results will be discussed.
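The complex permittivity spectrum extracted from such measurements is often fitted with a single Debye relaxation. The sketch below evaluates it over roughly the 10 MHz-30 GHz window mentioned above, using typical literature values for water (not measurements from this work):

```python
import numpy as np

# Single Debye relaxation: eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + i*w*tau).
# Parameters are typical of water near 25 C (illustrative, not measured here).
eps_s, eps_inf, tau = 78.4, 5.0, 8.3e-12    # static/infinite permittivity, tau [s]

def debye(f_hz):
    w = 2 * np.pi * np.asarray(f_hz)
    return eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau)

f = np.logspace(7, 10.5, 200)               # ~10 MHz .. ~30 GHz
eps = debye(f)
f_peak = f[np.argmax(-eps.imag)]            # loss peak near 1/(2*pi*tau)
```

The dielectric loss (-Im eps) peaks at f = 1/(2*pi*tau), which is how tau is read off a measured spectrum.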

Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time

Procedia PDF Downloads 336
1481 Unveiling the Reaction Mechanism of N-Nitroso Dimethyl Amine Formation from Substituted Hydrazine Derivatives During Ozonation: A Computational Study

Authors: Rehin Sulay, Anandhu Krishna, Jintumol Mathew, Vibin Ipe Thomas

Abstract:

N-Nitrosodimethylamine (NDMA), the simplest member of the N-nitrosamine family, is a carcinogenic and mutagenic agent that has gained considerable research interest owing to its toxic nature. Ozonation of industrially important hydrazines such as unsymmetrical dimethylhydrazine (UDMH) or monomethylhydrazine (MMH) has been associated with NDMA formation and accumulation in the environment. UDMH/MMH ozonation also leads to several other transformation products, such as acetaldehyde dimethylhydrazone (ADMH), tetramethyltetrazene (TMT), diazomethane, and methyldiazene, which can be either precursors of or competitors for NDMA formation. In this work, we explored the formation mechanisms of ADMH and TMT from UDMH ozonation and their further oxidation to NDMA, using second-order Møller-Plesset perturbation theory with the 6-311G(d) basis set. We also investigated how MMH selectively forms methyldiazene and diazomethane under normal conditions, and NDMA in the presence of excess ozone. Our calculations indicate that the reactions proceed via an initial H abstraction from the hydrazine -NH2 group, followed by oxidation of the generated N-radical species. The formation of ADMH from the UDMH-ozone reaction involves an acetaldehyde intermediate, which then reacts with a second UDMH molecule to generate ADMH. The preferential attack of the ozone molecule on the N=C bond of ADMH generates a DMAN intermediate, which subsequently undergoes oxidation to form NDMA. Unlike the other transformation products, TMT forms via the dimerization of DMAN. Although TMT contains N=N bonds, which are preferred attack sites for ozone, experimental studies show low yields of NDMA, consistent with the high activation barrier required for the process (42 kcal/mol). Overall, our calculated results agree well with the experimental observations and rate constants.
The computations bring insight into the electronic nature and kinetics of the elementary reactions along this pathway, enabled by computed energies of structures that are not experimentally accessible.
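To connect a computed barrier to a rate, transition-state theory's Eyring equation can be applied. The sketch uses the 42 kcal/mol barrier quoted above and a hypothetical 10 kcal/mol step for contrast; it does not reproduce the paper's actual rate constants:

```python
import math

# Eyring equation: k = (kB*T/h) * exp(-dG / (R*T)), barrier in kcal/mol.
kB, h = 1.380649e-23, 6.62607015e-34   # J/K, J*s
R = 1.987204e-3                        # kcal/(mol*K)
T = 298.15

def eyring(dG_kcal):
    return (kB * T / h) * math.exp(-dG_kcal / (R * T))

k_tmt = eyring(42.0)   # ozone attack on TMT's N=N bond: vanishingly slow
k_low = eyring(10.0)   # hypothetical facile step, for scale
```

The ~42 kcal/mol barrier yields a rate constant far below anything observable at room temperature, consistent with the low NDMA yield from TMT.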

Keywords: reaction mechanism, ozonation, substituted hydrazine, transition state

Procedia PDF Downloads 82
1480 Analysis of the Secondary Stationary Flow Around an Oscillating Circular Cylinder

Authors: Artem Nuriev, Olga Zaitseva

Abstract:

This paper is devoted to the study of a viscous incompressible flow around a circular cylinder performing harmonic oscillations, especially the steady streaming phenomenon. The research methodology is based on the asymptotic expansion method combined with computational bifurcation analysis. The present study identifies several regimes of the secondary streaming with different flow structures. The results of the research are in good agreement with experimental and numerical simulation data.

Keywords: oscillating cylinder, secondary streaming, flow regimes, asymptotic and bifurcation analysis

Procedia PDF Downloads 435
1479 Study of the Design and Simulation Work for an Artificial Heart

Authors: Mohammed Eltayeb Salih Elamin

Abstract:

This study discusses the concept of the artificial heart using engineering principles of fluid mechanics and the characteristics of non-Newtonian fluids. The purpose is to serve heart patients and improve aspects of their lives: according to the World Health Organization (WHO), diseases of the heart and blood vessels are the leading cause of death in the world, accounting for about 30% of deaths, so heart failure can simply be considered the number one cause of death worldwide. Since heart transplantation has become very difficult and is not always available, the idea of the artificial heart has become essential, and it is important to participate in developing this idea by finding the weak points of earlier designs and improving on them for the benefit of humanity. In this study, a pump was designed to deliver blood to the human body, taking into account all the factors that would allow it to replace the human heart, i.e., to work with the same characteristics and efficiency as the human heart. The pump design is based on the diaphragm pump. Three models of blood were obtained from real blood characteristics, and all of them were simulated in order to study the effect of the pumping work on the fluid. We then studied the properties of this pump using Ansys 15 software to simulate the blood flow inside the pump and the stresses it is subjected to. The 3D geometry modeling was done in SolidWorks, and the geometries were then imported into the Ansys Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is Ansys Fluent, a tool for analyzing fluid flow problems; the general term for this branch of science is computational fluid dynamics (CFD).
The Design Modeler is used during the pre-processing procedure, a crucial step before solving the fluid flow problem. The key operations are geometry creation, which specifies the domain of the fluid flow problem; mesh generation, i.e., discretization of the domain so that the governing equations can be solved in each cell; and specification of the boundary zones where the boundary conditions of the problem are applied. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation.
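One common way to model blood's non-Newtonian (shear-thinning) behavior in such CFD studies is the Carreau viscosity law. The parameter values below are typical literature numbers for blood, not the three models used in this study:

```python
# Carreau model for blood's shear-thinning apparent viscosity:
#   mu(g) = mu_inf + (mu0 - mu_inf) * (1 + (lam*g)^2)^((n-1)/2)
# Typical literature parameters for blood (illustrative, not from this study):
mu0, mu_inf = 0.056, 0.00345        # zero/infinite-shear viscosity [Pa*s]
lam, n = 3.313, 0.3568              # relaxation time [s], power-law index

def mu(gamma_dot):
    return mu_inf + (mu0 - mu_inf) * (1 + (lam * gamma_dot)**2) ** ((n - 1) / 2)

low = mu(0.1)       # near-stagnant regions: high apparent viscosity
high = mu(1000.0)   # high shear near the diaphragm/valves: much thinner
```

In Fluent this corresponds to selecting a non-Newtonian viscosity model for the blood material rather than a constant viscosity.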

Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump

Procedia PDF Downloads 459
1478 Problems of Boolean Reasoning Based Biclustering Parallelization

Authors: Marcin Michalak

Abstract:

Biclustering is a form of two-dimensional data analysis. Some years ago it became possible to express this problem in terms of Boolean reasoning, for processing continuous, discrete, and binary data. The mathematical background of the approach, namely the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria, is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In the paper, the basics of the Boolean reasoning approach to biclustering are presented, and in this context the problems of parallelizing the computation are raised.
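To make the target object concrete, the sketch below brute-forces all inclusion-maximal all-ones biclusters of a tiny binary matrix; the exponential enumeration is exactly the cost that the prime-implicant formulation and its parallelization address (this is an illustration, not the paper's algorithm):

```python
from itertools import combinations

# Inclusion-maximal all-ones biclusters of a binary matrix, by brute force.
M = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
n_r, n_c = len(M), len(M[0])

def all_ones(R, C):
    return all(M[r][c] for r in R for c in C)

cands = [(frozenset(R), frozenset(C))
         for i in range(1, n_r + 1) for R in combinations(range(n_r), i)
         for j in range(1, n_c + 1) for C in combinations(range(n_c), j)
         if all_ones(R, C)]
# keep only biclusters with no strictly larger all-ones bicluster containing them
maximal = [(R, C) for R, C in cands
           if not any(R <= R2 and C <= C2 and (R, C) != (R2, C2)
                      for R2, C2 in cands)]
```

For this matrix there are four maximal biclusters, e.g. rows {0,1} with columns {0,1}.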

Keywords: Boolean reasoning, biclustering, parallelization, prime implicant

Procedia PDF Downloads 125
1477 Bee Colony Optimization Applied to the Bin Packing Problem

Authors: Kenza Aida Amara, Bachir Djebbar

Abstract:

We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that makes use of the lost areas in the bins and reduces the problem size. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
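A first-fit heuristic for the oriented 2-D problem can be illustrated with a simple shelf-based sketch; the bin size and rectangle list are arbitrary, and this is not the authors' adapted heuristic or pretreatment:

```python
# Shelf-based first-fit for oriented 2-D bin packing: rectangles are placed
# left-to-right on shelves; a new shelf, then a new bin, opens only when
# nothing fits. No rotation is allowed (oriented version).
BIN_W, BIN_H = 10, 10

def pack(rects):
    bins = []                          # each bin: list of shelves [x_used, shelf_h]
    for w, h in rects:
        placed = False
        for shelves in bins:
            for shelf in shelves:
                if shelf[0] + w <= BIN_W and h <= shelf[1]:
                    shelf[0] += w      # first fit on an existing shelf
                    placed = True
                    break
            if not placed and h <= BIN_H - sum(s[1] for s in shelves):
                shelves.append([w, h]) # open a new shelf in this bin
                placed = True
            if placed:
                break
        if not placed:
            bins.append([[w, h]])      # open a new bin
    return len(bins)

n_bins = pack([(5, 4), (5, 4), (5, 4), (5, 4), (10, 2), (3, 3)])
```

A bee colony optimizer would then perturb the packing order, using a heuristic like this to decode each candidate ordering into a bin count.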

Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment

Procedia PDF Downloads 633
1476 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor

Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez

Abstract:

Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, three-dimensional computational fluid dynamics (CFD) simulations of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process are analyzed numerically using Ansys Fluent. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds as a function of microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water and butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation takes these rheological results into account and models the full-scale CSTR biodigester, coupling the continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° pitched-blade impeller. The simulation covers three minutes, since the aim is to study intermittent mixing and its energy savings. The results show that the absolute errors of the power number for the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The generalized Reynolds number shows that the fluid dynamics lie in the transitional-turbulent flow regime. The Froude number indicates that there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5.
The design speeds within the biodigester are approximately 0.1 m/s, which is suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the model that most accurately predicts the fluid dynamics within the reactor is the k-ε Realizable model. The flow paths obtained are consistent with the referenced literature, in which the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. If a 24/7 complete mix under stirred agitation is considered, with a plant factor of 80%, 51,840 kWh/year are estimated; by contrast, intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. The model is thus a feasible tool for predicting the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase: high-intensity mixing at the start activates the bacteria, and a second increase at the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the bottom of the biodigester.
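One standard way to evaluate a generalized (impeller) Reynolds number for a power-law fluid is the Metzner-Otto approach; the sketch below also reproduces the intermittent-mixing energy arithmetic quoted above. The rheological values are illustrative for a pseudoplastic sludge, not the measured ones:

```python
# Metzner-Otto estimate of the generalized impeller Reynolds number for a
# power-law fluid (illustrative rheology, not the paper's measurements).
K, n = 5.0, 0.45               # consistency [Pa*s^n], flow index (pseudoplastic)
rho, D, N = 1000.0, 1.5, 1.0   # density [kg/m3], impeller diameter [m], speed [rev/s]
ks = 11.0                      # Metzner-Otto constant, typical for turbine impellers

gamma_eff = ks * N                     # effective shear rate [1/s]
mu_app = K * gamma_eff ** (n - 1)      # apparent viscosity [Pa*s]
Re_gen = rho * N * D**2 / mu_app       # transitional-turbulent for these values

# Intermittent agitation: 3 min of mixing every 15 min -> 20 % duty cycle.
continuous_kwh = 51840.0               # 24/7 mixing at 80 % plant factor (from text)
intermittent_kwh = continuous_kwh * (3 / 15)
saving = 1 - intermittent_kwh / continuous_kwh   # -> 0.8, i.e. ~80 % less energy
```

The 3/15 duty cycle directly yields the ~80 % cost reduction cited in the abstract.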

Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste

Procedia PDF Downloads 114
1475 Grammar as a Logic of Labeling: A Computer Model

Authors: Jacques Lamarche, Juhani Dickinson

Abstract:

This paper introduces a computational model of Grammar as a Logic of Labeling (GLL), in which the lexical primitives of morphosyntax are phonological matrixes, the forms of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, within a complex (constituent) label it is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring: every grammatical constituent has a head label that determines the target of the constituent, and a limiter label (the non-head) that restricts this target. The N and A values are restricted to the limiter label, the two differing in terms of alignment with the head. Consider the head-initial DP ‘the dog’: the label ‘dog’ gets an N value because it is a limiter evenly aligned with the head ‘the’, restricting the application of the DP. Adapting a traditional analysis of ‘the’ to GLL (apply the label to something familiar), the DP targets and identifies one reality familiar to the participants by applying to it the label ‘dog’ (singular). Consider next the DP ‘the large dog’: ‘large dog’ is nominal by even alignment with ‘the’, as before, and since ‘dog’ is the head of (head-final) ‘large dog’, it is also nominal.
The label ‘large’, however, is adjectival by narrow alignment with the head ‘dog’: it does not target the head but a property of what ‘dog’ applies to (a property or attribute value). In other words, the internal composition of constituents determines whether a form targets a property or a reality: ‘large’ and ‘dog’ happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the eight possible sequences of grammatical values with three labels after the determiner (the x y z): 1) D [ N [ N N ] ]; 2) D [ A [ N N ] ]; 3) D [ N [ A N ] ]; 4) D [ A [ A N ] ]; 5) D [ [ N N ] N ]; 6) D [ [ A N ] N ]; 7) D [ [ N A ] N ]; 8) D [ [ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge from speakers’ judgments about the validity of lexical meaning in grammatical patterns.

Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar

Procedia PDF Downloads 38