Search results for: Chebyshev collocation method (CCA)
29 Formulation of Corrector Methods from 3-Step Hybrid Adams Type Methods for the Solution of First Order Ordinary Differential Equations
Authors: Y. A. Yahaya, Ahmad Tijjani Asabe
Abstract:
This paper focuses on the formulation of a 3-step hybrid Adams type method for the solution of first order ordinary differential equations (ODEs). The methods were derived on both grid and off-grid points using multistep collocation schemes and were evaluated at selected points to produce a block Adams type method and an Adams-Moulton method, respectively. The method with the highest order was selected to serve as the corrector. The convergence analysis showed the methods to be valid and efficient. Numerical experiments were carried out and reveal that the hybrid Adams type methods perform better than the conventional Adams-Moulton method.
Keywords: Adams-Moulton type (AMT), corrector method, off-grid, block method, convergence analysis
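As a rough illustration of the predictor-corrector idea behind Adams type methods (not the authors' hybrid block scheme), the sketch below pairs a classical 3-step Adams-Bashforth predictor with an Adams-Moulton corrector; the test equation, step size and RK4 start-up are arbitrary illustrative choices.

```python
import numpy as np

def f(t, y):
    # illustrative test ODE y' = -2y + t
    return -2.0 * y + t

def abm3_predictor_corrector(f, t0, y0, h, n_steps):
    """Classical 3-step Adams-Bashforth predictor with Adams-Moulton corrector;
    the first two steps are bootstrapped with RK4."""
    t = t0 + h * np.arange(n_steps + 1)
    y = np.zeros(n_steps + 1)
    y[0] = y0
    for n in range(2):                       # RK4 start-up values
        k1 = f(t[n], y[n])
        k2 = f(t[n] + h / 2, y[n] + h / 2 * k1)
        k3 = f(t[n] + h / 2, y[n] + h / 2 * k2)
        k4 = f(t[n] + h, y[n] + h * k3)
        y[n + 1] = y[n] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    for n in range(2, n_steps):
        fn, fn1, fn2 = f(t[n], y[n]), f(t[n - 1], y[n - 1]), f(t[n - 2], y[n - 2])
        # predictor: 3-step Adams-Bashforth
        yp = y[n] + h / 12 * (23 * fn - 16 * fn1 + 5 * fn2)
        # corrector: Adams-Moulton, with f evaluated at the predicted point
        y[n + 1] = y[n] + h / 24 * (9 * f(t[n + 1], yp) + 19 * fn - 5 * fn1 + fn2)
    return t, y

t, y = abm3_predictor_corrector(f, 0.0, 1.0, 0.05, 40)
print(y[-1])
```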
Procedia PDF Downloads 626
28 Cubic Trigonometric B-Spline Approach to Numerical Solution of Wave Equation
Authors: Shazalina Mat Zin, Ahmad Abd. Majid, Ahmad Izani Md. Ismail, Muhammad Abbas
Abstract:
The generalized wave equation models various problems in science and engineering. In this paper, a new three-time-level implicit approach based on cubic trigonometric B-splines is developed for the approximate solution of the wave equation. The usual finite difference approach is used to discretize the time derivative, while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Von Neumann stability analysis is used to analyze the proposed method. Two problems are discussed to exhibit the feasibility and capability of the method. The absolute errors and maximum error are computed to assess the performance of the proposed method. The results were found to be in good agreement with known solutions and with existing schemes in the literature.
Keywords: collocation method, cubic trigonometric B-spline, finite difference, wave equation
Procedia PDF Downloads 541
27 A Source Point Distribution Scheme for Wave-Body Interaction Problem
Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing
Abstract:
A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost. Associated with this, a new treatment is given to the intersection point. Results from the present scheme are in good agreement with traditional numerical results and measurements.
Keywords: source point distribution, panel method, Rankine source, desingularized algorithm
Procedia PDF Downloads 365
26 Pattern Synthesis of Nonuniform Linear Arrays Including Mutual Coupling Effects Based on Gaussian Process Regression and Genetic Algorithm
Authors: Ming Su, Ziqiang Mu
Abstract:
This paper proposes a synthesis method for nonuniform linear antenna arrays that combines Gaussian process regression (GPR) and a genetic algorithm (GA). In this method, the GPR model is used to calculate the array radiation pattern in the presence of mutual coupling effects, and the GA is then used to optimize the excitations and locations of the elements so as to generate the desired radiation pattern. Taking a 9-element nonuniform linear array as an example, with the desired radiation pattern corresponding to a Chebyshev distribution as the optimization objective, the excitations and locations of the elements are optimized. Finally, the optimization results are verified with the electromagnetic simulation software CST, which shows that the method is effective.
Keywords: nonuniform linear antenna arrays, GPR, GA, mutual coupling effects, active element pattern
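As a hedged sketch of the Chebyshev pattern objective mentioned above (the GPR surrogate and GA optimizer are not reproduced), the snippet below generates a Dolph-Chebyshev excitation for a 9-element array with scipy and evaluates the resulting array factor; the -30 dB sidelobe level and half-wavelength spacing are assumed values.

```python
import numpy as np
from scipy.signal.windows import chebwin

N = 9            # number of elements, as in the abstract
sll_db = 30      # assumed sidelobe attenuation for the Chebyshev taper (dB)
d = 0.5          # assumed element spacing in wavelengths

w = chebwin(N, at=sll_db)                    # Dolph-Chebyshev excitation amplitudes
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
n = np.arange(N)

# array factor of a broadside, uniformly spaced linear array with taper w
af = np.abs(np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), n)) @ w)
af_db = 20 * np.log10(af / af.max())

print("normalized excitations:", np.round(w / w.max(), 3))
print("pattern level 30 degrees off broadside:",
      round(af_db[np.argmin(np.abs(theta - np.radians(30)))], 1), "dB")
```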
Procedia PDF Downloads 109
25 A Corpus-Based Discourse Analysis of the Disappearance of MH370 in Malaysia and United Kingdom Newspapers: A Pilot Study
Authors: Theng Theng Ong
Abstract:
This pilot study adopts a corpus-based discourse analysis to explore the construction of the Malaysian airline tragedy MH370 in selected Malaysian and United Kingdom (UK) newspapers. Fairclough's three-dimensional model is adopted in the study to support the corpus-based analysis. The analysis aims to determine the ways in which the Malaysian Airline tragedy MH370 is linguistically defined and constructed in terms of keywords and collocation. The study also seeks to identify the types of discourse that are presented in the news articles. In addition, the differences or similarities in terms of keywords, topics or issues covered by the selected Malaysian and UK news media are examined.
Keywords: corpus, CDA, newspapers, airline tragedies
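To make the keyword and collocation terminology concrete, a minimal illustration on toy sentences is given below; it uses a crude relative-frequency keyness score and a +/-2 word collocation window, not the authors' corpus or Fairclough's three-dimensional model.

```python
from collections import Counter

target_corpus = "the missing plane mh370 search continues as the search area widens".split()
reference_corpus = "the weather today is fine and the market closed higher".split()

# relative-frequency difference as a crude keyness indicator
tgt, ref = Counter(target_corpus), Counter(reference_corpus)
keyness = {w: tgt[w] / len(target_corpus) - ref[w] / len(reference_corpus) for w in tgt}
keywords = sorted(keyness, key=keyness.get, reverse=True)[:3]
print("candidate keywords:", keywords)

# collocates of a node word within a +/-2 word window
node, window = "search", 2
collocates = Counter()
for i, w in enumerate(target_corpus):
    if w == node:
        span = target_corpus[max(0, i - window):i] + target_corpus[i + 1:i + 1 + window]
        collocates.update(span)
print("collocates of", node, ":", collocates.most_common(3))
```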
Procedia PDF Downloads 300
24 The EFL Mental Lexicon: Connectivity and the Acquisition of Lexical Knowledge Depth
Authors: Khalid Soussi
Abstract:
The study at hand attempts to describe the acquisition of three EFL lexical knowledge aspects - meaning, synonymy and collocation - across three academic levels: Baccalaureate, second year and fourth year university levels in Morocco. The research also compares the development of the three lexical knowledge aspects between knowledge (reception) and use (production) and attempts to trace their order of acquisition. This has led to the use of three main data collection tasks: translation, acceptability judgment and multiple choice. The study has revealed the following findings. First, the L1 and EFL mental lexicons are connected at the level of lexical knowledge depth. Second, such connection is active whether in language reception or use. Third, the connectivity between the L1 and EFL mental lexicons tends to decrease relatively as the academic level of the learners increases. Finally, the research has revealed a significant 'order' of acquisition among the three lexical aspects, though not a very strong one.
Keywords: vocabulary acquisition, EFL lexical knowledge, mental lexicon, vocabulary knowledge depth
Procedia PDF Downloads 283
23 Design of Control System Based on PLC and KingView for Granulation Product Line
Authors: Mei-Feng, Yude-Fan, Min-Zhu
Abstract:
Based on PLC and KingView, this paper presents the design of an automatic control system for a granulation product line according to its process flow and requirements. The PLC system consists of a main station and subordinate stations which communicate over a PROFIBUS network, while the PLC and the computer communicate over an Ethernet network. The human-machine interface is realized with the KingView software, including real-time process flow displays, historical report curves and product report forms. The structure of the control system, the hardware configuration and the software design are introduced. In addition, frequency conversion control over the PROFIBUS network, the main difficulties and the configuration software design are elaborated. The operating results show several advantages of the control system: a high degree of automation, complete functionality, good stability and convenient operation.
Keywords: PLC, PROFIBUS, configuration, frequency
Procedia PDF Downloads 402
22 Step Height Calibration Using Hamming Window: Band-Pass Filter
Authors: Dahi Ghareab Abdelsalam Ibrahim
Abstract:
Calibration of step heights with high accuracy is needed for many applications in industry. In general, the spectrum of a step height consists of three bands: a pass band, a transition band (roll-off), and a stop band. Abdelsalam used a convolution of the transfer functions of both Chebyshev type 2 and elliptic filters with the WFF of the Fresnel transform in the frequency domain to produce a steeper roll-off and to remove ripples in the pass and stop bands. In this paper, we use a new method based on a Hamming window band-pass filter for the calibration of step heights in terms of perfect adjustment of the pass band, roll-off, and stop band. The method is applied to calibrate a nominal step height of 40 cm. The step height is measured first by asynchronous dual-wavelength phase-shift interferometry. The measured step height is then calibrated by simulation of the Hamming window band-pass filter. The spectrum of the simulated band-pass filter is computed at N = 881 and f0 = 0.24. We conclude that the proposed method can calibrate any step height by adjusting only two factors, N and f0.
Keywords: optical metrology, step heights, Hamming window, band-pass filter
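A minimal sketch of a Hamming-window band-pass FIR design with the N = 881 and f0 = 0.24 values quoted above is given below; the pass-band half-width is an assumed illustrative value, and scipy's firwin stands in for whatever simulation code the authors used.

```python
import numpy as np
from scipy.signal import firwin, freqz

N = 881        # number of taps, as stated in the abstract
f0 = 0.24      # normalized center frequency (Nyquist = 0.5 here), from the abstract
half_bw = 0.02 # assumed half-bandwidth of the pass band

# band-pass FIR filter designed with a Hamming window
taps = firwin(N, [f0 - half_bw, f0 + half_bw], pass_zero=False,
              window="hamming", fs=1.0)

w, h = freqz(taps, worN=4096, fs=1.0)
passband = (w > f0 - half_bw) & (w < f0 + half_bw)
ripple_db = np.ptp(20 * np.log10(np.abs(h[passband])))
print("pass-band ripple (dB):", round(float(ripple_db), 3))
```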
Procedia PDF Downloads 83
21 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform
Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez
Abstract:
Object detection and object recognition are essential components of every computer vision system. Despite the high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via Fast Fourier Transform (FFT). Notably, this is the first algorithm that can generate ZMs up to extremely high orders accurately, e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is also simpler and faster than the other methods due to the availability of FFT software and/or hardware. The accuracies and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalizing ZMs with Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Astonishingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, compared to the q-recursive method.Keywords: Chebyshev polynomial, fourier transform, fast algorithms, image recognition, pseudo Zernike moments, Zernike moments
Procedia PDF Downloads 265
20 The Construction of Malaysian Airline Tragedies in Malaysian and British Online News: A Multidisciplinary Study
Authors: Theng Theng Ong
Abstract:
This study adopts a multidisciplinary method, combining corpus-based discourse analysis and a language attitude study, to explore the construction of the Malaysian airline tragedies MH370, MH17 and QZ8501 in selected Malaysian and United Kingdom (UK) online news. The study aims to determine the ways in which the tragedies MH370, MH17 and QZ8501 are linguistically defined and constructed in terms of keywords and collocation. The study also seeks to identify the types of discourse that are presented in the news articles. The differences or similarities in terms of keywords, topics or issues covered by the selected Malaysian and UK news media will also be examined. Finally, the language attitude study will be carried out to examine Malaysian and UK university students' attitudes toward the keywords, topics or issues covered by the selected Malaysian and UK news media pertaining to these tragedies. The analysis is divided into two parts, with the first part focusing on corpus-based discourse analysis of the media texts. The second part of the study investigates Malaysian and UK news readers' attitudes towards the online news reported by the Malaysian and UK news media pertaining to the airline tragedies. The main findings of the corpus-based discourse analysis are essential in designing the questions of the questionnaires and interviews and therefore lead to the identification of the attitudes among Malaysian and UK news readers.
Keywords: corpus linguistics, critical discourse analysis, news media, tragedies study
Procedia PDF Downloads 335
19 Flow over an Exponentially Stretching Sheet with Hall and Cross-Diffusion Effects
Authors: Srinivasacharya Darbhasayanam, Jagadeeshwar Pashikanti
Abstract:
This paper analyzes the Soret and Dufour effects on mixed convection flow, heat and mass transfer from an exponentially stretching surface in a viscous fluid with the Hall effect. The governing partial differential equations are transformed into ordinary differential equations using similarity transformations. The nonlinear coupled ordinary differential equations are reduced to a system of linear differential equations using the successive linearization method, and the resulting linear system is then solved using the Chebyshev pseudo-spectral method. The numerical results for the velocity components, temperature and concentration are presented graphically. The obtained results are compared with previously published results and are found to be in excellent agreement. It is observed from the present analysis that the primary and secondary velocities and the concentration increase, while the temperature decreases, with an increase in the value of the Soret parameter. An increase in the Dufour parameter increases both the primary and secondary velocities and the temperature, and decreases the concentration.
Keywords: exponentially stretching sheet, Hall current, heat and mass transfer, Soret and Dufour effects
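Since the abstract leans on the Chebyshev pseudo-spectral method, a small self-contained sketch of a Chebyshev collocation differentiation matrix (Trefethen's classic construction) applied to a simple two-point boundary value problem is shown below; it is not the authors' successive-linearization system.

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and first-derivative collocation matrix."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # negative row sums fix the diagonal
    return D, x

# solve u'' = exp(x) on [-1, 1] with u(-1) = u(1) = 0 as a minimal collocation example
n = 24
D, x = cheb(n)
D2 = D @ D
A = D2[1:-1, 1:-1]                        # drop boundary rows/cols to impose Dirichlet BCs
u_inner = np.linalg.solve(A, np.exp(x[1:-1]))
u = np.concatenate([[0.0], u_inner, [0.0]])
exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print("max error:", np.max(np.abs(u - exact)))
```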
Procedia PDF Downloads 213
18 Thermoelastic Analysis of a Tube Subjected to Internal Heating with Temperature Dependent Material Properties
Authors: Yasemin Kaya, Ahmet N. Eraslan
Abstract:
In this study, the thermoelastic behavior of a long tube is studied by taking into account the temperature dependency of all mechanical and thermal properties. As the tube is heated slowly, an uncoupled solution procedure is adopted under free and radially constrained boundary conditions. The nonlinear heat conduction equation is solved by a finite element collocation procedure and the corresponding distributions of stress and strain are computed by shooting iterations. The computational model is verified in comparison to the analytical solution by shutting down the temperature dependency of physical properties. In the analysis, experimental data available in the literature is used to describe the coefficient of thermal expansion $\alpha$, the thermal conductivity $k$, the modulus of rigidity $G$, the yield strength $\sigma_{0}$, and the Poisson's ratio $\nu$ of Nickel. Results of the analysis are presented in comparison to those having constant physical properties. As a result of the calculations, the temperature dependency of the material properties should be taken into account at higher temperature ranges.Keywords: thermoelasticity, long tube, temperature-dependent properties, internal heating
Procedia PDF Downloads 613
17 Modelling of Aerosols in Absorption Column
Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas cleaning processes. Small mist droplets and the fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence, aerosol-based emissions in the order of grams per Nm3 have been identified from PCCC plants. The model predicts the droplet size, the droplet internal variable profiles, and the mass transfer fluxes as a function of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, the orthogonal collocation method. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and describes how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Below, some preliminary simulation results are given for an aerosol droplet composition and temperature profiles.
Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation
Procedia PDF Downloads 244
16 Grammatical and Lexical Cohesion in the Japan’s Prime Minister Shinzo Abe’s Speech Text ‘Nihon wa Modottekimashita’
Authors: Nadya Inda Syartanti
Abstract:
This research aims to identify, classify, and analyze descriptively the aspects of grammatical and lexical cohesion in the speech text of Japan’s Prime Minister Shinzo Abe entitled Nihon wa Modotte kimashita delivered in Washington DC, the United States on February 23, 2013, as a research data source. The method used is qualitative research, which uses descriptions through words that are applied by analyzing aspects of grammatical and lexical cohesion proposed by Halliday and Hasan (1976). The aspects of grammatical cohesion consist of references (personal, demonstrative, interrogative pronouns), substitution, ellipsis, and conjunction. In contrast, lexical cohesion consists of reiteration (repetition, synonym, antonym, hyponym, meronym) and collocation. Data classification is based on the 6 aspects of the cohesion. Through some aspects of cohesion, this research tries to find out the frequency of using grammatical and lexical cohesion in Shinzo Abe's speech text entitled Nihon wa Modotte kimashita. The results of this research are expected to help overcome the difficulty of understanding speech texts in Japanese. Therefore, this research can be a reference for learners, researchers, and anyone who is interested in the field of discourse analysis.Keywords: cohesion, grammatical cohesion, lexical cohesion, speech text, Shinzo Abe
Procedia PDF Downloads 162
15 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity
Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi
Abstract:
The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental solution-based methods for solving a variety of problems. Both methods are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for boundary element analysis of the examples. The discretization of the problem domain in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; however, in the MFS, obtaining appropriate locations of collocation and source points needs more attention to obtain reliable solutions. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain
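As a hedged illustration of the MFS idea discussed above, the sketch below solves the 2D Laplace equation on the unit disk with sources placed on a fictitious circle outside the domain; the source radius, point counts, and test solution are arbitrary choices, not the configurations studied by the authors.

```python
import numpy as np

# Method of fundamental solutions for the 2D Laplace equation on the unit disk.
# Dirichlet data taken from the harmonic function u(x, y) = x^2 - y^2.
n_col, n_src = 40, 40
theta_c = 2 * np.pi * np.arange(n_col) / n_col
theta_s = 2 * np.pi * (np.arange(n_src) + 0.5) / n_src

col = np.column_stack([np.cos(theta_c), np.sin(theta_c)])        # collocation points on the boundary
src = 1.8 * np.column_stack([np.cos(theta_s), np.sin(theta_s)])  # source points outside the domain

def G(p, q):
    # fundamental solution of the 2D Laplace operator
    r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    return -np.log(r) / (2 * np.pi)

u_exact = lambda p: p[:, 0] ** 2 - p[:, 1] ** 2

coef, *_ = np.linalg.lstsq(G(col, src), u_exact(col), rcond=None)

# check the MFS solution at a few interior points
test = np.array([[0.0, 0.0], [0.3, 0.4], [-0.5, 0.2]])
u_mfs = G(test, src) @ coef
print("max interior error:", np.max(np.abs(u_mfs - u_exact(test))))
```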
Procedia PDF Downloads 90
14 Towards a Rigorous Analysis for a Supercritical Particulate Process
Authors: Yousef Bakhbakhi
Abstract:
Crystallization with supercritical fluids (SCFs), a developed technology to produce particles of micron and sub-micron size with narrow size distribution, has gained appreciable importance as an environmentally friendly technology. Particle synthesis using SCFs can be achieved employing a number of special processes involving solvent and antisolvent mechanisms. In this study, the compressed antisolvent (PCA) process is utilized as a model to analyze the theoretical complexity of crystallization with supercritical fluids. The population balance approach has proven to be an effective technique to simulate and predict the particle size and size distribution. The nucleation and growth mechanisms of particle formation in the PCA process are investigated using the population balance equation, which describes the evolution of the particles through coalescence and breakup with time. The employed mathematical population balance model contains a set of partial differential equations with algebraic constraints, which demands a rigorous numerical approach. A combined collocation and Galerkin finite element method is proposed as a high-resolution technique to solve the dynamics of the PCA process.
Keywords: particle formation, particle size and size distribution, PCA, supercritical carbon dioxide
Procedia PDF Downloads 197
13 High Order Block Implicit Multi-Step (HOBIM) Methods for the Solution of Stiff Ordinary Differential Equations
Authors: J. P. Chollom, G. M. Kumleng, S. Longwap
Abstract:
The search for higher order A-stable linear multi-step methods has been of interest to many numerical analysts and has been realized through either higher derivatives of the solution or by inserting additional off-step points, super-future points and the like. These methods are suitable for the solution of stiff differential equations, which exhibit characteristics that place a severe restriction on the choice of step size. It becomes necessary that only methods with large regions of absolute stability remain suitable for such equations. In this paper, high order block implicit multi-step methods of the hybrid form up to order twelve are constructed using the multi-step collocation approach by inserting one or more off-step points in the multi-step method. The accuracy and stability properties of the new methods are investigated and are shown to yield A-stable methods, a property desirable of methods suitable for the solution of stiff ODEs. The new high order block implicit multistep methods, used as block integrators, are tested on stiff differential systems, and the results reveal that the new methods are efficient and compete favourably with the state-of-the-art Matlab ode23 code.
Keywords: block linear multistep methods, high order, implicit, stiff differential equations
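The step-size restriction that motivates A-stable methods can be seen with an off-the-shelf experiment: the sketch below integrates the stiff Robertson kinetics problem with an explicit (RK23) and an implicit (BDF) solver from scipy and compares the number of steps; it does not implement the authors' HOBIM integrators.

```python
from scipy.integrate import solve_ivp

# Robertson chemical kinetics, a classic stiff test problem
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1e4 * y2 * y3,
            0.04 * y1 - 1e4 * y2 * y3 - 3e7 * y2 ** 2,
            3e7 * y2 ** 2]

y0, t_span = [1.0, 0.0, 0.0], (0.0, 10.0)

explicit = solve_ivp(robertson, t_span, y0, method="RK23", rtol=1e-6, atol=1e-10)
implicit = solve_ivp(robertson, t_span, y0, method="BDF", rtol=1e-6, atol=1e-10)

# stiffness forces the explicit solver to take far more (stability-limited) steps
print("RK23 steps:", explicit.t.size, " BDF steps:", implicit.t.size)
```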
Procedia PDF Downloads 358
12 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on the strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of introducing adjustable weight coefficients into the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful to study various optical phenomena.
Keywords: deep learning, optical soliton, neural network, partial differential equation
Procedia PDF Downloads 126
11 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of introducing adjustable weight coefficients into the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics would be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
Procedia PDF Downloads 70
10 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems are part of non-classical optimization. These are problems in which the number of variables is finite and the number of constraints is infinite; such problems are called semi-infinite programming problems. Most algorithms for semi-infinite programming problems reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments in the portfolio. For example, we could invest M euros in N shares for a specified period. Let yi > 0 be the return at the end of the period for each unit of money invested in stock i (i = 1, ..., N). The logical goal here is to determine the amounts xi to be invested in stock i, i = 1, ..., N, such that the value yTx at the end of the period is maximized, where x = (x1, ..., xN) and y = (y1, ..., yN). For us, the optimal portfolio means the portfolio that is best in terms of the risk-return trade-off and that meets the investor's goals and risk profile. Therefore, investment goals and risk appetite are the factors that influence the choice of an appropriate portfolio of assets. The investment returns are uncertain; thus we have a semi-infinite programming problem. We solve a semi-infinite optimization problem of portfolio selection using outer approximation methods. This approach can be considered a development of the Eaves-Zangwill method, applying the multi-start technique in all of the iterations in the search for relevant constraint parameters. The stochastic outer approximation method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on optimality criteria for quasi-optimal functions. As a result, we obtain a mathematical model and the optimal investment portfolio when yields are not known from the beginning. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
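A minimal sketch of the outer approximation idea (discretize the infinite constraint set, solve a finite LP, add the most violated constraint, repeat) is given below; the three assets, their scenario-dependent yields, the search grid and the tolerance are invented purely for illustration and are not the authors' model or the Colombian bank data.

```python
import numpy as np
from scipy.optimize import linprog

# yields of 3 assets as continuous functions of an uncertain scenario s in [0, 1]
def yields(s):
    return np.array([1.05 + 0.02 * np.sin(2 * np.pi * s),
                     1.02 + 0.08 * s,
                     1.10 - 0.09 * s])

def solve_lp(scenarios):
    # variables z = (x1, x2, x3, t); maximize worst-case return t  <=>  minimize -t
    c = np.array([0.0, 0.0, 0.0, -1.0])
    A_ub = np.array([np.append(-yields(s), 1.0) for s in scenarios])  # t <= y(s)^T x
    b_ub = np.zeros(len(scenarios))
    A_eq, b_eq = np.array([[1.0, 1.0, 1.0, 0.0]]), np.array([1.0])    # budget constraint
    bounds = [(0, None)] * 3 + [(None, None)]
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
    return res.x[:3], res.x[3]

scenarios = [0.0, 1.0]                  # initial finite subset of the infinite constraint set
grid = np.linspace(0.0, 1.0, 201)       # multi-start style search grid for violated constraints
for _ in range(20):
    x, t = solve_lp(scenarios)
    worst = grid[np.argmin([yields(s) @ x for s in grid])]
    if yields(worst) @ x >= t - 1e-9:   # no constraint violated -> stop
        break
    scenarios.append(worst)

print("portfolio weights:", np.round(x, 4), " worst-case return:", round(float(t), 4))
```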
Procedia PDF Downloads 309
9 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from accelerations, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow the relative motion between the top and bottom platforms. Formerly, a mathematical model for the dynamics of RISs was developed using Lagrange's equations (LE) and experimentally validated. A new mathematical model was developed using Gauss's Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss's Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods in order to avoid the possibility that numerical integration methods cause the system to become unstable. The GPLC model allows the incorporation of more physical aspects related to the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical in the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
Procedia PDF Downloads 250
8 Combining Corpus Linguistics and Critical Discourse Analysis to Study Power Relations in Hindi Newspapers
Authors: Vandana Mishra, Niladri Sekhar Dash, Jayshree Charkraborty
Abstract:
This present paper focuses on the application of corpus linguistics techniques for critical discourse analysis (CDA) of Hindi newspapers. While Corpus linguistics is the study of language as expressed in corpora (samples) of 'real world' text, CDA is an interdisciplinary approach to the study of discourse that views language as a form of social practice. CDA has mainly been studied from a qualitative perspective. However, we can say that recent studies have begun combining corpus linguistics with CDA in analyzing large volumes of text for the study of existing power relations in society. The corpus under our study is also of a sizable amount (1 million words of Hindi newspaper texts) and its analysis requires an alternative analytical procedure. So, we have combined both the quantitative approach i.e. the use of corpus techniques with CDA’s traditional qualitative analysis. In this context, we have focused on the Keyword Analysis Sorting Concordance Lines of the selected Keywords and calculating collocates of the keywords. We have made use of the Wordsmith Tool for all these analysis. The analysis starts with identifying the keywords in the political news corpus when compared with the main news corpus. The keywords are extracted from the corpus based on their keyness calculated through statistical tests like chi-squared test and log-likelihood test on the frequent words of the corpus. Some of the top occurring keywords are मोदी (Modi), भाजपा (BJP), कांग्रेस (Congress), सरकार (Government) and पार्टी (Political party). This is followed by the concordance analysis of these keywords which generates thousands of lines but we have to select few lines and examine them based on our objective. We have also calculated the collocates of the keywords based on their Mutual Information (MI) score. Both concordance and collocation help to identify lexical patterns in the political texts. Finally, all these quantitative results derived from the corpus techniques will be subjectively interpreted in accordance to the CDA’s theory to examine the ways in which political news discourse produces social and political inequality, power abuse or domination.Keywords: critical discourse analysis, corpus linguistics, Hindi newspapers, power relations
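For reference, the log-likelihood keyness statistic mentioned above can be computed as in the following sketch; the word counts used are hypothetical, not figures from the Hindi newspaper corpus.

```python
import math

def log_likelihood(a, b, c, d):
    """Keyness of a word: a, b = its frequencies in the study and reference corpora;
    c, d = total token counts of the two corpora (Rayson & Garside style LL)."""
    e1 = c * (a + b) / (c + d)   # expected frequency in the study corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in the reference corpus
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2 * ll

# hypothetical counts: a word (e.g. 'sarkar', government) occurring 950 times in a
# 1,000,000-word political subcorpus and 420 times in a 2,000,000-word reference corpus
ll = log_likelihood(950, 420, 1_000_000, 2_000_000)
print(round(ll, 2), "significant at p < 0.01" if ll > 6.63 else "not significant")
```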
Procedia PDF Downloads 224
7 Characterization and Modelling of Aerosol Droplet in Absorption Columns
Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard F. Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption based capture process. Small mist droplets and fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence of this aerosol based emissions in the order of grams per Nm3 have been identified from PCCC plants. In absorption processes aerosols are generated by spontaneous condensation or desublimation processes in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than what would be encountered in a mist free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand mechanics of a particle entering an absorber an implementation of the model is created in Matlab. The model predicts the droplet size, the droplet internal variable profiles and the mass transfer fluxes as function of position in the absorber. The Matlab model is based on a subclass method of weighted residuals for boundary value problems named, orthogonal collocation method. The model comprises a set of mass transfer equations for transferring components and the essential diffusion reaction equations to describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples as to how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Below are given some preliminary simulation results for an aerosol droplet composition and temperature profiles.Keywords: absorption columns, aerosol formation, amine emissions, internal droplet profiles, monoethanolamine (MEA), post combustion CO2 capture, simulation
Procedia PDF Downloads 246
6 The Direct Deconvolutional Model in the Large-Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
The utilization of Large Eddy Simulation (LES) has been extensive in turbulence research. LES concentrates on resolving the significant grid-scale motions while representing smaller scales through subfilter-scale (SFS) models. The deconvolution model, among the available SFS models, has proven successful in LES of engineering and geophysical flows. Nevertheless, the thorough investigation of how sub-filter scale dynamics and filter anisotropy affect SFS modeling accuracy remains lacking. The outcomes of LES are significantly influenced by filter selection and grid anisotropy, factors that have not been adequately addressed in earlier studies. This study examines two crucial aspects of LES: Firstly, the accuracy of direct deconvolution models (DDM) is evaluated concerning sub-filter scale (SFS) dynamics across varying filter-to-grid ratios (FGR) in isotropic turbulence. Various invertible filters are employed, including Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The importance of FGR becomes evident as it plays a critical role in controlling errors for precise SFS stress prediction. When FGR is set to 1, the DDM models struggle to faithfully reconstruct SFS stress due to inadequate resolution of SFS dynamics. Notably, prediction accuracy improves when FGR is set to 2, leading to accurate reconstruction of SFS stress, except for cases involving Helmholtz I and II filters. Remarkably high precision, nearly 100%, is achieved at an FGR of 4 for all DDM models. Furthermore, the study extends to filter anisotropy and its impact on SFS dynamics and LES accuracy. By utilizing the dynamic Smagorinsky model (DSM), dynamic mixed model (DMM), and direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 are examined in LES filters. The results emphasize the DDM’s proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. Notably high correlation coefficients exceeding 90% are observed in the a priori study for the DDM’s reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori analysis, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, including velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strainrate tensors, and SFS stress. It is evident that as filter anisotropy intensifies, the results of DSM and DMM deteriorate, while the DDM consistently delivers satisfactory outcomes across all filter-anisotropy scenarios. These findings underscore the potential of the DDM framework as a valuable tool for advancing the development of sophisticated SFS models for LES in turbulence research.Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
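As a small illustration of explicit filtering and the filter-to-grid ratio discussed above (not the DDM itself), the sketch below applies a spectral Gaussian filter with the standard LES transfer function exp(-k^2 Delta^2 / 24) to a synthetic 1D periodic field at FGR = 1, 2, and 4; the field and resolution are arbitrary.

```python
import numpy as np

N = 256
L = 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
rng = np.random.default_rng(0)

# synthetic multi-scale "velocity" field
u = sum(np.cos(m * x + rng.uniform(0, 2 * np.pi)) / m for m in range(1, 64))

k = np.fft.fftfreq(N, d=dx) * 2 * np.pi     # angular wavenumbers
u_hat = np.fft.fft(u)

for fgr in (1, 2, 4):
    delta = fgr * dx                                  # filter width = FGR * grid spacing
    G_hat = np.exp(-(k ** 2) * delta ** 2 / 24.0)     # Gaussian filter transfer function
    u_filt = np.real(np.fft.ifft(G_hat * u_hat))
    sfs_rms = np.sqrt(np.mean((u - u_filt) ** 2))     # energy left to the SFS model
    print(f"FGR = {fgr}: rms of subfilter-scale part = {sfs_rms:.4f}")
```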
Procedia PDF Downloads 76
5 Characterization of Aerosol Droplet in Absorption Columns to Avoid Amine Emissions
Authors: Hammad Majeed, Hanna Knuutila, Magne Hillestad, Hallvard Svendsen
Abstract:
Formation of aerosols can cause serious complications in industrial exhaust gas CO2 capture processes. SO3 present in the flue gas can cause aerosol formation in an absorption based capture process. Small mist droplets and the fog formed can normally not be removed in conventional demisting equipment because their submicron size allows the particles or droplets to follow the gas flow. As a consequence of this, aerosol based emissions in the order of grams per Nm3 have been identified from PCCC plants. In absorption processes aerosols are generated by spontaneous condensation or desublimation processes in supersaturated gas phases. Undesired aerosol development may lead to amine emissions many times larger than what would be encountered in a mist free gas phase in PCCC development. It is thus of crucial importance to understand the formation and build-up of these aerosols in order to mitigate the problem. Rigorous modelling of aerosol dynamics leads to a system of partial differential equations. In order to understand the mechanics of a particle entering an absorber, an implementation of the model is created in Matlab. The model predicts the droplet size, the droplet internal variable profiles and the mass transfer fluxes as a function of position in the absorber. The Matlab model is based on a subclass of the method of weighted residuals for boundary value problems, the orthogonal collocation method. The model comprises a set of mass transfer equations for the transferring components and the essential diffusion reaction equations to describe the droplet internal profiles for all relevant constituents. Also included is heat transfer across the interface and inside the droplet. This paper presents results describing the basic simulation tool for the characterization of aerosols formed in CO2 absorption columns and gives examples as to how various entering droplets grow or shrink through an absorber and how their composition changes with respect to time. Below, some preliminary simulation results are given for an aerosol droplet composition and temperature profiles. Results: As an example, a droplet of initial size 3 microns, initially containing a 5 M MEA solution, is exposed to an atmosphere free of MEA. The composition of the gas phase and the temperature change with respect to time throughout the absorber.
Keywords: amine solvents, emissions, global climate change, simulation and modelling, aerosol generation
Procedia PDF Downloads 265
4 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence
Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang
Abstract:
Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves small scales modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of sub-filter scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to the insufficient resolution of SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models. However, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM become worse, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. The findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for LES of turbulence.
Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence
Procedia PDF Downloads 75
3 Historical Development of Negative Emotive Intensifiers in Hungarian
Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges
Abstract:
In this study, an exhaustive analysis was carried out of the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Therefore, intensifiers appear with other lexical items, such as adverbs, adjectives, verbs, and infrequently with nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. The group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, particularly interesting is a special group of intensifiers, the so-called negative emotive intensifiers, that, on their own, without context, have semantic content that can be associated with negative emotion, but in particular cases, they may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature based on large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning the intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones, and they slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage in this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process, the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time
Procedia PDF Downloads 233
2 Applying Image Schemas and Cognitive Metaphors to Teaching/Learning Italian Preposition a in Foreign/Second Language Context
Authors: Andrea Fiorista
Abstract:
The learning of prepositions is a quite problematic aspect in foreign language instruction, and Italian is certainly not an exception. In their prototypical function, prepositions express schematic relations of two entities in a highly abstract, typically image-schematic way. In other terms, prepositions assume concepts such as directionality, collocation of objects in space and time and, in Cognitive Linguistics’ terms, the position of a trajector with respect to a landmark. Learners of different native languages may conceptualize them differently, implying that they are supposed to operate a recategorization (or create new categories) fitting with the target language. However, most current Italian Foreign/Second Language handbooks and didactic grammars do not facilitate learners in carrying out the task, as they tend to provide partial and idiosyncratic descriptions, with the consequent learner’s effort to memorize them, most of the time without success. In their prototypical meaning, prepositions are used to specify precise topographical positions in the physical environment which become less and less accurate as they radiate out from what might be termed a concrete prototype. According to that, the present study aims to elaborate a cognitive and conceptually well-grounded analysis of some extensive uses of the Italian preposition a, in order to propose effective pedagogical solutions in the Teaching/Learning process. Image schemas, cognitive metaphors and embodiment represent efficient cognitive tools in a task like this. Actually, while learning the merely spatial use of the preposition a (e.g. Sono a Roma = I am in Rome; vado a Roma = I am going to Rome,…) is quite straightforward, it is more complex when a appears in constructions such as verbs of motion +a + infinitive (e.g. Vado a studiare = I am going to study), inchoative periphrasis (e.g. Tra poco mi metto a leggere = In a moment I will read), causative construction (e.g. Lui mi ha mandato a lavorare = He sent me to work). The study reports data from a teaching intervention of Focus on Form, in which a basic cognitive schema is used to facilitate both teachers and students to respectively explain/understand the extensive uses of a. The educational material employed translates Cognitive Linguistics’ theoretical assumptions, such as image schemas and cognitive metaphors, into simple images or proto-scenes easily comprehensible for learners. Illustrative material, indeed, is supposed to make metalinguistic contents more accessible. Moreover, the concept of embodiment is pedagogically applied through activities including motion and learners’ bodily involvement. It is expected that replacing rote learning with a methodology that gives grammatical elements a proper meaning, makes learning process more effective both in the short and long term.Keywords: cognitive approaches to language teaching, image schemas, embodiment, Italian as FL/SL
Procedia PDF Downloads 87
1 Identification of Text Domains and Register Variation through the Analysis of Lexical Distribution in a Bangla Mass Media Text Corpus
Authors: Mahul Bhattacharyya, Niladri Sekhar Dash
Abstract:
The present research paper is an experimental attempt to investigate the nature of variation in the register in three major text domains, namely, social, cultural, and political texts collected from the corpus of Bangla printed mass media texts. This present study uses a corpus of a moderate amount of Bangla mass media text that contains nearly one million words collected from different media sources like newspapers, magazines, advertisements, periodicals, etc. The analysis of corpus data reveals that each text has certain lexical properties that not only control their identity but also mark their uniqueness across the domains. At first, the subject domains of the texts are classified into two parameters namely, ‘Genre' and 'Text Type'. Next, some empirical investigations are made to understand how the domains vary from each other in terms of lexical properties like both function and content words. Here the method of comparative-cum-contrastive matching of lexical load across domains is invoked through word frequency count to track how domain-specific words and terms may be marked as decisive indicators in the act of specifying the textual contexts and subject domains. The study shows that the common lexical stock that percolates across all text domains are quite dicey in nature as their lexicological identity does not have any bearing in the act of specifying subject domains. Therefore, it becomes necessary for language users to anchor upon certain domain-specific lexical items to recognize a text that belongs to a specific text domain. The eventual findings of this study confirm that texts belonging to different subject domains in Bangla news text corpus clearly differ on the parameters of lexical load, lexical choice, lexical clustering, lexical collocation. In fact, based on these parameters, along with some statistical calculations, it is possible to classify mass media texts into different types to mark their relation with regard to the domains they should actually belong. The advantage of this analysis lies in the proper identification of the linguistic factors which will give language users a better insight into the method they employ in text comprehension, as well as construct a systemic frame for designing text identification strategy for language learners. The availability of huge amount of Bangla media text data is useful for achieving accurate conclusions with a certain amount of reliability and authenticity. This kind of corpus-based analysis is quite relevant for a resource-poor language like Bangla, as no attempt has ever been made to understand how the structure and texture of Bangla mass media texts vary due to certain linguistic and extra-linguistic constraints that are actively operational to specific text domains. Since mass media language is assumed to be the most 'recent representation' of the actual use of the language, this study is expected to show how the Bangla news texts reflect the thoughts of the society and how they leave a strong impact on the thought process of the speech community.Keywords: Bangla, corpus, discourse, domains, lexical choice, mass media, register, variation
Procedia PDF Downloads 174