Search results for: numerical code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4630

490 The Role of Metaphor in Communication

Authors: Fleura Shkëmbi, Valbona Treska

Abstract:

In elementary school, we learn that a metaphor is a decorative linguistic device reserved for poets. It is now recognized, however, as a crucial tactic that individuals employ to understand the universe, from fundamental ideas like time and causation to the most pressing societal challenges of today. Metaphor is the use of language to refer to something other than what it was originally intended for, or what it "literally" means, in order to suggest a similarity or establish a connection between the two. According to a study on metaphor and its effect on decision-making, people do not identify metaphors as relevant to their decisions; instead, they cite more "substantive" (typically numerical) facts as the basis for their problem-solving choices. Every day, metaphors saturate our lives via language, cognition, and action. Proponents of this view argue that our concepts shape our views and interactions with others and that concepts define our reality. Metaphor is thus a highly helpful tool both for describing our experiences to others and for forming notions for ourselves. In therapeutic contexts, metaphors appear to serve a twofold purpose. The cognitivist approach to metaphor regards it as one of the fundamental foundations of human communication. The benefits and disadvantages of utilizing a metaphor differ depending on the target domain that the metaphor portrays. The challenge of creating messages and environments that affect customers' notions of abstract ideas in a variety of industries, including health, hospitality, romance, and money, has been studied for decades in marketing and consumer psychology. The aim of this study is to examine, through a systematic literature review, the role of metaphor in communication and in advertising. This study offers a selective analysis of this literature, concentrating on research on customer attitudes and product appraisal. The analysis of the data identifies potential research questions.
With theoretical and applied implications for marketing, design, and persuasion, this study sheds light on how, when, and for whom metaphoric communications are powerful.

Keywords: metaphor, communication, advertising, cognition, action

Procedia PDF Downloads 78
489 Computational Fluid Dynamics Simulation of a Boiler Outlet Header Constructed of Inconel Alloy 740H

Authors: Sherman Ho, Ahmed Cherif Megri

Abstract:

Headers play a critical role in conveying steam to regulate heating system temperatures. While materials such as steel grades 91 and 92 have traditionally been used for pipes, this research proposes the use of a robust and innovative material, INCONEL Alloy 740H. Boilers in power plant configurations are exposed to cycling conditions caused by daily, seasonal, and yearly variations in weather. These cycling conditions can lead to the deterioration of headers, which are vital components with intricate geometries. Header failures result in substantial financial losses from repair costs and power plant shutdowns, along with significant public inconvenience such as the loss of heating and hot water. To address this issue, both a mechanical and a structural analysis are recommended. Transient analysis to predict heat transfer conditions is of paramount importance, as the direction of heat transfer within the header walls and the passing steam can vary with the location of interest, the load, and the operating conditions. The geometry and material of the header are also crucial design factors, and the choice of pipe material depends on its usage. In this context, the heat transfer coefficient plays a vital role in header design and analysis. This research employs ANSYS Fluent, a numerical simulation program, to understand header behavior, predict heat transfer, and analyze mechanical phenomena within the header. Transient simulations are conducted to investigate parameters such as the heat transfer coefficient, pressure loss coefficients, and heat flux, and the results are used to optimize the header design.
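The steam-side heat transfer coefficient highlighted in the abstract can also be estimated, outside the CFD model, with a standard correlation. The sketch below uses the Dittus-Boelter correlation for turbulent pipe flow; the correlation choice and every property value are illustrative assumptions of this sketch, not data from the study.

```python
import math

def dittus_boelter_h(m_dot, d, mu, cp, k, heating=True):
    """Convective heat transfer coefficient [W/(m^2 K)] for fully developed
    turbulent pipe flow, via Nu = 0.023 * Re^0.8 * Pr^n (n = 0.4 for heating)."""
    re = 4.0 * m_dot / (math.pi * d * mu)  # Reynolds number from mass flow
    pr = cp * mu / k                       # Prandtl number
    n = 0.4 if heating else 0.3
    nu = 0.023 * re**0.8 * pr**n           # Nusselt number
    return nu * k / d

# Illustrative (assumed) steam properties, not taken from the paper:
# m_dot in kg/s, d in m, mu in Pa.s, cp in J/(kg K), k in W/(m K)
h = dittus_boelter_h(m_dot=2.0, d=0.3, mu=2.0e-5, cp=2500.0, k=0.08)
```

Such a hand estimate is useful mainly as a sanity check on the coefficients extracted from the transient CFD runs.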

Keywords: CFD, header, power plant, heat transfer coefficient, simulation using experimental data

Procedia PDF Downloads 50
488 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset's revenue generation. Across the global real estate market, tenants increasingly demand that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the natural hazards likely within the asset's expected lifespan, be that earthquakes, storms, bushfires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 57
487 One-Dimensional Numerical Simulation of the Nonlinear Instability Behavior of an Electrified Viscoelastic Liquid Jet

Authors: Fang Li, Xie-Yuan Yin, Xie-Zhen Yin

Abstract:

Instability and breakup of electrified viscoelastic liquid jets are involved in various applications such as inkjet printing, fuel atomization, the pharmaceutical industry, electrospraying, and electrospinning. Studying the instability of electrified viscoelastic liquid jets is therefore of both theoretical and practical significance. We built a one-dimensional electrified viscoelastic model to study the nonlinear instability behavior of a perfectly conducting, slightly viscoelastic liquid jet under a radial electric field. The model is solved numerically using an implicit finite difference scheme together with a boundary element method. It is found that under a radial electric field, a viscoelastic liquid jet still evolves into a beads-on-string structure with a thin filament connecting two adjacent droplets, as in the absence of an electric field. A radial electric field exhibits limited influence on the decay of the filament thickness in the nonlinear evolution of a viscoelastic jet, in contrast to its strong enhancing effect on the linear instability of the jet. On the other hand, a radial electric field can induce axial non-uniformity of the first normal stress difference within the filament. In particular, the magnitude of the first normal stress difference near the midpoint of the filament can be greatly decreased by a radial electric field. Decreasing the extensional stress by a radial electric field may find applications in spraying, spinning, liquid bridges, and elsewhere. In addition, the effect of a radial electric field on the formation of satellite droplets is investigated on the parametric plane of the dimensionless wave number and the electrical Bond number. It is found that satellite droplets may form at larger axial wave numbers for larger radial electric fields. The present study helps us gain insight into the nonlinear instability characteristics of electrified viscoelastic liquid jets.

Keywords: nonlinear instability, one-dimensional models, radial electric fields, viscoelastic liquid jets

Procedia PDF Downloads 374
486 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made toward improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ these models and decide which are the most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function that links the expected response and the predictor, the response option format, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover models from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model.
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: (1) simulated data from N = 500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to the emotional consequences of alcohol use. The real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated and real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
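The progression from the 1-PL to the 4-PL model described above can be summarized in a single response function. The sketch below (in Python rather than the SAS used in the study) implements the 4-PL item characteristic curve; fixing parameters recovers the simpler models. Parameter names follow the usual IRT convention (a: discrimination, b: difficulty, c: lower asymptote/guessing, d: upper asymptote).

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-PL probability of a correct/endorsed response at ability theta.
    Setting d=1 gives the 3-PL; c=0 and d=1 give the 2-PL; additionally
    a=1 gives the 1-PL (Rasch) model."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# A subject whose ability equals the item difficulty responds with a
# probability midway between the asymptotes (0.5 under the defaults):
p_mid = irt_prob(theta=0.0, b=0.0)
```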

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 333
485 Environmental Accounting Practice: Analyzing the Extent and Qualification of Environmental Disclosures of Turkish Companies Located in BIST-XKURY Index

Authors: Raif Parlakkaya, Mustafa Nihat Demirci, Mehmet Nuri Salur

Abstract:

Environmental pollution has detrimental effects on the quality of our lives, and its scope has reached such an extent that measures are being taken at both the national and international levels to reduce, prevent, and mitigate its impact on the social, economic, and political spheres. Awareness of environmental problems has therefore been increasing among stakeholders and, accordingly, among companies, and corporate reporting is expanding beyond financial performance to cover environmental performance. The primary purpose of publishing an environmental report is to provide specific audiences with useful, meaningful information. This paper analyzes the extent and qualification of the environmental disclosures of Turkish publicly quoted firms and examines how they vary from one sector to another. The data for the study were collected from the annual activity reports of companies listed on the corporate governance index (BIST-XKURY) of the Istanbul Stock Exchange. Content analysis was the research methodology used to measure the extent of environmental disclosure. Accordingly, the 2015 annual activity reports of companies operating in particular fields were acquired from the Capital Markets Board, the websites of the Public Disclosure Platform, and the companies' own websites. The disclosures were categorized into five main aspects: environmental policies, environmental management systems, environmental protection and conservation activities, environmental awareness, and information on environmental lawsuits. Subsequently, each component was divided into several variables related to what each firm is supposed to disclose about environmental information. In this context, the nature and scope of the information disclosed on each item were assessed on a five-level scale (N.I.: No Information; G.E.: General Explanations; Q.E.: Qualitative Detailed Explanations; N.E.: Quantitative (Numerical) Detailed Explanations; Q.&N.E.: Both Qualitative and Quantitative Explanations).
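The five-level coding scheme described above lends itself to a simple scoring routine. The sketch below is a hypothetical illustration of how coded disclosures could be aggregated per firm; the ordinal 0-4 weighting is an assumption of this sketch, not the scheme used by the authors.

```python
# Hypothetical ordinal scores for the five disclosure levels (an assumption
# of this sketch, not the authors' weighting).
DISCLOSURE_LEVELS = {
    "N.I.": 0,     # No Information
    "G.E.": 1,     # General Explanations
    "Q.E.": 2,     # Qualitative Detailed Explanations
    "N.E.": 3,     # Quantitative (Numerical) Detailed Explanations
    "Q.&N.E.": 4,  # Both Qualitative and Quantitative Explanations
}

def disclosure_score(codes):
    """Aggregate disclosure score for one firm across its coded items."""
    return sum(DISCLOSURE_LEVELS[c] for c in codes)
```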

Keywords: environmental accounting, disclosure, corporate governance, content analysis

Procedia PDF Downloads 241
484 Y-Y’ Calculus in Physical Sciences and Engineering with Particular Reference to Fundamentals of Soil Consolidation

Authors: Sudhir Kumar Tewatia, Kanishck Tewatia, Anttriksh Tewatia

Abstract:

Advancements in soil consolidation are discussed, and further improvements are proposed with particular reference to Tewatia's Y-Y' Approach, called the Settlement versus Rate of Settlement Approach in consolidation. A branch of calculus named Y-Y' (that is, y versus dy/dx) is suggested, as compared to the common X-Y' (x versus dy/dx) or Newton-Leibniz branch, that solves some complicated or unsolved theoretical and practical problems in the physical sciences (physics, chemistry, mathematics, biology, and allied sciences) and engineering in a remarkably simple and short manner, particularly when the independent variable X is unknown and the X-Y' Approach cannot be used. Complicated theoretical and practical problems in 1D, 2D, and 3D primary and secondary consolidation with non-uniform gradual loading and irregularly shaped clays are solved with the elementary-level Y-Y' Approach. It is interesting to note that in the X-Y' Approach the equations become more difficult as we move from one to three dimensions, whereas in the Y-Y' Approach even the 2D and 3D equations are very simple to derive, solve, and use, and are sometimes easier. This branch of calculus may have a far-reaching impact on understanding and solving problems in different fields of the physical sciences and engineering that were hitherto unsolved or difficult to solve by normal calculus or by numerical and computer methods. Some particular cases from soil consolidation, which are basically creep and diffusion equations in isolation and in combination with each other, are taken for comparison with heat transfer. The Y-Y' Approach can similarly be applied to wave equations and other fields wherever normal calculus works or fails. Soil mechanics uses mathematical analogies from other fields of physical sciences and engineering to solve theoretical and practical problems; for example, consolidation theory is a replica of the heat equation from thermodynamics with the addition of the effective stress principle. An attempt is made to establish such mathematical analogies.

Keywords: calculus, clay, consolidation, creep, diffusion, heat, settlement

Procedia PDF Downloads 73
483 Bridge Damage Detection and Stiffness Reduction Using Vibration Data: Experimental Investigation on a Small Scale Steel Bridge

Authors: Mirco Tarozzi, Giacomo Pignagnoli, Andrea Benedetti

Abstract:

Planning the maintenance of civil structures often requires evaluating their level of safety in order to choose which structure, and to what extent, needs a structural retrofit. This work deals with the evaluation of the stiffness reduction of a scaled steel deck due to the presence of localized damage. The dynamic tests performed on it show the variability of its main frequencies linked to the gradual reduction of its rigidity. The deck consists of a steel grillage of four secondary beams and three main beams connected to a concrete slab. It is 6 m long and 3 m wide, and it rests on two concrete abutments. By processing the acceleration signals due to a random excitation of the deck, its main natural frequencies have been extracted. In order to assign more reliable parameters to the numerical model of the deck, load tests were performed, and the mechanical properties of the materials and the supports were obtained. The two external beams were cut at one third of their length, and the structural strength was restored with a purpose-designed bolted plate. The gradual removal of the bolts and of the plates made it possible to simulate localized damage. In order to define the relationship between frequency variation and loss of stiffness, the natural frequencies were identified before and after the occurrence of the damage at each step. The study of the relationship between stiffness losses and frequency shifts is reported in this paper: the square of the frequency variation due to the presence of the damage is proportional to the ratio between the rigidities. This relationship can be used to quantify the loss of stiffness of a real-scale bridge in an efficient way.
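The closing relationship, that the squared frequency ratio tracks the stiffness ratio, follows from f ∝ √(k/m) when the modal mass is unchanged by the damage. A minimal sketch under that single-mode assumption:

```python
def stiffness_ratio(f_damaged, f_undamaged):
    """For a mode with f proportional to sqrt(k/m) and unchanged mass,
    k_damaged / k_undamaged = (f_damaged / f_undamaged)**2."""
    return (f_damaged / f_undamaged) ** 2

# A 10% drop in a natural frequency implies roughly a 19% loss in stiffness:
loss = 1.0 - stiffness_ratio(9.0, 10.0)
```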

Keywords: damage detection, dynamic test, frequency shifts, operational modal analysis, steel bridge

Procedia PDF Downloads 147
482 Numerical Response of a Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants face potential internal exposure to actinides and fission products. Radionuclides such as americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. As the skull (13%) and knee (22%) account for major shares of the skeletal content, measurements of old intakes have to be carried out on the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms, which are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead; for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 to 3000 keV. The established efficiency curves show that the efficiency initially increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the efficiency for the knee exceeds that for the skull because of the higher attenuation in the skull bones due to their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s.
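MDA figures of this kind are commonly computed with Currie's formula; the sketch below assumes that formulation, and the example inputs are illustrative values chosen for this sketch, not the study's background data.

```python
import math

def currie_mda(bkg_counts, eff, t, p_gamma):
    """Currie's Minimum Detectable Activity [Bq]:
    MDA = (2.71 + 4.65*sqrt(B)) / (eff * t * p_gamma), with B the
    background counts in the peak region, eff the counting efficiency
    [CPS/photon], t the counting time [s], and p_gamma the photon
    emission probability of the measured line."""
    return (2.71 + 4.65 * math.sqrt(bkg_counts)) / (eff * t * p_gamma)

# Illustrative (assumed) inputs, not taken from the paper:
mda = currie_mda(bkg_counts=100.0, eff=5.0e-4, t=1800.0, p_gamma=0.36)
```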
This paper discusses the simulation method and the results obtained in the study.

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 31
481 Improving Fingerprinting-Based Localization System Using Generative AI

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight propagation, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduces the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improves positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
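For context, the online matching step that a fingerprint database (whether surveyed or GAN-generated) feeds into is often a weighted k-nearest-neighbor search. The sketch below is a generic illustration of that step, not the paper's S-DCGAN pipeline; all names and values are assumptions of this sketch.

```python
def knn_localize(rss_query, fingerprints, k=3):
    """Weighted k-NN position estimate from an RSS fingerprint database.
    fingerprints: list of (rss_vector, (x, y)) reference points."""
    scored = sorted(
        (sum((a - b) ** 2 for a, b in zip(rss_query, rss)) ** 0.5, pos)
        for rss, pos in fingerprints
    )[:k]
    # Inverse-distance weighting of the k closest reference points
    weights = [1.0 / (d + 1e-9) for d, _ in scored]
    total = sum(weights)
    x = sum(w * pos[0] for w, (_, pos) in zip(weights, scored)) / total
    y = sum(w * pos[1] for w, (_, pos) in zip(weights, scored)) / total
    return x, y
```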

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 34
480 Finite Element and Experimental Investigation on Vibration Analysis of Laminated Composite Plates

Authors: Azad Mohammed Ali Saber, Lanja Saeed Omer

Abstract:

The present study deals with finite element (FE) and experimental investigations of the vibration behavior of carbon fiber-polyester laminated plates. The finite element simulation is performed with APDL (ANSYS Parametric Design Language) macro code in ANSYS version 19. SOLID185, an eight-node layered structural element, is adopted in this analysis. The experimental work uses the hand lay-up method to fabricate composite laminate plates with different numbers of layers and orientation angles. Symmetric samples include four layers (0°/90°)s and six layers (0°/90°/0°)s and (0°/0°/90°)s. Antisymmetric samples include one layer (0°), (45°); two layers (0°/90°), (-45°/45°); three layers (0°/90°/0°); four layers (0°/90°)2, (-45°/45°)2; five layers (0°/90°)2.5; and six layers (0°/90°)3, (-45°/45°)3. The experimental investigation is carried out using a modal analysis technique with a Fast Fourier Transform (FFT) analyzer, the Pulse platform, an impact hammer, and an accelerometer to obtain the frequency response functions. The influence of parameters such as the number of layers, aspect ratio, modulus ratio, ply orientation, and boundary conditions on the dynamic behavior of the CFRP plates is studied; the 1st, 2nd, and 3rd natural frequencies are observed to be lowest for the cantilever boundary condition (CFFF) and highest for the fully clamped boundary condition (CCCC). The experimental results show that the natural frequencies of laminated plates depend significantly on the type of boundary conditions due to the restraint effect at the edges. Good agreement is achieved between the finite element and experimental results. All results indicate that any increase in aspect ratio decreases the natural frequency of the CFRP plate, while any increase in the modulus ratio or the number of layers increases the fundamental natural frequency of vibration.
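The reported trend that natural frequency falls with aspect ratio can be illustrated with the classical thin-plate formula for a simply supported isotropic plate, a much simpler analogue of the laminated FE model used in the study (the formula and inputs here are a textbook sketch, not the paper's model):

```python
import math

def plate_frequency(m, n, a, b, D, rho_h):
    """Natural frequency of mode (m, n) of a simply supported isotropic
    thin plate: f = (pi/2) * sqrt(D / rho_h) * ((m/a)**2 + (n/b)**2),
    with D the flexural rigidity and rho_h the mass per unit area."""
    return (math.pi / 2.0) * math.sqrt(D / rho_h) * ((m / a) ** 2 + (n / b) ** 2)
```

Doubling the plate length a (raising the aspect ratio a/b) lowers the fundamental frequency, mirroring the trend the abstract reports.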

Keywords: vibration, composite materials, finite element, APDL ANSYS

Procedia PDF Downloads 24
479 Estimation of Mobility Parameters and Threshold Voltage of an Organic Thin Film Transistor Using an Asymmetric Capacitive Test Structure

Authors: Rajesh Agarwal

Abstract:

Carrier mobility at the organic/insulator interface is essential to the performance of organic thin film transistors (OTFTs). The present work describes the estimation of the field-dependent mobility (FDM) parameters and the threshold voltage of an OTFT using a simple, easy-to-fabricate, two-terminal asymmetric capacitive test structure and admittance measurements. Conventionally, transfer characteristics are used to estimate the threshold voltage of an OTFT under the assumption of field-independent mobility (FIDM). However, this technique fails to give accurate results for devices with high contact resistance and field-dependent mobility. In this work, a new technique is presented for the characterization of a long channel organic capacitor (LCOC). The proposed technique enables accurate estimation of the mobility enhancement factor (γ), the threshold voltage (V_th), and the band mobility (µ₀) using capacitance-voltage (C-V) measurements on the OTFT. It also removes the need to fabricate short-channel OTFTs or metal-insulator-metal (MIM) structures for C-V measurements. To understand the behavior of the devices and to ease the analysis, a transmission line compact model is developed. A 2-D numerical simulation was carried out to illustrate the correctness of the model. The results show that the proposed technique estimates the device parameters accurately even in the presence of contact resistance and field-dependent mobility. Pentacene/poly(4-vinyl phenol)-based top-contact bottom-gate OTFTs were fabricated to illustrate the operation and advantages of the proposed technique. A small signal with frequency varying from 1 kHz to 5 kHz and a gate potential ranging from +40 V to -40 V were applied to the devices for the measurements.
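The field-dependent mobility model whose parameters (µ₀, V_th, γ) the technique extracts is commonly written as a power law in the gate overdrive. A minimal sketch under that assumed form (the specific functional form is a common convention, not necessarily the exact model used in the paper):

```python
def fdm_mobility(vg, mu0, vth, gamma):
    """Field-dependent mobility mu = mu0 * (Vg - Vth)**gamma above
    threshold, 0 below; gamma = 0 recovers the field-independent
    (FIDM) case with mu = mu0."""
    overdrive = vg - vth
    return mu0 * overdrive**gamma if overdrive > 0 else 0.0
```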

Keywords: capacitance, mobility, organic, thin film transistor

Procedia PDF Downloads 150
478 A One-Dimensional Modeling Analysis of the Influence of Swirl and Tumble Coefficient in a Single-Cylinder Research Engine

Authors: Mateus Silva Mendonça, Wender Pereira de Oliveira, Gabriel Heleno de Paula Araújo, Hiago Tenório Teixeira Santana Rocha, Augusto César Teixeira Malaquias, José Guilherme Coelho Baeta

Abstract:

Stricter legislation and greater public demand regarding gas emissions and their effects on the environment and human health have led the automotive industry to reinforce research focused on reducing levels of contamination. This reduction can be achieved through improvements to internal combustion engines that reduce both specific fuel consumption and air pollutant emissions. These improvements can be obtained through numerical simulation, a technique that works together with experimental tests. The aim of this paper is to build, with the support of the GT-Suite software, a one-dimensional model of a single-cylinder research engine to analyze the impact of the variation of the swirl and tumble coefficients on engine performance and air pollutant emissions. Initially, the discharge coefficient is calculated with the Converge CFD 3D software, given that it is an input parameter in GT-Power. Mesh sensitivity tests are performed on the 3D geometry built for this purpose, using the mass flow rate at the valve as a reference. In the one-dimensional simulation, the non-predictive combustion model called Three Pressure Analysis (TPA) is adopted, and then data such as the mass trapped in the cylinder, the heat release rate, and the accumulated released energy are calculated, so that validation can be performed by comparing these data with those obtained experimentally. Finally, the swirl and tumble coefficients are introduced into their corresponding objects so that their influence can be observed in comparison with the results obtained previously.

Keywords: 1D simulation, single-cylinder research engine, swirl coefficient, three pressure analysis, tumble coefficient

Procedia PDF Downloads 85
477 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are the capture efficiency and the pressure drop. Since higher capture efficiency typically comes with a higher pressure drop, which leads to higher operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber, using Ansys CFX and Matlab, is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach was chosen to avoid creating a new mesh for each fiber size, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data is then extracted and imported into Matlab, and the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particle. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to the single fiber theory from the literature.
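Among the listed dimensionless groups, the Stokes number is the one that governs inertial capture on the fiber. A minimal sketch of its usual definition for flow past a fiber, with illustrative (assumed) inputs:

```python
def stokes_number(rho_p, d_p, u0, mu, d_f):
    """Particle Stokes number for flow past a fiber:
    St = rho_p * d_p**2 * u0 / (18 * mu * d_f).
    St << 1: particles follow the streamlines around the fiber;
    St on the order of 1 or above: inertial impaction becomes likely."""
    return rho_p * d_p**2 * u0 / (18.0 * mu * d_f)

# Illustrative (assumed) inputs: 1 um water-like particle, 10 um fiber,
# 0.1 m/s face velocity, air viscosity.
st = stokes_number(rho_p=1000.0, d_p=1.0e-6, u0=0.1, mu=1.8e-5, d_f=10.0e-6)
```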

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 188
476 Vibration Analysis of FGM Sandwich Panel with Cut-Outs Using Refined Higher-Order Shear Deformation Theory (HSDT) Based on Isogeometric Analysis

Authors: Lokanath Barik, Abinash Kumar Swain

Abstract:

This paper presents a vibration analysis of an FGM sandwich structure with a complex profile governed by refined higher-order shear deformation theory (RHSDT) using isogeometric analysis (IGA). Functionally graded sandwich plates find a wide range of applications in the aerospace, defence, and aircraft industries due to their ability to grade the material distribution and thereby tailor the thermo-mechanical properties as desired. In practical applications, these structures generally have intricate profiles, and their response to loads is significantly affected by cut-outs. IGA is primarily a NURBS-based technique that is effective in solving higher-order differential equations due to the C1 continuity it inherently imposes on the solution space within a single patch. Complex structures generally require multiple patches to accurately represent the geometry, and hence there is a loss of continuity at adjoining patch junctions. Therefore, patch coupling is desired to maintain continuity requirements throughout the domain. In this work, a novel strong coupling approach is provided that generates a well-defined NURBS-based model while achieving continuity. The methodology is validated by comparing free vibration results for sandwich plates against the existing literature. The results are in good agreement with the analytical solution for different plate configurations and power-law indices. Numerical examples of rectangular and annular plates with various boundary conditions are discussed. Additionally, parametric studies are provided by varying the aspect ratio and porosity ratio and assessing their influence on the natural frequency of the plate.
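The continuity property that IGA exploits comes from the B-spline basis underlying NURBS. A minimal Cox-de Boor evaluation (a standard building block, not the authors' coupling scheme) illustrates it; the knot vector below is an arbitrary example:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the B-spline basis N_{i,p} at u.
    These functions are the building blocks of NURBS patches and are
    C^(p-1) continuous across simple interior knots."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:          # skip zero-width spans (repeated knots)
        val += (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, u, knots)
    return val

# open knot vector, quadratic basis (p = 2): C1 across the interior knots
knots = [0, 0, 0, 1, 2, 3, 3, 3]
vals = [bspline_basis(i, 2, 1.5, knots) for i in range(5)]
```

The basis functions are non-negative and form a partition of unity at any parameter value inside the knot span, which is what makes a single-patch IGA solution space well behaved; it is precisely this structure that is lost at multi-patch junctions and must be restored by coupling.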

Keywords: vibration analysis, FGM sandwich structure, multipatch geometry, patch coupling, IGA

Procedia PDF Downloads 56
475 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. Software used for these dynamic simulations relies on methods based on analytical models, since numerical models are impractical for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). The two methods for calculating CTFs covered by this research are the Laplace method and the state-space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight facade elements and for shorter sampling times. The algorithms for both the Laplace and state-space methods are implemented in Mathematica, and the results are compared to results from EnergyPlus and TRNSYS, since these programs use similar algorithms for the calculation of a building's energy demand. This research aims to check the efficiency of the Laplace and state-space methods for calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides the means for improving the algorithms used by these methods. As the reference for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
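A minimal state-space discretisation of a single homogeneous wall layer shows the ingredients of the method (a sketch, not the authors' Mathematica implementation; the concrete-like material values are hypothetical): the wall is reduced to x' = Ax + Bu and stepped exactly under zero-order hold using the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def wall_matrices(k, rho, cp, L, n=10):
    """Finite-volume state-space model of a homogeneous wall layer:
    x' = A x + B u, with x the nodal temperatures and u = [T_in, T_out]."""
    dx = L / n
    C = rho * cp * dx            # areal heat capacity of one node (J/m^2K)
    U = k / dx                   # conductance between adjacent nodes (W/m^2K)
    A = np.zeros((n, n)); B = np.zeros((n, 2))
    for i in range(n):           # node-to-node conduction
        if i > 0:
            A[i, i - 1] += U / C; A[i, i] -= U / C
        if i < n - 1:
            A[i, i + 1] += U / C; A[i, i] -= U / C
    A[0, 0] -= 2 * U / C;  B[0, 0] = 2 * U / C    # half-cell to inner surface
    A[-1, -1] -= 2 * U / C; B[-1, 1] = 2 * U / C  # half-cell to outer surface
    return A, B

# heavyweight concrete-like layer (hypothetical values), 1 h sampling time
A, B = wall_matrices(k=1.6, rho=2300.0, cp=880.0, L=0.3)
dt = 3600.0
Ad = expm(A * dt)                                    # exact ZOH state transition
Bd = np.linalg.solve(A, (Ad - np.eye(len(A))) @ B)   # ZOH input matrix
u = np.array([20.0, 0.0])                            # constant surface temps
x = np.zeros(len(A))
for _ in range(200):                                 # march to steady state
    x = Ad @ x + Bd @ u
```

At steady state the nodal temperatures settle onto the expected linear profile between the two surface temperatures; the numerical difficulty the abstract points to arises because, for heavyweight elements with short sampling times, Ad becomes very close to the identity and the CTF coefficients extracted from it are poorly conditioned.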

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 112
474 Calculating Asphaltenes Precipitation Onset Pressure by Using Cardanol as Precipitation Inhibitor: A Strategy to Increment the Oil Well Production

Authors: Camilo A. Guerrero-Martin, Erik Montes Paez, Marcia C. K. Oliveira, Jonathan Campos, Elizabete F. Lucas

Abstract:

Asphaltene precipitation is considered a formation damage problem that can reduce the oil recovery factor. It fouls piping and surface installations, causes serious flow assurance complications, and reduces oil well production. Therefore, researchers have shown an interest in chemical treatments to control this phenomenon. The aim of this paper is to assess the asphaltene precipitation onset of crude oils in the presence of cardanol by titrating the crude with n-heptane. Moreover, based on these results obtained at atmospheric pressure, the asphaltene precipitation onset pressures were calculated to predict asphaltene precipitation in the reservoir, using differential liberation and refractive index data of the oils. The influence of cardanol concentration on the asphaltene stabilization of three Brazilian crude oil samples (with similar API densities) was studied. Five formulations of cardanol in toluene were prepared: 0, 3, 5, 10, and 15 m/m%. The formulations were added to the crude at a 2:98 ratio. The petroleum samples were characterized by API density, elemental analysis, and differential liberation tests. The asphaltene precipitation onset (APO) was determined by titrating with n-heptane and monitoring with near-infrared (NIR) spectroscopy. UV-Vis spectroscopy experiments were also performed to assess the precipitated asphaltene content. The asphaltene precipitation envelopes (APE) were also determined by numerical simulation (Multiflash). In addition, adequate artificial lift systems (ALS) for the oils were selected based on the downhole well profile and a screening methodology. Finally, the oil flow rates were modelled by NODAL production system analysis in the PIPESIM software. The results of this study show that the asphaltene precipitation onsets of the crude oils were 2.2, 2.3, and 6.0 mL of n-heptane/g of oil. Cardanol was an effective inhibitor of asphaltene precipitation for the crude oils used in this study, since it displaces the precipitation pressure of the oil to lower values. This indicates that cardanol can increase oil well productivity.

Keywords: asphaltenes, NODAL analysis production system, precipitation pressure onset, inhibitory molecule

Procedia PDF Downloads 158
473 Comparison between the Performances of Different Boring Bars in the Internal Turning of Long Overhangs

Authors: Wallyson Thomas, Zsombor Fulop, Attila Szilagyi

Abstract:

Impact dampers are mainly used in the metal-mechanical industry in operations that generate excessive vibration in the machining system. Internal turning processes become unstable during the machining of deep holes, in which the tool holder is used with long overhangs (high length-to-diameter ratios). Devices coupled with active dampers are expensive and require the use of advanced electronics. On the other hand, passive impact dampers (PID, particle impact dampers) are cheaper alternatives that are easier to adapt to the machine's fixation system, since in this case a cavity filled with particles is simply added to the structure of the tool holder. The cavity dimensions and the diameter of the spheres are pre-determined. Thus, when passive dampers are employed during the machining process, the vibration is transferred from the tip of the tool to the structure of the boring bar, where it is absorbed by the fixation system. This work compares the behavior of a conventional solid boring bar and a boring bar with a passive impact damper in turning, using the highest possible L/D (length-to-diameter) ratio of the tool and an Easy Fix fixation system (also called a split bushing holding system). It also aims to optimize the impact absorption parameters, namely the filling percentage of the cavity and the diameter of the spheres. The test specimens were made of hardened material and machined in a Computer Numerical Control (CNC) lathe. The laboratory tests showed that when the cavity of the boring bar is totally filled with minimally spaced spheres of the largest diameter, the gain in absorption allowed obtaining, with an L/D of 6, the same surface roughness as obtained with the solid boring bar at an L/D of 3.4. The use of the passive particle impact damper therefore resulted in increased static stiffness and reduced deflection of the tool.

Keywords: active damper, fixation system, hardened material, passive damper

Procedia PDF Downloads 194
472 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical model selection (e.g., leave-one-out cross-validation).
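The core of such a correction is a single regression between apparent and true depths. A minimal sketch (slope through the origin on synthetic, noise-free data; not the authors' exact calibration) looks like:

```python
import numpy as np

def fit_correction_factor(apparent, true):
    """Least-squares slope through the origin: h_true ≈ f * h_apparent.
    f plays the role of the empirical refraction-correction factor."""
    apparent = np.asarray(apparent, float)
    true = np.asarray(true, float)
    return float(apparent @ true) / float(apparent @ apparent)

# synthetic calibration points: SfM-MVS underestimates depth roughly by the
# refractive index of water, so the idealised apparent depth is true / 1.34
true_depth = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # field-measured depths (m)
apparent_depth = true_depth / 1.34                  # idealised, noise-free
f = fit_correction_factor(apparent_depth, true_depth)
corrected = f * apparent_depth
```

With noise-free data the fitted factor recovers 1.34 exactly; the instability the abstract reports appears when the calibration points are few or the apparent depths are noisy, which biases the fitted slope.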

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 291
471 Evaluation of the Impact of Community Based Disaster Risk Management Applied In Landslide Prone Area; Reference to Badulla District

Authors: S. B. D. Samarasinghe, Malini Herath

Abstract:

Participatory planning is an important process for decision-making and for choosing the best alternative options for community welfare, the development of society, and the interactions between the community and professionals. People's involvement is considered the key element of participatory planning. Presently, participatory planning is used in many fields; it is not limited to planning but extends to disaster management, poverty alleviation, housing, etc. In the past, disaster management practice was a top-down approach, but because this raised many issues, it has been converted to a bottom-up approach. There are several approaches that can aid disaster management. Community-Based Disaster Risk Management (CBDRM) is a very successful participatory approach to risk management that is often applied by other disaster-prone countries. In the local context, CBDRM has been applied to prevent disease as well as disasters such as landslides, tsunamis, and floods. Three years ago, Sri Lanka initiated the CBDRM approach to minimize landslide vulnerability. Hence, this study mainly focuses on the impact of CBDRM approaches on landslide hazards, and on identifying their successes and failures from the perspectives of both the implementing parties and the community. This research is based on a qualitative method combined with a descriptive research approach. An evaluation framework was prepared via a literature review. Case studies were selected from the landslide CBDRM programs implemented by the Disaster Management Center and the National Building Research Organization in Badulla, and their processes were evaluated. Data collection was done through interviews and informal discussions. The responses were then quantified using the Relative Effectiveness Index, and the resulting numerical values were used to rank the programs' effectiveness and their successes, failures, and impacting factors. Results show that there are several failures on the part of both the implementing parties and the community. Overcoming those factors can make way for better conduct of future CBDRM programs.
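The abstract does not spell out the Relative Effectiveness Index formula; assuming it follows the common relative-index form (sum of Likert ratings over the maximum attainable score), a sketch with hypothetical survey responses would be:

```python
def relative_effectiveness_index(ratings, scale_max=5):
    """Relative index in (0, 1]: sum of ratings over the maximum attainable
    score. Assumed form; the abstract does not define the exact formula."""
    return sum(ratings) / (scale_max * len(ratings))

# hypothetical 5-point responses for two program criteria
early_warning = [5, 4, 4, 5, 3]
training = [2, 3, 2, 2, 3]
ranking = sorted(
    {"early_warning": relative_effectiveness_index(early_warning),
     "training": relative_effectiveness_index(training)}.items(),
    key=lambda kv: kv[1], reverse=True)
```

Criteria with a higher index rank as more effective, which is how the interview responses can be turned into the program ranking the study describes.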

Keywords: community-based disaster risk management, disaster management, preparedness, landslide

Procedia PDF Downloads 124
470 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students decide which courses to enroll in based on other students' opinions and ratings of teachers. Unfortunately, universities sometimes do not provide the personal, social, and academic stimulation students need to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, this method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions; however, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analysing such data: sentiment analysis. To the best of our knowledge, no research before has included results from SA in an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated between the tool and one of the researchers, who coded all answers independently. Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68, and the agreement reached 90.8%, both considered satisfactory. To test the hypothesized relations between SA results and students' emotional engagement, a structural equation model (SEM) was estimated. Results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16, 95% CI [.02, .30]). In other words, when students' opinion of their instructors' teaching practices is positive, students are more likely to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education and suggest the usefulness of this tool when evaluating relations between teaching practices and student outcomes.
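The agreement statistic reported above can be reproduced from two raters' labels. A minimal Cohen's kappa implementation (a sketch, independent of the ReCal2 tool the authors used, with toy labels):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                     for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# toy example: the NLP tool and the human rater coding four answers
tool = ["neg", "neg", "pos", "pos"]
human = ["neg", "pos", "pos", "pos"]
kappa = cohens_kappa(tool, human)
```

Kappa discounts the agreement two raters would reach by chance alone, which is why it is reported alongside the raw percent agreement.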

Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement

Procedia PDF Downloads 65
469 A Crystallization Kinetic Model for Long Fiber-Based Composite with Thermoplastic Semicrystalline Polymer Matrix

Authors: Nicolas Bigot, M'hamed Boutaous, Nahiene Hamila, Shihe Xin

Abstract:

Composite materials with polymer matrices are widely used in most industrial areas, particularly the aeronautical and automotive ones. Thanks to the development of high-performance thermoplastic semicrystalline polymer matrices, these materials exhibit increasingly efficient properties. The polymer matrix in composite materials can develop a specific crystalline structure characteristic of crystallization in a fibrous medium. In order to guarantee good mechanical behavior of structures and to optimize their performance, it is necessary to define realistic mechanical constitutive laws for such materials that consider their physical structure. The interaction between fibers and matrix is a key factor in the mechanical behavior of composite materials. The transcrystallization phenomenon, which develops in the matrix around the fibers, constitutes the interphase that greatly affects and governs the nature of the fiber-matrix interaction. Hence, it becomes fundamental to quantify its impact on the thermo-mechanical behavior of composite materials in relation to processing conditions. In this work, we propose a numerical model coupling the thermal and crystallization kinetics in long-fiber-based composite materials, considering both the spherulitic and transcrystalline types of induced structures. After validating the model against results from the literature and finding good agreement, a parametric study was conducted on the effects of the thermal kinetics, the fiber volume fraction, the deformation, and the pressure on the crystallization rate in the material under processing conditions. The ratio of transcrystallinity is highlighted and analyzed with regard to the thermal kinetics and gradients in the material. Experimental results on the process are foreseen and pave the way toward a mechanical constitutive law that accounts for the role of crystallization rates and crystallization types in the thermo-mechanical behavior of composite materials.
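Crystallization-kinetics models of this kind are commonly built on an Avrami-type law. The sketch below uses the isothermal Avrami equation as a stand-in (the authors' coupled model, with transcrystallinity, flow, and pressure effects, is necessarily richer); the rate constant and exponent are hypothetical:

```python
import numpy as np

def avrami(t, K, n):
    """Isothermal Avrami relative crystallinity: alpha = 1 - exp(-(K t)^n).
    K lumps nucleation and growth rates; n reflects the growth geometry."""
    return 1.0 - np.exp(-(K * np.asarray(t, float)) ** n)

# hypothetical rate constant and exponent; alpha rises from 0 toward 1
t = np.linspace(0.0, 5.0, 50)
alpha = avrami(t, K=1.0, n=2.0)
```

Non-isothermal extensions (e.g., a Nakamura-type formulation) make K a function of temperature, which is how such a kinetic law couples to the thermal field in a process simulation.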

Keywords: composite materials, crystallization, heat transfer, modeling, transcrystallization

Procedia PDF Downloads 179
468 Hydrodynamic Analysis of Fish Fin Kinematics of Oreochromis Niloticus Using Machine Learning and Image Processing

Authors: Paramvir Singh

Abstract:

The locomotion of aquatic organisms has long fascinated biologists and engineers alike, with fish fins serving as a prime example of nature's remarkable adaptations for efficient underwater propulsion. This paper presents a comprehensive study focused on the hydrodynamic analysis of fish fin kinematics, employing an innovative approach that combines machine learning and image processing techniques. Through high-speed videography and advanced computational tools, we gain insight into the complex and dynamic motion of the fins of a tilapia (Oreochromis niloticus). The study began by experimentally capturing videos of the various motions of a tilapia in a custom-made setup. Using deep learning and image processing on the videos, the motion of the caudal and pectoral fins was extracted. This motion included the fin configuration (i.e., the angle of deviation from the mean position) with respect to time. Numerical investigations of the flapping fins were then performed using a Computational Fluid Dynamics (CFD) solver. 3D models of the fins were created, mimicking the real-life geometry of the fins. The thrust characteristics of the separate fins (i.e., caudal and pectoral individually) and of the fins acting together were studied. The relationship and the phase between caudal and pectoral fin motion were also discussed. The key objectives included mathematical modeling of the motion of a flapping fin at different naturally occurring frequencies and amplitudes; the interactions between the two fins (caudal and pectoral) were also an area of keen interest. This work aims to improve on past research on similar topics. The results can also help in the better and more efficient design of propulsion systems for biomimetic underwater vehicles that are used to study aquatic ecosystems, explore uncharted or challenging underwater regions, perform ocean bed modeling, etc.
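Once a fin angle-versus-time signal has been extracted from video, its amplitude, phase, and mean offset can be summarised by a linear least-squares sinusoid fit. A sketch on synthetic data (the flapping frequency is assumed known here; the 2 Hz / 12-degree trace is hypothetical):

```python
import numpy as np

def fit_flapping(t, theta, freq):
    """Least-squares fit theta(t) ≈ a sin(2πf t) + b cos(2πf t) + c for a
    known flapping frequency; returns (amplitude, phase, mean offset)."""
    w = 2.0 * np.pi * freq
    X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    a, b, c = np.linalg.lstsq(X, theta, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a), c

# synthetic caudal-fin angle trace: 2 Hz flap, 12-degree amplitude,
# 0.5 rad phase, 3-degree mean deviation
t = np.linspace(0.0, 1.0, 200)
theta = 12.0 * np.sin(2.0 * np.pi * 2.0 * t + 0.5) + 3.0
amp, phase, offset = fit_flapping(t, theta, freq=2.0)
```

Fitting both fins' traces this way yields the per-fin amplitudes and, from the difference of the fitted phases, the caudal-pectoral phase lag that the study discusses.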

Keywords: biomimetics, fish fin kinematics, image processing, fish tracking, underwater vehicles

Procedia PDF Downloads 64
467 Effects of Soil Neutron Irradiation in Soil Carbon Neutron Gamma Analysis

Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert

Abstract:

The carbon sequestration question of modern times requires the development of an in-situ method for measuring soil carbon over large landmasses. Traditional chemical analytical methods used to evaluate large land areas require extensive soil sampling prior to processing for laboratory analysis; collectively, this is labor-intensive and time-consuming. An alternative is nuclear physics analysis, primarily in the form of pulsed fast-thermal neutron-gamma soil carbon analysis. This method is based on measuring the gamma-ray response that appears upon neutron irradiation of soil. The specific gamma line with an energy of 4.438 MeV appearing under neutron irradiation can be attributed to soil carbon nuclei. Based on the measured gamma line intensity, assessments of soil carbon concentration can be made. This analysis can be done directly in the field using a specially developed pulsed fast-thermal neutron-gamma system (PFTNA system). The system conducts in-situ analysis in a scanning mode coupled with GPS, which provides the soil carbon concentration and its distribution over large fields. The system has radiation shielding to minimize the dose rate (within radiation safety guidelines) for safe operator usage. This study addresses questions concerning the effect of neutron irradiation on soil health. Information regarding the absorbed neutron and gamma dose received by the soil and its distribution with depth is discussed; it was generated from Monte-Carlo simulations (MCNP6.2 code) of neutron and gamma propagation in soil, and the resulting data were used to analyze possible induced irradiation effects. The physical, chemical, and biological effects of neutron soil irradiation were considered. From a physical standpoint, we considered the induction of new isotopes by the neutrons produced by the PFTNA system and estimated the possible increase in the post-irradiation gamma background by comparison to the natural background. An insignificant increase in gamma background appeared immediately after irradiation but returned to original values after several minutes due to the decay of short-lived new isotopes. From a chemical standpoint, possible radiolysis of the water present in soil was considered. Based on simulations of water radiolysis, we concluded that the gamma dose rate used cannot produce radiolysis products at notable rates. Possible effects of neutron irradiation (by the PFTNA system) on soil biota were also assessed experimentally. No notable changes were observed at the taxonomic level, nor was functional soil diversity affected. Our assessment suggests that the use of a PFTNA system with a neutron flux of 1e7 n/s for soil carbon analysis does not notably affect soil properties or soil health.
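The observation that the gamma background returns to its original value within minutes is consistent with simple exponential decay of short-lived activation products. A sketch with a hypothetical half-life (the abstract does not name the specific isotopes involved):

```python
import math

def residual_activity(a0, half_life_s, t_s):
    """Activity of a short-lived activation product t seconds after the
    irradiation stops: A(t) = A0 * exp(-ln(2) * t / T_half)."""
    return a0 * math.exp(-math.log(2) * t_s / half_life_s)

# hypothetical isotope with a 2-minute half-life: after ~20 minutes
# (10 half-lives) under 0.1% of the initial excess activity remains
frac_left = residual_activity(1.0, 120.0, 1200.0)
```

This is why a small excess over the natural background right after irradiation is compatible with no measurable long-term increase.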

Keywords: carbon sequestration, neutron gamma analysis, radiation effect on soil, Monte-Carlo simulation

Procedia PDF Downloads 116
466 Numerical Study of a Ventilation Principle Based on Flow Pulsations

Authors: Amir Sattari, Mac Panah, Naeim Rashidfarokhi

Abstract:

To enhance the mixing of fluid in a rectangular enclosure with a circular inlet and outlet, an energy-efficient approach is further investigated through computational fluid dynamics (CFD). Particle image velocimetry (PIV) measurements help confirm that the pulsation of the inflow velocity improves the mixing performance inside the enclosure considerably without increasing energy consumption. In this study, multiple CFD simulations with different turbulent models were performed. The results obtained were compared with experimental PIV results. This study investigates small-scale representations of flow patterns in a ventilated rectangular room. The objective is to validate the concept of an energy-efficient ventilation strategy with improved thermal comfort and reduction of stagnant air inside the room. Experimental and simulated results confirm that through pulsation of the inflow velocity, strong secondary vortices are generated downstream of the entrance wall-jet. The pulsatile inflow profile promotes a periodic generation of vortices with stronger eddies despite a relatively low inlet velocity, which leads to a larger boundary layer with increased kinetic energy in the occupied zone. A real-scale study was not conducted; however, it can be concluded that a constant velocity inflow profile can be replaced with a lower pulsated flow rate profile while preserving the mixing efficiency. Among the turbulent CFD models demonstrated in this study, SST-kω is most advantageous, exhibiting a similar global airflow pattern as in the experiments. The detailed near-wall velocity profile is utilized to identify the wall-jet instabilities that consist of mixing and boundary layers. The SAS method was later applied to predict the turbulent parameters in the center of the domain. In both cases, the predictions are in good agreement with the measured results.
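One simple way to impose the pulsation while keeping the time-mean flow rate of the constant-inflow case is a sinusoidally modulated inlet velocity (a sketch; the study's actual inlet signal is not specified in the abstract, and the mean velocity, amplitude ratio, and frequency below are hypothetical):

```python
import numpy as np

def pulsatile_inlet(t, u_mean, amplitude_ratio, freq):
    """Sinusoidally pulsed inlet velocity whose time-mean over whole periods
    equals u_mean, so the pulsed case supplies the same air as the constant one."""
    return u_mean * (1.0 + amplitude_ratio * np.sin(2.0 * np.pi * freq * t))

# hypothetical signal: 0.5 m/s mean, 60% amplitude, 4 Hz, sampled over 1 s
t = np.arange(0.0, 1.0, 1e-3)
u = pulsatile_inlet(t, u_mean=0.5, amplitude_ratio=0.6, freq=4.0)
```

Equal time-mean inflow is what allows the pulsed and constant cases to be compared at the same nominal ventilation rate, so that any gain in mixing comes from the periodic vortex generation rather than from extra supplied air.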

Keywords: CFD, PIV, pulsatile inflow, ventilation, wall-jet

Procedia PDF Downloads 161
465 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through counting, positioning, and object-recognition algorithms using a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators to move pieces from one location to another during production. These devices must be programmed beforehand with a logic routine to perform well. Nowadays, the main targets of every industry are production, quality, and the fast elaboration of the different stages and processes in the chain of production of any product or service offered. The principal goal of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure having a different color, and to link it with a group of conveyor systems that organize the figures into cubicles, which are also distinguished from one another by color. The project is based on artificial vision, so the methodology must be strict; it is detailed below. (1) Software: the project uses Qt Creator linked with the OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera. (2) Image acquisition: to use the OpenCV libraries, it is first necessary to acquire images, which can be captured by a computer's web camera or a dedicated camera. (3) RGB color recognition is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. (4) To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; the image is then binarized, leaving the figure in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. (5) After the color and figure have been identified, the program drives the conveyor systems, whose actuators classify the figures into their respective cubicles. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for further processing. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 363
464 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for electrical power converters in grid integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC’s unidirectional air turbine-generator output power results in discrete large power pulses. Therefore, the system requires power conditioning prior to integrating to the grid. This is achieved by using a back to back power converter with an energy storage system. A Li-Ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters used in the arrangement are controlled using finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller is to regulate the speed of the turbine at a set rotational speed to uphold the air turbine at a desirable speed range under varying wave conditions. The inverter controller is to maintain the output power to the grid adhering to grid codes. The dc-dc bidirectional converter controller is to set the dc-link voltage at its reference value. The software modeling of the OWC system and FCS-MPC is carried out in the MATLAB/Simulink software using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at AMC Electronic laboratory. The designed FCS-MPC for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instrument’s TIVA C Series EK-TM4C123GXL Launchpad Evaluation Boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive 2kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). 
The power module consists of a three-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The FCS-MPC is validated by comparing these experimental results to the MATLAB/Simulink simulation results in similar scenarios. The results show that, under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
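The core of FCS-MPC is a brute-force search: at each sampling instant, the controller predicts the system state one step ahead for every admissible switch state of the converter and applies the one that minimises a cost function. The abstract does not give the controller equations, so the following is only a minimal sketch for a two-level three-phase inverter feeding an R-L load with back-EMF; the parameters (`R`, `L`, `Ts`, `Vdc`) and the simple current-tracking cost are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical parameters (not from the paper): series R-L load model,
# sampling period, and dc-link voltage.
R, L, Ts, Vdc = 0.5, 10e-3, 50e-6, 400.0

# The 8 switching states of a two-level three-phase inverter.
STATES = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def voltage_vector(state):
    """Map a switch state to its alpha-beta voltage vector (Clarke transform)."""
    a, b, c = state
    v_alpha = (2 * a - b - c) * Vdc / 3
    v_beta = (b - c) * Vdc / np.sqrt(3)
    return np.array([v_alpha, v_beta])

def fcs_mpc_step(i_meas, i_ref, e_back):
    """Evaluate all switch states and return the one whose one-step current
    prediction minimises the tracking error (the FCS-MPC cost function)."""
    best_state, best_cost = None, float("inf")
    for s in STATES:
        v = voltage_vector(s)
        # Forward-Euler discretisation of di/dt = (v - R*i - e) / L
        i_pred = i_meas + (Ts / L) * (v - R * i_meas - e_back)
        cost = np.linalg.norm(i_ref - i_pred)
        if cost < best_cost:
            best_state, best_cost = s, cost
    return best_state, best_cost
```

In the paper's setup, each of the three converters (rectifier, inverter, dc-dc) would run its own version of this loop on its microcontroller, with the cost function built from its own regulated variable (turbine speed, grid power, or dc-link voltage) rather than the load current used here.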

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

463 Boredom in the Classroom: Sentiment Analysis on Teaching Practices and Related Outcomes

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Students’ emotional experiences have been a widely discussed theme among researchers and have proven central to students’ outcomes. Yet, up to now, far too little attention has been paid to teaching practices that relate to students’ negative emotions in higher education. The present work aims to examine the relationship between teachers’ teaching practices (i.e., students’ evaluations of teaching and autonomy support), students’ feelings of boredom, agentic engagement, and motivation in the higher education context. To do so, the present study incorporates one of the most popular tools in natural language processing to address students’ evaluations of teaching: sentiment analysis (SA). Whereas most research has focused on building SA models and assessing students’ satisfaction with teachers and courses, to the authors’ best knowledge, no research before has included results from SA in an explanatory model. A total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) participated in the study. Students were enrolled in degree and master’s studies at the faculty of Education of a public university in Spain. Data was collected using an online questionnaire, accessed through a QR code, which students completed during a teaching period when the assessed teacher was not present. To assess students’ sentiments towards their teachers’ teaching, we asked them the following open-ended question: “If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?”. Sentiment analysis was performed with Microsoft's pre-trained model. For this study, we relied on the probability of a student’s answer belonging to the negative category. To assess the reliability of the measure, inter-rater agreement between this NLP tool and one of the researchers, who independently coded all answers, was examined.
The average pairwise percent agreement and Cohen’s kappa were calculated with ReCal2. The agreement reached was 90.8% and Cohen’s kappa was .68, both considered satisfactory. To test the hypothesized relations, a structural equation model (SEM) was estimated. The model fit indices displayed a good fit to the data: χ² (134) = 351.129, p < .001, RMSEA = .07, SRMR = .09, TLI = .91, CFI = .92. Specifically, results show that boredom was negatively predicted by autonomy-support practices (β = -.47 [-.61, -.33]), whereas for the negative sentiment extracted from the SET, this relation was positive (β = .23 [.16, .30]). In other words, when students’ opinions of their instructors’ teaching practices were negative, they were more likely to feel bored. Regarding the relations between boredom and student outcomes, results showed a negative predictive value of boredom on students’ motivation to study (β = -.46 [-.63, -.29]) and agentic engagement (β = -.24 [-.33, -.15]). Altogether, these results show a promising future for sentiment analysis techniques in the field of education, as they prove the usefulness of this tool for evaluating relations between teaching practices and student outcomes.
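The two reliability statistics reported above, percent agreement and Cohen's kappa, have simple closed forms: observed agreement is the fraction of items both raters label identically, and kappa corrects it for the agreement expected by chance from the raters' marginal label frequencies. The study computed them with ReCal2; the sketch below only illustrates the underlying calculation, not that tool.

```python
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Fraction of items on which the two raters assign the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Observed agreement corrected for chance agreement:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(labels_a)
    p_o = percent_agreement(labels_a, labels_b)
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement if both raters labelled independently with their
    # observed marginal frequencies.
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

With one rater being the NLP model and the other the human coder, a kappa of .68 on 90.8% raw agreement indicates substantial agreement beyond chance, which is why the measure was judged reliable enough to enter the SEM.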

Keywords: sentiment analysis, boredom, motivation, agentic engagement

462 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of force that can be applied to a specimen without causing permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage and the type of fracture depend on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, this research develops a convenient thermography-based (temperature-based) technique for precisely detecting yield point initiation during tensile tests. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, measuring deformation with various strain gauges and monitoring surface temperature with the camera. The yield point of the specimens was estimated from the temperature dip that occurs, due to the thermoelastic effect, at the onset of plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of ambient temperature and light sources were checked by carrying out the tests both in the daytime and at midnight; from the signal-to-noise ratio (SNR) of the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of the testing time and the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed in Abaqus/Standard, using the exact implementation technique, to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the thermographic technique.
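The detection idea above reduces to a simple signal-processing step: during elastic loading the thermoelastic effect cools the surface roughly linearly with stress, and the onset of plastic deformation reverses the trend, so the yield point appears as a local minimum (the "dip") in the temperature trace. The abstract does not describe the authors' processing pipeline, so the following is only a hypothetical sketch of dip localisation plus the SNR metric mentioned; the smoothing window is an illustrative choice.

```python
import numpy as np

def detect_yield_dip(temps, window=5):
    """Locate the thermoelastic temperature dip in a surface-temperature
    trace: smooth with a moving average, then return the index of the
    minimum, i.e., where elastic cooling gives way to plastic heating."""
    smoothed = np.convolve(temps, np.ones(window) / window, mode="valid")
    # Shift by half the window so the index refers to the original trace.
    return int(np.argmin(smoothed)) + window // 2

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from mean signal and noise power."""
    return 10 * np.log10(np.mean(np.square(signal)) / np.mean(np.square(noise)))
```

Comparing `snr_db` for daytime and midnight recordings is one way to make the claim of insensitivity to visible light quantitative: similar SNR under both conditions supports the conclusion that the infrared measurement is independent of the lighting.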

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

461 Doctor-Patient Interaction in an L2: Pragmatic Study of a Nigerian Experience

Authors: Ayodele James Akinola

Abstract:

This study investigated the use of English in doctor-patient interaction at a university teaching hospital in a southwestern state of Nigeria, with the aim of identifying the role of communication in an L2, patterns of communication, discourse strategies, pragmatic acts, and the contexts that shape the interaction. Jacob Mey’s notion of pragmatic acts, complemented with Emanuel and Emanuel’s model of the doctor-patient relationship, provided the theoretical standpoint. Data comprising 7 audio-recorded doctor-patient interactions were collected from a university hospital in Oyo state, Nigeria. Interactions involving the use of the English language were purposefully selected. These were supplemented with patients’ case notes and interviews conducted with doctors. Transcription was patterned on a modified version of Arminen’s conversation-analysis notation. In the study, interaction in English between doctors and patients shows a preponderance of direct translation, code-mixing and code-switching, Nigerianisms, and the use of cultural worldviews to express medical experience. Irrespective of these, three patterns of communication, namely the paternalistic, interpretive, and deliberative, were identified. These were exhibited through varying discourse strategies. The paternalistic model reflected slightly casual conversational conventions and registers. These were achieved through the pragmemic activities of situated speech acts, psychological and physical acts, via patients’ quarrel-induced acts, controlled and managed through the doctors’ shared situational knowledge. All these produced the practs of empathising, pacifying, promising, and instructing. The patients’ practs in the paternalistic model were explaining, provoking, associating, and greeting. The informative model reveals the use of adjacency pairs, formal turn-taking, precise detailing, institutional talk, and dialogic strategies.
Through the activities of speech, prosody, and physical acts, the practs of declaring, alerting, and informing were utilised by the doctors, while the patients exploited the practs of adapting, requesting, and selecting. The negotiating conversational strategy of the deliberative model featured in speech, prosody, and physical acts. In this model, the practs of suggesting, teaching, persuading, and convincing were utilised by the doctors; the patients deployed the practs of questioning, demanding, considering, and deciding. The contextual variables revealed that other patterns (such as the phatic and informative) are also used, and that they coalesce in the hospital within the situational and psychological contexts. However, the paternalistic model was predominantly employed by doctors with over six years in practice, while the interpretive, informative, and deliberative models were found among registrars and others with fewer than six years of medical practice. Doctors’ experience, patients’ peculiarities, and shared cultural knowledge influenced doctor-patient communication in the study.

Keywords: pragmatics, communication pattern, doctor-patient interaction, Nigerian hospital situation
