Search results for: computational model(s)
7070 Analysis of Pressure Drop in a Concentrated Solar Collector with Direct Steam Production
Authors: Sara Sallam, Mohamed Taqi, Naoual Belouaggadia
Abstract:
Solar thermal power plants using parabolic trough collectors (PTC) are currently a powerful technology for generating electricity. Most of these solar power plants use thermal oils as the heat transfer fluid. The oil is heated in the solar field and transfers the absorbed heat in an oil-water heat exchanger for the production of steam driving the turbines of the power plant. Currently, efforts are underway to develop PTCs with direct steam generation (DSG). This process consists of circulating pressurized water in the receiver tube to generate steam directly in the solar loop. This makes it possible to reduce the investment and maintenance costs of the PTCs (the oil-water exchangers are removed) and to avoid the environmental risks associated with the use of thermal oils. The pressure drops in these systems are an important parameter for ensuring their proper operation. Determining these losses is complex because of the presence of the two phases, and most often they are described by models using empirical correlations. A comparison of these models with experimental data was performed. Our calculations focused on the evolution of the pressure of the liquid-vapor mixture along the receiver tube of a PTC-DSG for pressure values and inlet flow rates ranging from 3 to 10 MPa and from 0.4 to 0.6 kg/s, respectively. The comparison of the numerical results with experimental data demonstrates the validity of some models depending on the pressures and inlet flow rates in the PTC-DSG receiver tube. The analysis of these two parameters' effects on the evolution of the pressure along the receiver tube shows that increasing the inlet pressure and decreasing the flow rate lead to minimal pressure losses.
Keywords: direct steam generation, parabolic trough collectors, pressure drop, empirical models
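Empirical two-phase pressure-drop correlations of the kind compared in this abstract can be illustrated with the homogeneous equilibrium model, one of the simplest. The sketch below is a minimal Python illustration, not the authors' implementation; the Blasius friction factor, the McAdams viscosity mixing rule, and the property values in the test are all assumed choices for a water/steam mixture.

```python
def homogeneous_dp_dz(G, x, rho_l, rho_g, mu_l, mu_g, D):
    """Frictional pressure gradient (Pa/m) of a liquid-vapor mixture
    using the homogeneous equilibrium model.

    G: mass flux (kg/m^2/s), x: vapor quality (-), D: tube diameter (m),
    rho_*: phase densities (kg/m^3), mu_*: phase viscosities (Pa.s).
    """
    rho_h = 1.0 / (x / rho_g + (1.0 - x) / rho_l)   # homogeneous mixture density
    mu_h = 1.0 / (x / mu_g + (1.0 - x) / mu_l)      # McAdams mixture viscosity
    Re = G * D / mu_h                                # mixture Reynolds number
    f = 0.079 * Re ** -0.25                          # Blasius (turbulent flow)
    return 2.0 * f * G ** 2 / (D * rho_h)            # Fanning-type friction gradient
```

As the abstract's results suggest, the gradient grows as the vapor quality rises (the mixture density falls), so losses accumulate along the receiver tube as more steam is generated.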
Procedia PDF Downloads 138
7069 A Systems Approach to Modelling Emergent Behaviour in Maritime Control Systems Using the Composition, Environment, Structure, and Mechanisms Metamodel
Authors: Odd Ivar Haugen
Abstract:
Society increasingly relies on complex systems whose behaviour is determined not by the properties of each part but by the interactions between them. The behaviour of such systems is emergent. Modelling emergent system behaviour requires a systems approach that incorporates the concepts capable of determining such behaviour. The CESM metamodel is a model of system models. A set of system models needs to address the elements of CESM at different levels of abstraction to be able to model the behaviour of a complex system. Modern ships contain numerous pieces of sophisticated equipment, often accompanied by local safety systems that protect their integrity. These control systems are then connected into a larger integrated system in order to achieve the ship's objective or mission. The integrated system becomes what is commonly known as a system of systems, which can be termed a complex system. Examples of such complex systems are the dynamic positioning system and the power management system. Three ship accidents are provided as examples of how system complexity may contribute to accidents. The three accidents are then discussed in terms of how the Multi-Level/Multi-Model Safety Analysis might catch scenarios such as those leading to the accidents described.
Keywords: emergent properties, CESM metamodel, multi-level/multi-model safety analysis, safety, system complexity, system models, systems thinking
Procedia PDF Downloads 2
7068 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming
Authors: William Chung
Abstract:
This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately, exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP depends heavily on the number of decomposition steps. If the number of decomposition steps can be greatly reduced, the performance of DW-LP can be improved significantly. One way to reduce the number of decomposition steps is to increase the number of proposals passed from the subproblem to the master problem. To do so, we propose adding a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem, and the number of decomposition steps can be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the single LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP is a tradeoff between the number of approximate-LP subproblems being formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.
Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems
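The dual-price/proposal exchange described above hinges on the reduced cost of each subproblem proposal: in a minimization, the master accepts a proposal only if its reduced cost is negative. A minimal sketch of that check (with hypothetical variable names, not the paper's notation) might look as follows:

```python
import numpy as np

def proposal_reduced_cost(c, A_link, x_star, pi, sigma):
    """Reduced cost of a subproblem proposal x_star in DW column generation.

    c:      subproblem objective coefficients
    A_link: linking-constraint coefficients held in the master problem
    pi:     master dual prices on the linking constraints
    sigma:  master dual on the convexity row
    A negative value means the proposal improves the master (minimization).
    """
    return float((c - pi @ A_link) @ x_star - sigma)
```

With multiple approximate-LP subproblems, this test would be applied to each returned proposal, and all columns with negative reduced cost could enter the master in a single decomposition step.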
Procedia PDF Downloads 164
7067 Fine-Tuned Transformers for Translating Multi-Dialect Texts to Modern Standard Arabic
Authors: Tahar Alimi, Rahma Boujebane, Wiem Derouich, Lamia Hadrich Belguith
Abstract:
Machine translation of low-resourced languages such as Arabic is a challenging task. Despite the appearance of sophisticated models based on the latest deep learning techniques, namely transfer learning and transformers, existing models remain incapable of producing acceptable translations of Arabic Dialects (AD), which have no official status. In this paper, we present a machine translation model designed to translate Arabic multidialectal content into Modern Standard Arabic (MSA), leveraging both new and existing parallel resources. The model achieved its best results for the Levantine and Maghrebi dialects, with a BLEU score of 64.99.
Keywords: Arabic translation, dialect translation, fine-tune, MSA translation, transformer, translation
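The BLEU score reported above can be reproduced in spirit with a compact sentence-level implementation (here with uniform weights over 1- to 4-grams and a brevity penalty; production evaluations of the kind behind a reported 64.99 typically use a smoothed corpus-level variant such as sacreBLEU):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Unsmoothed sentence-level BLEU on whitespace-tokenized strings."""
    c_tokens, r_tokens = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(c_tokens[i:i + n]) for i in range(len(c_tokens) - n + 1))
        r_ngrams = Counter(tuple(r_tokens[i:i + n]) for i in range(len(r_tokens) - n + 1))
        # clipped n-gram matches against the reference
        overlap = sum(min(cnt, r_ngrams[g]) for g, cnt in c_ngrams.items())
        total = max(sum(c_ngrams.values()), 1)
        if overlap == 0:
            return 0.0                      # unsmoothed: any zero precision zeroes BLEU
        log_prec += math.log(overlap / total) / max_n
    # brevity penalty: penalize candidates shorter than the reference
    bp = min(1.0, math.exp(1.0 - len(r_tokens) / len(c_tokens)))
    return bp * math.exp(log_prec)
```

A perfect match scores 1.0 (often reported as 100); a BLEU of 64.99 on this scale corresponds to 0.6499.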
Procedia PDF Downloads 58
7066 Supplemental Visco-Friction Damping for Dynamical Structural Systems
Authors: Sharad Singh, Ajay Kumar Sinha
Abstract:
Coupled dampers, such as viscoelastic-frictional dampers, are a relatively new technique for supplemental damping. In this paper, innovative visco-frictional damping models are presented and investigated. The paper attempts to couple frictional and fluid viscous dampers into a single unit of supplemental dampers. The visco-frictional damping models are developed by series and parallel coupling of frictional and fluid viscous dampers using the Maxwell and Kelvin-Voigt models. Time-history analyses were performed using numerical simulation on a single-degree-of-freedom (SDOF) system with varying fundamental periods, subject to a set of 12 ground motions. The simulation used the direct time integration method, and the MATLAB programming tool was used to carry out the numerical simulation. The response behavior was analyzed for varying time periods and added damping. The paper compares the response-reduction behavior of the two modes of coupling, highlights the performance efficiency of the suggested damping models, and presents a mathematical modeling approach to visco-frictional dampers while suggesting the suitable mode of coupling between the two sub-units.
Keywords: hysteretic damping, Kelvin model, Maxwell model, parallel coupling, series coupling, viscous damping
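Direct time integration of an SDOF system of the kind used in this study can be sketched with the Newmark average-acceleration method (unconditionally stable for linear systems). This is an illustrative stand-in for the authors' MATLAB simulation, not their code, and the test values are arbitrary:

```python
def newmark_sdof(m, c, k, p, dt, u0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Integrate m*u'' + c*u' + k*u = p(t) by the Newmark method.

    p is a list of load values at each time step; returns the
    displacement history. beta=1/4, gamma=1/2 is average acceleration.
    """
    a = (p[0] - c * v0 - k * u0) / m                         # initial acceleration
    u, v = u0, v0
    kh = k + gamma * c / (beta * dt) + m / (beta * dt * dt)  # effective stiffness
    out = [u0]
    for pn in p[1:]:
        rhs = (pn
               + m * (u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2.0 * beta) - 1.0) * a))
        un = rhs / kh
        an = (un - u) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        vn = v + dt * ((1.0 - gamma) * a + gamma * an)
        u, v, a = un, vn, an
        out.append(u)
    return out
```

A supplemental visco-frictional damper would enter such a simulation as additional restoring-force terms (velocity-dependent for the viscous sub-unit, sign-dependent for the frictional one) on the right-hand side.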
Procedia PDF Downloads 157
7065 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics
Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí
Abstract:
A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest for achieving efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. However, 3D fabric reinforcement is of particular interest because of the versatility in the weaving pattern, with binder yarn and in-plane yarn arrangements, to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between the inter-yarn spaces, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores in the yarn. Several studies have employed the Brinkman equation to account for the flow through dual-scale porosity reinforcements when estimating their permeability. Furthermore, to reduce the computational effort of dual-scale flow, a scale separation criterion based on the ratio of yarn permeability to yarn spacing was also proposed to delimit the dual-scale and negligible micro-scale flow regimes for the prediction of mesoscale permeability. In the present work, the influence of intra-yarn permeability on the mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing on the plain weave, as well as the position of the binder yarn and the number of in-plane yarn layers in 3D woven fabric.
The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns from Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding
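Once a CFD solution yields a flow rate for a prescribed pressure drop across the unit cell, the mesoscale permeability follows directly from Darcy's law, and the intra-yarn (micro-scale) permeability can be estimated from Gebart's model, as mentioned above. The sketch below is illustrative only; the Gebart constants shown are those usually quoted for a quadratic fiber packing, and all numerical values in the test are assumed:

```python
import math

def darcy_permeability(Q, mu, L, A, dp):
    """Mesoscale permeability K (m^2) from Darcy's law Q = K*A*dp/(mu*L).

    Q: volumetric flow rate (m^3/s), mu: viscosity (Pa.s),
    L: flow length (m), A: cross-section (m^2), dp: pressure drop (Pa).
    """
    return Q * mu * L / (A * dp)

def gebart_transverse(Vf, R, Vf_max=math.pi / 4.0,
                      C1=16.0 / (9.0 * math.pi * math.sqrt(2.0))):
    """Gebart-type transverse intra-yarn permeability for fiber radius R
    and fiber volume fraction Vf (quadratic packing constants assumed)."""
    return C1 * R ** 2 * (math.sqrt(Vf_max / Vf) - 1.0) ** 2.5
```

The intra-yarn value from the second function feeds the scale separation criterion: when sqrt(K_yarn) is small relative to the yarn spacing, micro-scale flow can be neglected in the mesoscale prediction.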
Procedia PDF Downloads 94
7064 Parkinson's Disease Gene Identification Using Physicochemical Properties of Amino Acids
Authors: Priya Arora, Ashutosh Mishra
Abstract:
Gene identification, in pursuit of the mutated genes leading to Parkinson's disease, poses a challenge for the proactive cure of the disorder. Computational analysis is an effective technique for exploring genes in the form of protein sequences, as theoretical and manual analysis is infeasible. The limitations and effectiveness of a particular computational method depend entirely on the data previously available for disease identification. This article presents a sequence-based classification method for the identification of genes responsible for Parkinson's disease. In the initiation phase, the physicochemical properties of amino acids transform protein sequences into a feature vector. The second phase of the method employs Jaccard distances to select negative genes from the candidate population. The third phase involves artificial neural networks for making the final predictions. The proposed approach is compared with state-of-the-art methods on the basis of F-measure. The results confirm the efficiency of the method.
Keywords: disease gene identification, Parkinson's disease, physicochemical properties of amino acids, protein sequences
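The negative-gene selection step described above relies on Jaccard distances between feature vectors. A minimal illustrative version over binary vectors, with a hypothetical selection rule (the paper's actual features come from physicochemical properties and its threshold is not given here), could be:

```python
def jaccard_distance(a, b):
    """Jaccard distance between two binary feature vectors of equal length."""
    inter = sum(1 for x, y in zip(a, b) if x and y)   # features present in both
    union = sum(1 for x, y in zip(a, b) if x or y)    # features present in either
    return 1.0 - inter / union if union else 0.0

def select_negatives(candidates, positives, threshold):
    """Keep candidates whose minimum distance to every known disease
    gene exceeds the threshold (hypothetical selection rule)."""
    return [c for c in candidates
            if min(jaccard_distance(c, p) for p in positives) > threshold]
```

Genes far from all known disease genes in feature space then serve as the negative class when training the neural network in the third phase.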
Procedia PDF Downloads 139
7063 Mechanistic Modelling to De-Risk Process Scale-Up
Authors: Edwin Cartledge, Jack Clark, Mazaher Molaei-Chalchooghi
Abstract:
The mixing in the crystallization step of active pharmaceutical ingredient manufacture was studied via advanced modeling tools to enable a successful scale-up. A virtual representation of the vessel was created, and computational fluid dynamics was used to simulate multiphase flow and, thus, the mixing environment within the vessel. The study identified a significant dead zone underneath the impeller and found that increasing the impeller speed and power did not improve the mixing. A series of sensitivity analyses showed that, to improve mixing, the vessel had to be redesigned, and that optimal mixing could be obtained by adding two extra cylindrical baffles. The same two baffles from the simulated environment were then constructed and added to the process vessel. By identifying these potential issues before starting manufacture and modifying the vessel to ensure good mixing, this study averted a failed crystallization and potential batch disposal, which could have resulted in a significant loss of high-value material.
Keywords: active pharmaceutical ingredient, baffles, computational fluid dynamics, mixing, modelling
Procedia PDF Downloads 94
7062 Quasi-Periodicity of Tonic Intervals in Octave and Innovation of Themes in Music Compositions
Authors: R. C. Tyagi
Abstract:
The quasi-periodicity of frequency intervals observed in the Shruti-based absolute scale of music has been used to graphically identify the anchor notes 'Vadi' and 'Samvadi', which are nodal points for the expansion, elaboration, and iteration of the emotional theme represented by the characteristic tonic arrangement in Raga compositions. This analysis leads to defining the tonic parameters of the octave, including the key-note frequency, the anchor notes of the tonic intervals, and the onset and range of quasi-periodicities as exponents of 2. Such uniformity of representation of characteristic data would facilitate the computational analysis and synthesis of music compositions and also help develop noise suppression techniques. Criteria for tuning strings for compatibility with the placement of frets on fingerboards are discussed. Natural rhythmic cycles in music compositions are analytically shown to lie between 3 and 126 beats.
Keywords: absolute scale, anchor notes, computational analysis, frets, innovation, noise suppression, quasi-periodicity, rhythmic cycle, tonic interval, Shruti
Procedia PDF Downloads 303
7061 Building an Opinion Dynamics Model from Experimental Data
Authors: Dino Carpentras, Paul J. Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is a sub-field of agent-based modeling that focuses on people's opinions and their evolution over time. Despite the rapid increase in the number of publications in this field, it is still not clear how to apply these models to real-world scenarios. Indeed, there is no agreement on how people update their opinions while interacting. Furthermore, it is not clear whether different topics show the same dynamics (e.g., more polarized topics may behave differently). These problems are mostly due to the lack of experimental validation of the models. Some previous studies have started bridging this gap in the literature by directly measuring people's opinions before and after an interaction. However, these experiments force people to express their opinion as a number instead of using natural language (which is then, eventually, encoded as numbers). This is not the way people normally interact, and it may strongly alter the measured dynamics. Another limitation of these studies is that they usually average all the topics together without checking whether different topics show different dynamics. In our work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions in natural language ("agree" or "disagree"). We also measured the certainty of their answer, expressed as a number between 1 and 10. However, this value was not shown to other participants, to keep the interaction based on natural language. We then showed the opinion (but not the certainty) of another participant and, after a distraction task, repeated the measurement. To make the data compatible with opinion dynamics models, we multiplied opinion and certainty to obtain a new parameter (here called "continuous opinion") ranging from -10 to +10 (using agree = 1 and disagree = -1). We first analyzed the 5 topics individually, finding that all of them behaved in a similar way despite having different initial opinion distributions.
This suggested that the same model could be applied to different unpolarized topics. We also observed that people tend to maintain similar levels of certainty, even when they change their opinion. This strongly violates what common models suggest, where people starting at, for example, +8 will first move towards 0 instead of jumping directly to -8. We also observed social influence, meaning that people exposed to "agree" were more likely to move to higher levels of continuous opinion, while people exposed to "disagree" were more likely to move to lower levels. However, we also observed that the effect of influence was smaller than the effect of random fluctuations. This configuration also differs from standard models, where noise, when present, is usually much smaller than the effect of social influence. Starting from this, we built an opinion dynamics model that explains more than 80% of the data variance. This model was also able to show the natural emergence of polarization from unpolarized states. This experimental approach offers a new way to build models grounded in experimental data. Furthermore, the model offers new insight into the fundamental terms of opinion dynamics models.
Keywords: experimental validation, micro-dynamics rule, opinion dynamics, update rule
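The measurement scheme and the empirical findings above (the opinion-times-certainty encoding; weak influence, strong noise) can be summarized in a toy update rule. The parameter values below are hypothetical, chosen only so that the noise term dominates the influence term as the study reports; this is not the authors' fitted model:

```python
import random

def continuous_opinion(stance, certainty):
    """Map ('agree'/'disagree', certainty in 1..10) to the [-10, 10] scale."""
    return certainty if stance == "agree" else -certainty

def update(opinion, neighbor_stance, influence=0.4, noise_sd=1.2, rng=random):
    """One interaction: a small push toward the neighbor's stated stance
    plus a larger random fluctuation, clipped to the opinion scale."""
    push = influence if neighbor_stance == "agree" else -influence
    new = opinion + push + rng.gauss(0.0, noise_sd)
    return max(-10.0, min(10.0, new))
```

With influence < noise_sd, single interactions look mostly random, but repeated exposure to one stance still drifts the population, consistent with the social influence effect observed.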
Procedia PDF Downloads 108
7060 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods on lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models remain tailored to specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which establish the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair.
We further compare these distributions with canonical distributions and address their impact on existing applications.
Keywords: Blaschke functions, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
7059 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications in the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed truly productive and efficient methods for investigating the cognitive and communicative phenomena of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA, the position of the scholar is crucial, since it reflects his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching particular types of language phenomena: cognitive methods are applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for investigating the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of the cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry.
The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it have demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; and the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 252
7058 Prediction of Malawi Rainfall from Global Sea Surface Temperature Using a Simple Multiple Regression Model
Authors: Chisomo Patrick Kumbuyo, Katsuyuki Shimizu, Hiroshi Yasuda, Yoshinobu Kitamura
Abstract:
This study deals with predicting Malawi rainfall from global sea surface temperature (SST) using a simple multiple regression model. Monthly rainfall data from nine stations in Malawi, grouped into two zones on the basis of inter-station rainfall correlations, were used in the study. Zone 1 consisted of the Karonga and Nkhatabay stations, located in northern Malawi; Zone 2 consisted of Bolero, located in northern Malawi; Kasungu, Dedza, and Salima, located in central Malawi; and the Mangochi, Makoka, and Ngabu stations, located in southern Malawi. Links between Malawi rainfall and SST based on statistical correlations were evaluated, and significant results were selected as predictors for the regression models. The predictors for the Zone 1 model were identified from the Atlantic, Indian, and Pacific oceans, while those for Zone 2 were identified from the Pacific Ocean. The correlations between the predicted and observed rainfall values of the models were satisfactory, with r = 0.81 and 0.54 for Zones 1 and 2, respectively (significant at less than 99.99%). The results of the models are in agreement with other findings suggesting that SST anomalies in the Atlantic, Indian, and Pacific oceans have an influence on the rainfall patterns of Southern Africa.
Keywords: Malawi rainfall, forecast model, predictors, SST
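A simple multiple regression model of the kind described, mapping SST predictors to rainfall, can be fitted by ordinary least squares. The sketch below uses synthetic data, since the station records and SST indices are not reproduced here:

```python
import numpy as np

def fit_ols(X, y):
    """Fit y = b0 + b1*x1 + ... + bk*xk by least squares.

    X: (n_samples, n_predictors) array of SST predictors,
    y: rainfall values; returns the coefficient vector [b0, b1, ..., bk].
    """
    A = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    """Predicted rainfall for new predictor values."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef
```

In the study's setting, the correlation r between predict(coef, X) and the observed rainfall is the skill measure reported (0.81 and 0.54 for the two zones).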
Procedia PDF Downloads 388
7057 Efficient Reconstruction of DNA Distance Matrices Using an Inverse Problem Approach
Authors: Boris Melnikov, Ye Zhang, Dmitrii Chaikovskii
Abstract:
We continue to consider one of the cybernetic methods in computational biology related to the study of DNA chains: namely, the problem of reconstructing a partially filled distance matrix of DNA chains. In practice, even with a modern computer of average capability, creating a small distance matrix for mitochondrial DNA sequences is quite time-consuming with standard algorithms. As the size of the matrix grows, the required computational effort increases significantly, potentially spanning weeks to months of non-stop processing. Hence, calculating the distance matrix on conventional computers is hardly feasible, and supercomputers are usually not available. We have therefore previously published our variants of algorithms for calculating the distance between two DNA chains, followed by algorithms for restoring partially filled matrices, i.e., the inverse problem of matrix processing. In this paper, we propose an algorithm for restoring the distance matrix for DNA chains, with the primary focus on enhancing the algorithms that shape the greedy function within the branch and bound method framework.
Keywords: DNA chains, distance matrix, optimization problem, restoring algorithm, greedy algorithm, heuristics
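The paper's reconstruction uses a branch and bound scheme with a greedy bounding function. As a much simpler illustration of the inverse problem, missing entries of a distance matrix can at least be bounded from above via the triangle inequality, Floyd-Warshall style (a stand-in for intuition, not the authors' algorithm):

```python
import numpy as np

def fill_upper_bounds(D):
    """Replace missing entries (np.nan) of a symmetric distance matrix
    with the tightest shortest-path upper bound implied by the
    triangle inequality d(i,j) <= d(i,k) + d(k,j)."""
    A = np.where(np.isnan(D), np.inf, D).astype(float)
    for k in range(len(A)):
        A = np.minimum(A, A[:, [k]] + A[[k], :])    # relax every pair through node k
    return A
```

Entries that remain infinite have no known path of measured distances connecting them; everything else receives a consistent upper bound that a finer reconstruction can then tighten.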
Procedia PDF Downloads 116
7056 A Study on Computational Fluid Dynamics (CFD)-Based Design Optimization Techniques Using Multi-Objective Evolutionary Algorithms (MOEA)
Authors: Ahmed E. Hodaib, Mohamed A. Hashem
Abstract:
In engineering applications, a design should be as close to optimal as possible under some defined conditions. The designer has to overcome many challenges in order to reach the optimal solution to a specific problem. This process is called optimization. Generally, there is a function called the "objective function" that is to be maximized or minimized by choosing input parameters, called "degrees of freedom", within an allowed domain called the "search space" and computing the values of the objective function for these inputs. The problem becomes more complex when there is more than one objective. An example of a multi-objective optimization problem (MOP) is a structural design that aims to minimize weight and maximize strength. In such a case, the Pareto Optimal Frontier (POF) is used: a curve plotting the two objective functions for the best cases, from which the designer must choose a point. Engineers use algorithms or iterative methods for optimization. In this paper, we discuss evolutionary algorithms (EA), which are widely used with multi-objective optimization problems due to their robustness, simplicity, and suitability for coupling and parallelization. Evolutionary algorithms are designed to converge to an optimal solution. An EA uses mechanisms inspired by Darwinian evolution principles. Technically, EAs belong to the family of trial-and-error problem solvers and can be considered global optimization methods with a stochastic character. The optimization is initialized by picking random solutions from the search space, and the solutions then progress towards the optimal point through operators such as selection, combination, cross-over, and/or mutation. These operators are applied to the old solutions, the "parents", so that new sets of design variables, the "children", appear. The process is repeated until the optimal solution to the problem is reached.
Reliable and robust computational fluid dynamics solvers are nowadays commonly utilized in the design and analysis of various engineering systems, such as aircraft, turbomachinery, and automotive vehicles. The coupling of computational fluid dynamics (CFD) and multi-objective evolutionary algorithms (MOEA) has become substantial in aerospace engineering applications, such as aerodynamic shape optimization and advanced turbomachinery design.
Keywords: mathematical optimization, multi-objective evolutionary algorithms (MOEA), computational fluid dynamics (CFD), aerodynamic shape optimization
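The Pareto Optimal Frontier discussed above is simply the set of non-dominated designs. A minimal dominance filter (taking all objectives as minimized, so "maximize strength" would be recast as minimizing a strength deficit) can be written as:

```python
def dominates(a, b):
    """True if design a dominates design b: no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated points, i.e., the Pareto Optimal Frontier."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a CFD-coupled MOEA, each point would be a vector of simulated objective values (e.g., drag and negative lift) for one candidate shape, and the filter keeps the designs offered to the decision maker.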
Procedia PDF Downloads 254
7055 Comparison of Different in vitro Models of the Blood-Brain Barrier for Study of Toxic Effects of Engineered Nanoparticles
Authors: Samir Dekali, David Crouzier
Abstract:
Due to their new physico-chemical properties, engineered nanoparticles (ENPs) are increasingly employed in numerous industrial sectors (such as electronics, textiles, aerospace, cosmetics, pharmaceuticals, and the food industry). These new physico-chemical properties can also represent a threat to human health. Consumers can notably be exposed involuntarily by different routes, such as inhalation, ingestion, or through the skin. Several studies have recently reported a possible biodistribution of these ENPs to the blood-brain barrier (BBB). Consequently, there is a great need for BBB in vitro models that are representative of the in vivo situation and capable of rapidly and accurately assessing ENP toxic effects and their potential translocation through this barrier. In this study, several in vitro models established with micro-endothelial brain cell lines of different origins (the bEnd.3 mouse cell line or a new human cell line), co-cultivated or not with astrocytic cells (the C6 rat or C8-B4 mouse cell lines) on Transwells®, were compared using different endpoints: trans-endothelial resistance, Lucifer yellow permeability, and junction protein labeling. The impact of NIST diesel exhaust particles on BBB cell viability is also discussed.
Keywords: nanoparticles, blood-brain barrier, diesel exhaust particles, toxicology
Procedia PDF Downloads 439
7054 Policy Compliance in Information Security
Authors: R. Manjula, Kaustav Bagchi, Sushant Ramesh, Anush Baskaran
Abstract:
In the past century, the emergence of information technology has had a significant positive impact on human life. While companies tend to be more involved in the completion of projects, the turn of the century has seen importance being given to investment in information security policies. These policies are essential to protect important data from adversaries, and thus following them has become one of the most important attributes of information security models. In this research, we have focused on the factors affecting information security policy compliance in two models: the theory of planned behaviour, and the integration of the social bond theory and the involvement theory into a single model. Finally, we propose contexts in which these theories are likely to be successful.
Keywords: information technology, information security, involvement theory, policies, social bond theory
Procedia PDF Downloads 368
7053 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are capped at 1. This problem can be resolved by a super-efficiency model; however, a super-efficiency model sometimes suffers from infeasibility. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under the consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, research activities
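The idea behind super-efficiency can be illustrated in the simplest possible setting. For a single-input, single-output CCR model, the output-oriented super-efficiency score has a closed form: the DMU under evaluation is removed from the reference set, and the required output expansion factor is the ratio of the best remaining productivity to its own. A minimal sketch with illustrative data (not from the paper; the general multi-input/output case requires solving a linear program instead):

```python
def output_oriented_super_efficiency(inputs, outputs, k):
    """Output-oriented super-efficiency for the 1-input / 1-output case.

    phi < 1 means DMU k remains efficient even when excluded from the
    reference set (super-efficient); phi > 1 is the output expansion
    needed to reach the frontier formed by the remaining DMUs.
    """
    own = outputs[k] / inputs[k]                       # productivity of DMU k
    peers = [outputs[j] / inputs[j]
             for j in range(len(inputs)) if j != k]    # k excluded
    return max(peers) / own                            # output expansion factor

# Three DMUs with one input and one output each (hypothetical data)
x = [1.0, 1.0, 1.0]
y = [4.0, 2.0, 1.0]
scores = [output_oriented_super_efficiency(x, y, k) for k in range(3)]
# DMU 0 is super-efficient: phi = 0.5, so it is discriminated from the
# merely efficient DMUs, which the basic DEA model (capped at 1) cannot do.
```

Note that in this toy case the score is never infeasible; the infeasibility issue mentioned in the abstract arises only in the general multi-dimensional LP formulation.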
Procedia PDF Downloads 655
7052 Numerical Aeroacoustics Investigation of Eroded and Coated Leading Edge of NACA 64-618 Airfoil
Authors: Zeinab Gharibi, B. Stoevesandt, J. Peinke
Abstract:
Long-term surface erosion of wind turbine blades, especially at the leading edge, impairs aerodynamic performance and therefore lowers the efficiency of the blades, mostly in the high-speed rotor tip regions. Blade protection provides significant improvements in annual energy production, reduces costly downtime, and protects the integrity of the blades. However, this protection still influences the aerodynamic behavior and the broadband noise caused by the interaction between the impinging turbulence and the blade's leading edge. This paper presents an extensive numerical aeroacoustics approach, investigating the sound power spectra of the eroded and coated NACA 64-618 wind turbine airfoil, and evaluates the aeroacoustic improvements after the protection procedure. Using computational fluid dynamics (CFD), different quasi-2D numerical grids were implemented, and special attention was paid to the refinement of the boundary layers. The noise sources were captured and decoupled from the acoustic propagation via the derived formulation of Curle's analogy implemented in OpenFOAM. The noise spectra were then compared for clean, coated and eroded profiles over a range of chord-based Reynolds numbers (1.6e6 ≤ Re ≤ 11.5e6). The angle of attack was zero in all cases. Verification was conducted for the clean profile using available experimental data. Sensitivity studies for the far field were done on different observer positions. Furthermore, beamforming studies were done by simulating an Archimedean spiral microphone array for far-field noise directivity patterns. Comparing the noise spectra of the coated and eroded geometries, the results show that coating clearly improves the aerodynamic and acoustic performance of the eroded airfoil.
Keywords: computational fluid dynamics, computational aeroacoustics, leading edge, OpenFOAM
Procedia PDF Downloads 222
7051 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI
Authors: Genady Grabarnik, Serge Yaskolko
Abstract:
Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and become a technology that promises to automate areas that were very difficult to automate before. The paper describes the introduction of generative AI into introductory computer and data science courses and analyses the effect of this introduction. Generative AI is incorporated into the educational process in two ways. For the instructors, we create prompt templates for the generation of tasks and for grading the students' work, including feedback on the submitted assignments. For the students, we introduce basic prompt engineering, which in turn is used for generating test cases based on problem descriptions, generating code snippets for single-block-complexity programming, and partitioning average-complexity programs into such blocks. The above-mentioned classes are run using Large Language Models; feedback from instructors and students is collected together with course outcomes, and the analysis shows a statistically significant positive effect and a preference from both stakeholders.
Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education
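One of the instructor-side uses described above, generating test cases from a problem description, amounts to filling a parameterized prompt template before sending it to an LLM. A minimal sketch (the template wording and field names are illustrative assumptions, not the authors' actual prompts):

```python
# Hypothetical prompt template; {course}, {problem} and {n} are placeholders.
TEST_CASE_PROMPT = (
    "You are a teaching assistant for an introductory {course} course.\n"
    "Problem description:\n{problem}\n\n"
    "Generate {n} test cases as (input, expected_output) pairs, covering\n"
    "typical values, boundary values, and one invalid input."
)

def build_test_case_prompt(course, problem, n=5):
    # Fill the template; the resulting string would be sent to an LLM API.
    return TEST_CASE_PROMPT.format(course=course, problem=problem, n=n)

prompt = build_test_case_prompt(
    "data science",
    "Write a function median(xs) returning the median of a list of numbers.",
    n=3,
)
```

Keeping the template separate from the fill-in step lets instructors version and review prompts like any other course material.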
Procedia PDF Downloads 57
7050 Characterization of 3D-MRP for Analyzing Brain Balancing Index (BBI) Pattern
Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan
Abstract:
This paper discusses power spectral density (PSD) characteristics extracted from three-dimensional (3D) electroencephalogram (EEG) models. The EEG signals were recorded from 150 healthy subjects. Development of the 3D EEG models involves pre-processing of the raw EEG signals and construction of spectrogram images, after which the maximum PSD values are extracted as features from the model. These features are analysed using the mean relative power (MRP) and different mean relative power (DMRP) techniques to observe the pattern among different brain balancing indexes. The results show that, by implementing these techniques, the pattern of brain balancing indexes can be clearly observed. Some patterns are indicated between index 1 and index 5 for the left frontal (LF) and right frontal (RF) regions.
Keywords: power spectral density, 3D EEG model, brain balancing, mean relative power, different mean relative power
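The feature-extraction pipeline described above, maximum PSD per channel followed by relative power across channels, can be sketched with NumPy. The synthetic signals and the left/right channel labels below are assumptions for illustration, not the study's EEG data:

```python
import numpy as np

def max_psd(signal, fs):
    """Maximum of a one-sided periodogram PSD estimate."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    return spec[1:].max()                      # skip the DC bin

def mean_relative_power(psd_values):
    """Relative power of each channel with respect to the total."""
    psd = np.asarray(psd_values, dtype=float)
    return psd / psd.sum()

fs = 128.0
t = np.arange(0, 2.0, 1.0 / fs)
lf = np.sin(2 * np.pi * 10 * t)                # 'left frontal' test signal
rf = 0.5 * np.sin(2 * np.pi * 10 * t)          # 'right frontal', weaker
mrp = mean_relative_power([max_psd(lf, fs), max_psd(rf, fs)])
# A DMRP-style comparison: difference of relative powers between channels
dmrp = mrp[0] - mrp[1]
```

A positive `dmrp` here simply reflects the stronger left-channel test signal; the paper's actual indexes are derived from spectrogram images of real recordings.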
Procedia PDF Downloads 473
7049 Numerical Investigation on the Interior Wind Noise of a Passenger Car
Authors: Liu Ying-jie, Lu Wen-bo, Peng Cheng-jian
Abstract:
With the development of automotive technology and the electric vehicle, wind noise has become the main source of interior noise. The main transfer paths through which the exterior excitation is transmitted are the greenhouse panels and side windows. Accurately simulating the wind noise transmitted into the vehicle in the early development stage can be very challenging. The basic methodology of this study was based on the Lighthill analogy: the exterior flow field around a passenger car was first computed using unsteady Computational Fluid Dynamics (CFD), and then a Finite Element Method (FEM) was used to compute the interior acoustic response. The major findings of this study include: 1) the Sound Pressure Level (SPL) response at the driver's ear locations is mainly induced by the turbulent pressure fluctuation; 2) peaks were found over the full frequency range. It is concluded that the methodology used in this study can predict the interior wind noise induced by the exterior aerodynamic excitation in industry.
Keywords: wind noise, computational fluid dynamics, finite element method, passenger car
Procedia PDF Downloads 170
7048 Experimental Validation of Computational Fluid Dynamics Used for Pharyngeal Flow Patterns during Obstructive Sleep Apnea
Authors: Pragathi Gurumurthy, Christina Hagen, Patricia Ulloa, Martin A. Koch, Thorsten M. Buzug
Abstract:
Obstructive sleep apnea (OSA) is a sleep disorder in which the patient suffers disturbed airflow during sleep due to partial or complete occlusion of the pharyngeal airway. Recently, numerical simulations have been used to better understand the mechanism of pharyngeal collapse. However, to gain confidence in the solutions so obtained, an experimental validation is required. Therefore, in this study an experimental validation of computational fluid dynamics (CFD) used for the study of human pharyngeal flow patterns during OSA is performed. A stationary incompressible Navier-Stokes equation, solved using the finite element method, was used to numerically study the flow patterns in a computed-tomography-based human pharynx model. The inlet flow rate was set to 250 ml/s such that a flat profile was maintained at the inlet, and the outlet pressure was set to 0 Pa. The experimental technique used for the validation of the CFD flow patterns is phase-contrast MRI (PC-MRI). Using the same computed tomography data of the human pharynx as in the simulations, a phantom for the experiment was 3D printed. Glycerol in water (55.27% by weight) was used as the test fluid at 25°C. Inflow conditions similar to the CFD study were reproduced using an MRI-compatible flow pump (CardioFlow-5000MR, Shelley Medical Imaging Technologies). The entire experiment was done on a 3 T MR system (Ingenia, Philips) with a 108-channel body coil using an RF-spoiled gradient echo sequence. A comparison of the axial velocity obtained in the pharynx from the numerical simulations and PC-MRI shows good agreement. The regions of jet impingement and recirculation also coincide, thereby validating the numerical simulations. Hence, the experimental validation demonstrates the reliability and correctness of the numerical simulations.
Keywords: computational fluid dynamics, experimental validation, phase contrast-MRI, obstructive sleep apnea
Procedia PDF Downloads 309
7047 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles
Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka
Abstract:
The nozzle is the element of the rocket engine in which the potential energy of the gases generated during combustion is converted into the kinetic energy of the gas stream. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine; designing the nozzle assembly is therefore one of the most critical stages in developing a rocket engine design. The paper presents the results of simulations of three types of rocket propulsion nozzles, which differ in shape: a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle. Calculations were made using CFD (Computational Fluid Dynamics) in ANSYS Fluent software. The results are presented in the form of pressure, velocity and turbulence kinetic energy distributions in the longitudinal section, and the courses of these values along the nozzles are also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of the bell nozzle, the contoured wall eliminates the flow disturbances in the critical section, which reduces the probability of waves forming before or after the trailing edge. The most sophisticated design is the bell-type nozzle, which maximizes performance without adding extra weight; owing to these advantages, it can be used as a starter and auxiliary engine nozzle. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow
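Whatever the wall shape, the supersonic part of each nozzle is constrained by the quasi-1D isentropic area-Mach relation, which fixes the area ratio needed to reach a given exit Mach number. A sketch (the specific-heat ratio value is an assumption for illustration, not a property of the engines simulated in the paper):

```python
import math

def area_ratio(mach, gamma=1.4):
    """Isentropic A/A* for quasi-1D nozzle flow at a given Mach number."""
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / mach

# The throat (critical section) is sonic, so A/A* = 1 at M = 1;
# reaching M = 2 at the exit requires roughly 1.69x the throat area.
ratios = {m: area_ratio(m) for m in (1.0, 2.0, 3.0)}
```

The relation explains why the supersonic contour (conical vs. bell) is a shaping choice for the same overall expansion: the exit-to-throat area ratio is set by the target exit Mach number, not by the wall profile.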
Procedia PDF Downloads 157
7046 Service Life Modelling of Concrete Deterioration Due to Biogenic Sulphuric Acid (BSA) Attack: A State-of-the-Art Review
Authors: Ankur Bansal, Shashank Bishnoi
Abstract:
Degradation of sewage pipes, sewage pumping stations and sewage treatment plants (STPs) is of major concern due to the difficulty of their maintenance and the high cost of replacement. Most of these systems undergo degradation due to biogenic sulphuric acid (BSA) attack, and since most wastewater treatment systems are underground, this deterioration often remains hidden. This paper presents a literature review outlining the mechanism of this attack, focusing on the critical parameters of BSA attack, along with the available models and software to predict the resulting deterioration. The paper critically examines the steps and equations in various models of BSA degradation; details of the assumptions and workings of different software packages are also highlighted. The paper also covers the service life design techniques available through various codes, and methods to integrate service life design with BSA degradation of concrete. Finally, various methods of enhancing the resistance of concrete against biogenic sulphuric acid attack are highlighted. It may be concluded that effective modelling of the degradation phenomena can bring positive economic and environmental impacts, and with current computing capabilities, integrated degradation models combining the various durability aspects can bring positive change for a sustainable society.
Keywords: concrete degradation, modelling, service life, sulphuric acid attack
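At their simplest, many service-life models of the kind reviewed reduce to a deterioration-depth rate, with failure declared when the degraded depth consumes the concrete cover. A deliberately simplified linear-rate sketch (the rate constant and numbers are illustrative assumptions, not taken from any cited model):

```python
def corrosion_depth(rate_mm_per_year, years):
    """Degraded concrete depth (mm) after t years, assuming a constant
    BSA deterioration rate (a strong simplification of real models)."""
    return rate_mm_per_year * years

def service_life(rate_mm_per_year, cover_mm):
    """Years until the degraded depth consumes the concrete cover."""
    return cover_mm / rate_mm_per_year

# A hypothetical sewer pipe losing 2 mm/year with 40 mm cover lasts ~20 years
life = service_life(2.0, 40.0)
```

Real models replace the constant rate with functions of H2S concentration, relative humidity, biofilm activity and cement chemistry, which is precisely what the reviewed software packages differ on.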
Procedia PDF Downloads 312
7045 A Study of Classification Models to Predict Drill-Bit Breakage Using Degradation Signals
Authors: Bharatendra Rai
Abstract:
Cutting tools are widely used in manufacturing processes, and drilling is the most commonly used machining process. Although the drill-bits used in drilling may not be expensive, their breakage can cause damage to the expensive workpiece being drilled and at the same time has a major impact on productivity. Predicting drill-bit breakage is therefore important for reducing cost and improving productivity. This study uses twenty features extracted from two degradation signals, viz., thrust force and torque. The methodology involves developing and comparing decision tree, random forest, and multinomial logistic regression models for classifying and predicting drill-bit breakage using the degradation signals.
Keywords: degradation signal, drill-bit breakage, random forest, multinomial logistic regression
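As a sketch of the classification step, a minimal binary logistic-regression breakage classifier on two illustrative features (say, mean thrust force and mean torque). The data below are synthetic and linearly separable, not the study's degradation signals, and the study's multinomial model generalizes this binary case:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Binary logistic regression fitted by gradient descent (bias included)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted P(breakage)
        w -= lr * Xb.T @ (p - y) / len(y)          # average-gradient step
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (1.0 / (1.0 + np.exp(-Xb @ w)) >= 0.5).astype(int)

# Synthetic (thrust, torque) features: low values -> healthy, high -> breakage
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [3.0, 3.2], [3.1, 2.9], [2.8, 3.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = train_logistic(X, y)
acc = (predict(w, X) == y).mean()
```

Tree-based models such as the study's decision tree and random forest would be trained on the same feature matrix; the comparison in the paper is over exactly this kind of feature-to-label mapping.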
Procedia PDF Downloads 350
7044 A Hybrid LES-RANS Approach to Analyse Coupled Heat Transfer and Vortex Structures in Separated and Reattached Turbulent Flows
Authors: C. D. Ellis, H. Xia, X. Chen
Abstract:
Experimental and computational studies investigating heat transfer in separated flows have been of increasing importance over the last 60 years, as efforts are made to understand and improve the efficiency of components such as combustors, turbines, heat exchangers, nuclear reactors and cooling channels. Understanding not only the time-mean heat transfer properties but also the unsteady properties is vital for the design of these components. As computational power increases, more sophisticated methods of modelling these flows become available. A hybrid LES-RANS approach has been applied to a blunt-leading-edge flat plate, utilising a structured grid at a moderate Reynolds number of 20300 based on the plate thickness. In the region close to the wall, the RANS method is implemented for two turbulence models: the one-equation Spalart-Allmaras model and Menter's two-equation SST k-ω model. The LES region occupies the flow away from the wall and is formulated without any explicit subgrid-scale LES modelling. Hybridisation between the two methods is achieved by blending based on the nearest wall distance. The flow was validated by assessing the mean velocity profiles in comparison to similar studies. The vortex structures of the flow were identified by utilising the λ2 criterion to locate vortex cores, and the qualitative structure of the flow compared well with experiments at similar Reynolds numbers. This identified the 2D roll-up of the shear layer, breaking down via the Kelvin-Helmholtz instability; through this instability the flow progresses into hairpin-like structures, elongating as they advance downstream. Proper Orthogonal Decomposition (POD) analysis has been performed on the full flow field and on the surface temperature of the plate. As expected, the breakdown of POD modes for the full field revealed a relatively slow decay compared to the surface temperature field. Both POD fields identified that the most energetic fluctuations occur in the separated and recirculation region of the flow. Later modes of the surface temperature field identified these fluctuations as dominating the time-mean region of maximum heat transfer and flow reattachment. In addition to the current research, work will be conducted in tracking the movement of the vortex cores and the location and magnitude of temperature hot spots upon the plate. This information will support the POD and statistical analysis performed, to further identify qualitative relationships between the vortex dynamics and the response of the surface heat transfer.
Keywords: heat transfer, hybrid LES-RANS, separated and reattached flow, vortex dynamics
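Computationally, the POD step described above is a singular value decomposition of the mean-subtracted snapshot matrix, with mode energies given by the squared singular values. A minimal sketch on a fabricated snapshot field (not the study's LES-RANS data):

```python
import numpy as np

def pod(snapshots):
    """POD of a (n_points, n_snapshots) matrix via SVD.

    Returns the spatial modes and the fraction of fluctuation energy
    captured by each mode (squared singular values, normalized).
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    modes, sigma, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sigma ** 2 / (sigma ** 2).sum()
    return modes, energy

# Synthetic field: one dominant coherent structure plus a weaker one
x = np.linspace(0, 1, 50)
t = np.linspace(0, 1, 20)
field = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
         + 0.1 * np.outer(np.sin(3 * np.pi * x), np.sin(4 * np.pi * t)))
modes, energy = pod(field)
```

The "slow decay" of full-field modes versus surface-temperature modes reported in the abstract corresponds to how quickly this `energy` vector drops off for each dataset.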
Procedia PDF Downloads 230
7043 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood. Odor testing has long seemed mysterious, and odor data have remained obscure to most practitioners. The problem of odor recognition and classification is therefore important to address: the ability to smell a product and predict whether it is still of use or has become unfit for consumption is worth capturing in a model. The general industrial standard for this classification is color-based; however, odor can be a better classifier than color, and incorporating it into a machine would be highly beneficial. For the cataloguing of the odors of peas, trees and cashews, various discriminative approaches have been used. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they are able to handle problems such as settings where the variability in the range of possible input vectors is enormous. Generative models are integrated into machine learning either to model data directly or as an intermediate step in forming a probability density function. The models Linear Discriminant Analysis and the naive Bayes classifier have been used for classification of the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition and machine learning to discover a linear combination of features that characterizes or separates two or more classes of objects or events. The naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions; naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
A stated advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results have been evaluated in terms of the performance measures accuracy, precision and recall, and they show that the overall performance of Linear Discriminant Analysis was better than that of the naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive Bayes, linear discriminant analysis
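A Gaussian naive Bayes classifier of the kind compared above fits per-class feature means and variances and scores classes by Bayes' rule under the feature-independence assumption. A minimal sketch on fabricated sensor readings (not the cashew dataset):

```python
import numpy as np

def fit_gnb(X, y):
    """Per-class mean, variance and prior for Gaussian naive Bayes."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # Small variance floor avoids division by zero for constant features
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict_gnb(params, x):
    """Class with the highest log-posterior under feature independence."""
    def log_post(c):
        mu, var, prior = params[c]
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        return ll + np.log(prior)
    return max(params, key=log_post)

# Fabricated e-nose sensor readings: class 0 = fresh, class 1 = spoiled
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.1, 0.3],
              [0.9, 1.0], [1.1, 0.8], [1.0, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
params = fit_gnb(X, y)
label = predict_gnb(params, np.array([1.0, 0.9]))
```

LDA differs in that it assumes a shared covariance across classes and does not require feature independence, which is one reason the two models can rank differently on the same dataset.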
Procedia PDF Downloads 387
7042 Three-Dimensional Transient Simulation of Different Gas Flow Velocities and Flow Distribution in a Catalytic Converter with Porous Media
Authors: Amir Reza Radmanesh, Sina Farajzadeh Khosroshahi, Hani Sadr
Abstract:
Transient catalytic converter performance is governed by complex interactions between the exhaust gas flow and the monolithic structure of the catalytic converter. Stringent emission regulations around the world necessitate the use of highly efficient catalytic converters in vehicle exhaust systems. Computational fluid dynamics (CFD) is a powerful tool for calculating the flow field inside the catalytic converter. Radial velocity profiles obtained by a commercial CFD code show very good agreement with the respective experimental results published in the literature; however, the applicability of CFD to transient simulations is limited by the high CPU demands. In the present work, the ceramic monolith substrate is modelled geometrically as a square-channel catalytic converter coated with platinum and palladium. This example illustrates the effect of the flow distribution and of different gas flow velocities on the thermal response of a catalytic converter during the critical phase of catalytic converter warm-up.
Keywords: catalytic converter, computational fluid dynamics, porous media, velocity distribution
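In CFD codes the monolith is commonly treated as a porous zone whose pressure loss follows a Darcy-Forchheimer law, a viscous term plus an inertial term. A sketch with assumed coefficient values (illustrative only, not taken from the study):

```python
def darcy_forchheimer_dp(v, length, mu, rho, alpha, c2):
    """Pressure drop (Pa) across a porous zone of given length.

    Viscous (Darcy) term:      (mu / alpha) * v * L
    Inertial (Forchheimer) term: c2 * 0.5 * rho * v**2 * L
    """
    return (mu / alpha) * v * length + c2 * 0.5 * rho * v ** 2 * length

# Assumed exhaust-gas and monolith properties (hypothetical values)
mu, rho = 3.0e-5, 0.6          # Pa.s, kg/m^3 (hot exhaust gas)
alpha, c2 = 1.0e-7, 20.0       # permeability m^2, inertial coefficient 1/m
L = 0.15                       # monolith length, m
dp_low = darcy_forchheimer_dp(2.0, L, mu, rho, alpha, c2)
dp_high = darcy_forchheimer_dp(10.0, L, mu, rho, alpha, c2)
```

The quadratic inertial term is what makes the pressure drop, and hence the flow redistribution across channels, sensitive to the inlet gas velocity studied in the paper.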
Procedia PDF Downloads 856
7041 Machine Learning Algorithms for Rocket Propulsion
Authors: Rômulo Eustáquio Martins de Souza, Paulo Alexandre Rodrigues de Vasconcelos Figueiredo
Abstract:
In recent years, there has been a surge of interest in applying artificial intelligence techniques, particularly machine learning algorithms. Machine learning is a data-analysis technique that automates the creation of analytical models, making it especially useful for modelling complex situations; as a result, this technology helps reduce human intervention while producing accurate results. The methodology is also extensively used in aerospace engineering, a field that encompasses several high-complexity operations, such as rocket propulsion. Rocket propulsion is a high-risk operation in which engine failure could result in the loss of life; it is therefore critical to use computational methods capable of precisely representing the spacecraft's analytical model to guarantee its safety and operation. This paper describes the use of machine learning algorithms for rocket propulsion, to show that this technique is an efficient way to deal with challenging and restrictive aerospace engineering activities. The paper focuses on three machine-learning-aided rocket propulsion applications: set-point control of an expander-bleed rocket engine, supersonic retro-propulsion of a small-scale rocket, and leak detection and isolation on rocket engine data. The paper describes the data-driven methods used for each application in depth and presents the obtained results.
Keywords: data analysis, modeling, machine learning, aerospace, rocket propulsion
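One of the three applications above, leak detection on engine sensor data, can be sketched in its simplest data-driven form: monitor the residual between a measured signal and a nominal model, and flag samples whose deviation exceeds a threshold. The trace, nominal model and threshold below are fabricated for illustration, not the paper's method or data:

```python
import numpy as np

def detect_anomalies(signal, nominal, n_sigma=3.0):
    """Flag samples whose residual against a nominal model exceeds n_sigma."""
    residual = signal - nominal
    sigma = residual.std() + 1e-12          # guard against a zero spread
    return np.abs(residual - residual.mean()) > n_sigma * sigma

# Fabricated chamber-pressure trace: flat nominal, one leak-like drop
nominal = np.full(100, 5.0)                 # MPa, nominal model prediction
signal = nominal + 0.01 * np.sin(np.linspace(0, 10, 100))
signal[60] -= 0.5                           # injected leak signature
flags = detect_anomalies(signal, nominal)
```

Real leak detection and isolation pipelines replace the constant nominal with a learned model of healthy engine behaviour, which is where the machine learning enters.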
Procedia PDF Downloads 113