Search results for: computational domain
3204 A Comparative Study on Multimodal Metaphors in Public Service Advertising of China and Germany
Authors: Xing Lyu
Abstract:
Multimodal metaphor promotes the further development and refinement of multimodal discourse study. Cultural aspects matter a great deal not only in creating but also in comprehending multimodal metaphor. By analyzing the target domain and the source domain in 10 public service advertisements from China and Germany about environmental protection, this paper compares the sources used when the target is alike in each multimodal metaphor in order to identify similarities and differences across cultures. The findings are as follows: first, the multimodal metaphors center around three major topics: the earth crisis, the consequences of environmental damage, and the appeal for environmental protection; second, the multimodal metaphors are mainly grounded in three universal conceptual metaphors: high level is up, earth is mother, and all lives are precious. However, there are five Chinese culture-specific multimodal metaphors which are not found in the German ads: east is high level; a purposeful life is a journey; a nation is a person; good is clean; and water is mother. Since metaphors are excellent instruments for studying ideology, this study can be helpful in intercultural/cross-cultural communication.
Keywords: multimodal metaphor, cultural aspects, public service advertising, cross-cultural communication
Procedia PDF Downloads 173
3203 The Importance of Functioning and Disability Status Follow-Up in People with Multiple Sclerosis
Authors: Sanela Slavkovic, Congor Nad, Spela Golubovic
Abstract:
Background: The diagnosis of multiple sclerosis (MS) is a major life challenge and has repercussions on all aspects of the daily functioning of those affected by it – personal activities, social participation, and quality of life. Regular follow-up of only the neurological status is not informative enough to provide data on the sort of support and rehabilitation that is required. Objective: The aim of this study was to establish the current level of functioning of persons affected by MS and the factors that influence it. Methods: The study was conducted in Serbia, on a sample of 108 persons with the relapsing-remitting form of MS, aged 20 to 53 (mean 39.86 years; SD 8.20 years). All participants were fully ambulatory. Methods applied in the study include the Expanded Disability Status Scale (EDSS) and the World Health Organization Disability Assessment Schedule, WHODAS 2.0 (36-item version, self-administered). Results: Participants were found to experience the most problems in the domains of Participation, Mobility, Life activities, and Cognition. The least difficulties were found in the domain of Self-care. Symptom duration was the only control variable with a significant partial contribution to the prediction of the WHODAS scale score (β=0.30, p < 0.05). The total EDSS score correlated with the total WHODAS 2.0 score (r=0.34, p=0.00). Statistically significant differences in the domain of EDSS 0-5.5 were found within categories (0-1.5; 2-3.5; 4-5.5). The more pronounced a participant’s EDSS score was, even when it was not indicative of large changes in the neurological status, the more apparent were the changes in the functional domain, i.e., in all areas covered by WHODAS 2.0. The Pyramidal (β=0.34, p < 0.05) and Bowel and bladder (β=0.24, p < 0.05) functional systems were found to have a significant partial contribution to the prediction of the WHODAS score. Conclusion: Measuring functioning and disability is important in the follow-up of persons with MS in order to plan rehabilitation and define areas in which additional support is needed.
Keywords: disability, functionality, multiple sclerosis, rehabilitation
Procedia PDF Downloads 119
3202 Computational Analysis of the Scaling Effects on the Performance of an Axial Compressor
Authors: Junting Xiang, Jörg Uwe Schlüter, Fei Duan
Abstract:
The miniaturization of gas turbines promises many advantages. Miniature gas turbines can be used for local power generation or the propulsion of small aircraft, such as UAVs and MAVs. However, experience shows that the miniaturization of conventional gas turbines, which are optimized at their current large size, leads to a substantial loss of efficiency and performance at smaller scales. This may be due to a number of factors, such as the Reynolds-number effect, the increased heat transfer, and manufacturing tolerances. In the present work, we focus on computational investigations of the Reynolds number effect and the wall heat transfer on the performance of an axial compressor during its size change. The NASA Stage 35 compressor is selected as the configuration in this study, and Computational Fluid Dynamics (CFD) is used to carry out the miniaturization process and simulations. We perform parameter studies on the effect of Reynolds number and wall thermal conditions. Our results indicate a decrease in efficiency if the compressor is miniaturized based on its original geometry, due to the increase of viscous effects. The increased heat transfer through the wall has only a small effect and will actually benefit compressor performance based on our study.
Keywords: axial compressor, CFD, heat transfer, miniature gas turbines, Reynolds number
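A minimal illustration of the Reynolds-number effect discussed above: with the inlet velocity and gas properties held fixed, the blade-chord Reynolds number Re = ρVL/μ falls in direct proportion to the geometric scale, which is why viscous losses grow as the compressor is shrunk. The chord length and velocity below are assumed, illustrative values, not the Stage 35 data.

```python
# Illustrative only: Reynolds-number scaling under geometric miniaturization,
# assuming the inlet velocity and air properties are held constant.
def reynolds_number(rho, velocity, chord, mu):
    """Re = rho * V * L / mu, based on the blade chord length L."""
    return rho * velocity * chord / mu

rho, mu = 1.225, 1.81e-5      # air near sea level [kg/m^3], [Pa.s]
velocity = 200.0              # assumed relative inlet velocity [m/s]
chord_full = 0.05             # assumed full-scale blade chord [m]

for scale in (1.0, 0.5, 0.1, 0.01):
    re = reynolds_number(rho, velocity, chord_full * scale, mu)
    print(f"scale {scale:5.2f}: Re = {re:12.0f}")
```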
Procedia PDF Downloads 416
3201 Introduction to Various Innovative Techniques Suggested for Seismic Hazard Assessment
Authors: Deepshikha Shukla, C. H. Solanki, Mayank K. Desai
Abstract:
Amongst all natural hazards, earthquakes have the potential for causing the greatest damage. Since earthquake forces are random in nature and unpredictable, their quantification becomes important in order to assess the hazard. The time and place of a future earthquake are both uncertain. Since earthquakes can neither be prevented nor predicted, engineers have to design and construct in such a way that the damage to life and property is minimized. Seismic hazard analysis plays an important role in the design of earthquake-resistant structures by providing rational values of the input parameters. In this paper, both mathematical and computational methods adopted by researchers globally in the past five years will be discussed. Mathematical approaches involving the concepts of Poisson’s ratio, Convex Set Theory, the Empirical Green’s Function, Bayesian probability estimation applied to seismic hazard, and FOSM (first-order second-moment) algorithm methods will be discussed. Computational approaches and the numerical model SSIFiBo, developed in MATLAB to study the dynamic soil-structure interaction problem, are also discussed. GIS-based tools, which are predominantly used in the assessment of seismic hazards, will also be discussed.
Keywords: computational methods, MATLAB, seismic hazard, seismic measurements
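As a hedged illustration of the probabilistic reasoning behind several of the methods listed above, the sketch below evaluates the Poisson occurrence model commonly used in probabilistic seismic hazard analysis; the annual exceedance rate is an illustrative value, not a result from the cited studies.

```python
import math

def poisson_exceedance_probability(annual_rate, exposure_years):
    """P(at least one exceedance in t years) = 1 - exp(-lambda * t),
    assuming earthquake occurrences follow a homogeneous Poisson process."""
    return 1.0 - math.exp(-annual_rate * exposure_years)

# Illustrative: a ground-motion level exceeded on average once every 475 years
annual_rate = 1.0 / 475.0
print(f"{poisson_exceedance_probability(annual_rate, 50):.3f}")  # ~0.10, i.e. 10% in 50 years
```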
Procedia PDF Downloads 340
3200 An Automatic Model Transformation Methodology Based on Semantic and Syntactic Comparisons and the Granularity Issue Involved
Authors: Tiexin Wang, Sebastien Truptil, Frederick Benaben
Abstract:
Model transformation, as a pivotal aspect of model-driven engineering, attracts more and more attention from both researchers and practitioners. Many domains (enterprise engineering, software engineering, knowledge engineering, etc.) use model transformation principles and practices to address their domain-specific problems; furthermore, model transformation can also be used to bridge the gap between different domains by sharing and exchanging knowledge. Since model transformation has been widely used, a new requirement has emerged: to define the transformation process effectively and efficiently and to reduce the manual effort involved. This paper presents an automatic model transformation methodology based on semantic and syntactic comparisons and focuses particularly on the granularity issue that exists in the transformation process. Compared to traditional model transformation methodologies, this methodology serves a general, cross-domain purpose. Semantic and syntactic checking measures are combined into a refined transformation process, which solves the granularity issue. Moreover, the semantic and syntactic comparisons are supported by a software tool; much of the manual effort is replaced in this way.
Keywords: automatic model transformation, granularity issue, model-driven engineering, semantic and syntactic comparisons
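As a rough sketch of what a syntactic comparison between model elements can look like, the snippet below scores pairs of element names with a normalized edit-based similarity; the element names are hypothetical, and a full semantic check would additionally consult a thesaurus or domain ontology, which is not shown here.

```python
from difflib import SequenceMatcher

def syntactic_similarity(name_a: str, name_b: str) -> float:
    """Normalized string similarity (0..1) between two model element names,
    ignoring case and common separators."""
    clean = lambda s: s.replace("_", "").replace("-", "").lower()
    return SequenceMatcher(None, clean(name_a), clean(name_b)).ratio()

# Hypothetical element names from a source and a target metamodel
pairs = [("CustomerOrder", "customer_order"), ("DeliveryDate", "ShipmentDate")]
for a, b in pairs:
    print(f"{a} <-> {b}: syntactic score = {syntactic_similarity(a, b):.2f}")
```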
Procedia PDF Downloads 394
3199 Architecture - Performance Relationship in GPU Computing - Composite Process Flow Modeling and Simulations
Authors: Ram Mohan, Richard Haney, Ajit Kelkar
Abstract:
Current developments in computing have shown the advantage of using one or more Graphics Processing Units (GPUs) to boost the performance of many computationally intensive applications, but there are still limits to these GPU-enhanced systems. The major factors that contribute to the limitations of GPU(s) for High Performance Computing (HPC) can be categorized as hardware- and software-oriented in nature. Understanding how these factors affect performance is essential to developing efficient and robust application codes that employ one or more GPU devices as powerful co-processors for HPC computational modeling. This research and technical presentation focuses on the analysis and understanding of the intrinsic interrelationship of both hardware and software categories on computational performance for single and multiple GPU-enhanced systems, using a computationally intensive application that is representative of a large portion of the challenges confronting modern HPC. The representative application uses unstructured finite element computations for transient composite resin infusion process flow modeling as the computational core, the characteristics and results of which reflect many other HPC applications via the sparse matrix system used for the solution of linear systems of equations. This work describes these various software and hardware factors and how they interact to affect the performance of computationally intensive applications, enabling more efficient development and porting of High Performance Computing applications, including current, legacy, and future large-scale computational modeling applications in various engineering and scientific disciplines.
Keywords: graphical processing unit, software development and engineering, performance analysis, system architecture and software performance
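Since the abstract identifies the sparse linear system solve as the computational core, the following stand-alone sketch sets up and solves a simple sparse symmetric system with a Krylov method; it is a generic CPU-side illustration (a 1-D Laplacian), not the paper's finite-element code or its GPU port.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Illustrative sparse system standing in for a finite-element core:
# a 1-D Laplacian solved with the conjugate gradient method.
n = 10_000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)
print("converged" if info == 0 else f"cg returned {info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```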
Procedia PDF Downloads 362
3198 Time Domain Dielectric Relaxation Microwave Spectroscopy
Authors: A. C. Kumbharkhane
Abstract:
Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a term used to describe a technique for observing the time-dependent response of a sample after the application of a time-dependent electromagnetic field. TDRMS probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; therefore, the TDRMS technique covers an extensive range of dynamical processes. The corresponding frequencies range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established the TDR technique in the laboratory, which provides information on the dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying the dielectric relaxation behaviour of liquid systems to understand the role of the hydrogen bond. The intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied, and the theoretical model used to explain the experimental results will be discussed.
Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time
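For orientation, the sketch below evaluates the single-relaxation-time Debye model, which is the simplest description of the kind of complex permittivity spectrum that TDR measurements yield; the parameters are rough, water-like room-temperature values used purely for illustration, not results from the setup described above.

```python
import numpy as np

def debye_permittivity(freq_hz, eps_static, eps_inf, tau_s):
    """Complex permittivity eps*(w) = eps_inf + (eps_s - eps_inf) / (1 + i*w*tau)."""
    omega = 2.0 * np.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / (1.0 + 1j * omega * tau_s)

# Roughly water-like illustrative parameters at room temperature
freqs = np.logspace(7, 10.5, 8)        # 10 MHz up to ~30 GHz, the range quoted above
eps = debye_permittivity(freqs, eps_static=78.4, eps_inf=5.2, tau_s=8.3e-12)
for f, e in zip(freqs, eps):
    print(f"{f:11.3e} Hz   eps' = {e.real:6.2f}   eps'' = {-e.imag:6.2f}")
```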
Procedia PDF Downloads 336
3197 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd ZaizuIlyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using a GMM, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using an overlapping sub-block window that is mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by discarding high-frequency coefficients, applying a rectangular mask on the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled using a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested on the FERET data sets and is able to achieve a 92% recognition rate.
Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
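A compact sketch of the described pipeline (overlapping blocks, 2-D DFT, rectangular low-frequency mask, GMM over the retained coefficients) is given below; the block size, overlap, mask size, and number of mixture components are assumptions, and a random array stands in for a normalized FERET face image.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def block_spectrum_features(image, block=16, step=8, keep=4):
    """Slide an overlapping block window over the image, take the 2-D DFT of each
    block, and keep only a keep x keep rectangle of low-frequency magnitudes."""
    feats = []
    h, w = image.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            spec = np.fft.fft2(image[y:y + block, x:x + block])
            feats.append(np.abs(spec[:keep, :keep]).ravel())   # rectangular low-pass mask
    return np.array(feats)

rng = np.random.default_rng(0)
face = rng.random((64, 64))            # stand-in for a normalized face image
features = block_spectrum_features(face)

gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(features)
print("average per-block log-likelihood under the fitted GMM:", gmm.score(features))
```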
Procedia PDF Downloads 667
3196 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (ΔG) for Gene Silencing
Authors: Reena Murali, David Peter S.
Abstract:
The small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression by translational inhibition. Recent studies show that up-regulation of mRNA can cause serious diseases like cancer. So designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed earlier. In this work, we analyze the existing well-scoring second-generation siRNA prediction tools and try to optimize the efficiency of siRNA prediction by designing a computational model using an Artificial Neural Network and the whole stacking energy (ΔG), which may help in gene silencing and drug design in cancer therapy. Our model is trained and tested against a large data set of siRNA sequences. Validation of our results is done by finding the correlation coefficient of predicted versus experimentally observed inhibition efficacy of siRNA. We achieved a correlation coefficient of 0.727 in our previous computational model, and we could improve the correlation coefficient up to 0.753 when the threshold of the whole stacking energy is greater than or equal to -32.5 kcal/mol.
Keywords: artificial neural network, double stranded RNA, RNA interference, short interfering RNA
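The sketch below mirrors the kind of model described: a small feed-forward network mapping sequence-derived features plus a whole-stacking-energy column to inhibition efficacy, evaluated by the Pearson correlation between predicted and observed values. The features and targets are synthetic placeholders, not the study's siRNA dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Synthetic stand-ins: sequence-derived features plus a whole stacking energy
# column (kcal/mol), and a measured inhibition efficacy as the target.
X = rng.random((500, 10))
delta_g = -45.0 + 20.0 * rng.random((500, 1))
X = np.hstack([X, delta_g])
y = 0.6 * X[:, 0] - 0.02 * delta_g[:, 0] + 0.1 * rng.standard_normal(500)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])

r, _ = pearsonr(y[400:], model.predict(X[400:]))
print(f"Pearson correlation (predicted vs. observed efficacy): {r:.3f}")
```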
Procedia PDF Downloads 526
3195 Bilateral Telecontrol of AutoMerlin Mobile Robot Using Time Domain Passivity Control
Authors: Aamir Shahzad, Hubert Roth
Abstract:
This paper presents the bilateral telecontrol of the AutoMerlin mobile robot in the presence of communication delay. Passivity observers have been designed to monitor the net energy at both ports of a two-port network; if either or both ports become active, making the net energy negative, then the passivity controllers dissipate the proper amount of energy to make the overall system passive in the presence of time delay. The environment force is modeled and sent back to the human operator so that s/he can feel it and has additional information about the environment in the vicinity of the mobile robot. Experimental results have been presented to show the performance and stability of the bilateral controller. The results show that whenever the passivity observers detect active behavior, the passivity controllers come into action to neutralize it and make the overall system passive.
Keywords: bilateral control, human operator, haptic device, communication network, time domain passivity control, passivity observer, passivity controller, time delay, mobile robot, environment force
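A minimal sketch of the time-domain passivity idea described above: the observer accumulates the port energy E[k] = Σ f·v·Δt, and a simple variable damper injects just enough dissipation to keep that energy non-negative. The force/velocity trace is synthetic, and the sketch covers a single port only, not the full two-port teleoperation network.

```python
import numpy as np

def passivity_observer(force, velocity, dt):
    """Accumulated energy at a network port: E[k] = sum_{j<=k} f[j]*v[j]*dt.
    A negative value indicates the port has generated energy (active behavior)."""
    return np.cumsum(force * velocity) * dt

def passivity_controller(force, velocity, dt):
    """Add a variable damping force that dissipates just enough energy to keep
    the observed port energy non-negative (a simple series-type PC sketch)."""
    energy = 0.0
    f_out = np.array(force, dtype=float, copy=True)
    for k, (f, v) in enumerate(zip(force, velocity)):
        energy += f * v * dt
        if energy < 0.0 and abs(v) > 1e-9:
            alpha = -energy / (dt * v * v)        # damping gain restoring passivity
            f_out[k] = f + alpha * v
            energy = 0.0
    return f_out

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
velocity = np.sin(2 * np.pi * t)
force = -0.5 * velocity + 0.2 * np.sin(4 * np.pi * t)   # a partly active synthetic port

print("min energy without PC:", passivity_observer(force, velocity, dt).min())
f_pc = passivity_controller(force, velocity, dt)
print("min energy with PC   :", passivity_observer(f_pc, velocity, dt).min())
```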
Procedia PDF Downloads 392
3194 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we coupled the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern, and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
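For readers unfamiliar with the computational pattern being generated, the sketch below shows a plain single-node Jacobi sweep of the classic 5-point stencil in NumPy; it illustrates the kind of kernel such a DSL targets, not Dido's generated ORWL-based distributed code.

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of the 5-point averaging stencil over the interior points."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# Illustrative: fixed hot boundary on one edge, iterate until the update stalls
grid = np.zeros((128, 128))
grid[0, :] = 100.0
for _ in range(2000):
    updated = jacobi_step(grid)
    if np.max(np.abs(updated - grid)) < 1e-4:
        break
    grid = updated
print("interior mean after the convergence check:", grid[1:-1, 1:-1].mean())
```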
Procedia PDF Downloads 127
3193 Flow Control Optimisation Using Vortex Generators in Turbine Blade
Authors: J. Karthik, G. Vinayagamurthy
Abstract:
Aerodynamic flow control is achieved by the interaction of the flowing medium with the corresponding structure so that its natural flow state is disturbed in order to delay the transition point. This paper explains the aerodynamic effect and the optimized design of vortex generators on a turbine blade to achieve maximum flow control. The airfoil is chosen from the NREL (National Renewable Energy Laboratory) S-series, as these airfoils are characterized by good lift characteristics and lower noise. The vortex generators typically chosen are Ogival, Rectangular, Triangular, and Tapered Fin shapes attached near the leading edge. Vortex generators are typically distributed from the primary section to the tip of the blade. The design wind speed is taken as 6 m/s, and the computational analysis is executed. The blade surface is simulated using the k-ε SST model, and the results are compared with X-FOIL results. The computational results are validated using wind tunnel testing of the blade at the design speed. The effect of the vortex generators on the flow characteristics is studied from the results of the analysis. By comparing the computational and test results for all shapes of vortex generators, the optimized design is achieved for effective flow control for the blade.
Keywords: flow control, vortex generators, design optimisation, CFD
Procedia PDF Downloads 408
3192 Geographic Information System for District Level Energy Performance Simulations
Authors: Avichal Malhotra, Jerome Frisch, Christoph van Treeck
Abstract:
The utilization of semantic, cadastral, and topological data from geographic information systems (GIS) has increased exponentially for building and urban-scale energy performance simulations. Urban planners, simulation scientists, and researchers use virtual 3D city models for energy analysis, algorithms, and simulation tools. For dynamic energy simulations at the city and district level, this paper provides an overview of the available GIS data models and their levels of detail. Adhering to different norms and standards, these models also intend to describe building and construction industry data. For further investigations, CityGML data models are considered for simulations. Though geographical information modelling has many different implementations, extensions of virtual city data can also be made for domain-specific applications. Highlighting the use of the extended CityGML models for energy research, a brief introduction to the Energy Application Domain Extension (ADE) along with its significance is given. Consequently, addressing specific simulation input data, a workflow using Modelica is presented that underlines the usage of GIS information and quantifies its significance for the annual heating energy demand.
Keywords: CityGML, EnergyADE, energy performance simulation, GIS
Procedia PDF Downloads 168
3191 Experimental and Computational Fluid Dynamics Analysis of Horizontal Axis Wind Turbine
Authors: Saim Iftikhar Awan, Farhan Ali
Abstract:
Wind power has now become one of the most important resources of renewable energy. The machine which extracts kinetic energy from the wind is the wind turbine. This work concerns the electrical power analysis of a horizontal axis wind turbine in order to check the efficiency of different configurations of wind turbines to get maximum output, and the comparison of experimental and Computational Fluid Dynamics (CFD) results. Different experiments have been performed to obtain the configuration that gives the maximum electrical power output by changing different parameters such as the number of blades, blade shape, wind speed, etc. In the first step, experimentation is done, and then the same configuration is designed in 3D CAD software. After a series of experiments, it has been found that the turbine with four blades at an angle of 75° gives the maximum power output, and that an increase in wind speed increases the power output. The models designed in CAD software are imported into ANSYS FLUENT to predict the mechanical power. This mechanical power is then converted into electrical power, and the results were approximately the same in both cases. In the end, a comparison has been made between the results of the experiments and ANSYS FLUENT.
Keywords: computational analysis, power efficiency, wind energy, wind turbine
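As a back-of-the-envelope companion to the power comparison described above, the sketch below applies the standard rotor-power relation P = ½ρAv³·Cp times a generator efficiency; the rotor radius, power coefficient, and efficiency are illustrative assumptions, not the parameters of the tested rig.

```python
import math

def electrical_power(wind_speed, rotor_radius, cp, generator_eff, air_density=1.225):
    """P_elec = 0.5 * rho * A * v^3 * Cp * eta, with A the rotor swept area."""
    area = math.pi * rotor_radius ** 2
    return 0.5 * air_density * area * wind_speed ** 3 * cp * generator_eff

# Illustrative small-rotor numbers, not the experimental rig's actual parameters
for v in (4.0, 6.0, 8.0):
    p = electrical_power(v, rotor_radius=0.5, cp=0.30, generator_eff=0.85)
    print(f"v = {v:.0f} m/s -> approx. {p:.1f} W")
```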
Procedia PDF Downloads 159
3190 Cloud-Based Mobile-to-Mobile Computation Offloading
Authors: Ebrahim Alrashed, Yousef Rafique
Abstract:
Mobile devices have drastically changed the way we do things on the move. They are relied on heavily to perform tasks that approach desktop computer capability. There has been a rapid increase in the computational power of these devices; however, battery technology is still the bottleneck of their evolution. The primary modern approach to tackle this issue is offloading computation to the cloud, which proves to be latency-expensive and requires high network bandwidth. In this paper, we explore efforts to perform barter-based mobile-to-mobile offloading. We define a protocol and present an architecture to facilitate the development of such a system. We further highlight the deployment and security challenges.
Keywords: computational offloading, power conservation, cloud, sandboxing
Procedia PDF Downloads 388
3189 Molecular Interactions Driving RNA Binding to hnRNPA1 Implicated in Neurodegeneration
Authors: Sakina Fatima, Joseph-Patrick W. E. Clarke, Patricia A. Thibault, Subha Kalyaanamoorthy, Michael Levin, Aravindhan Ganesan
Abstract:
Heterogeneous nuclear ribonucleoprotein A1 (hnRNPA1 or A1) is associated with the pathology of different diseases, including neurological disorders and cancers. In particular, the aggregation and dysfunction of A1 have been identified as a critical driver of neurodegeneration (NDG) in Multiple Sclerosis (MS). Structurally, A1 includes a low-complexity domain (LCD) and two RNA-recognition motifs (RRMs), and their interdomain coordination may play a crucial role in A1 aggregation. Previous studies propose that RNA inhibitors or nucleoside analogs that bind to the RRMs can potentially prevent A1 self-association. Therefore, a molecular-level understanding of the structures, dynamics, and nucleotide interactions with the A1 RRMs can be useful for developing therapeutics for NDG in MS. In this work, a combination of computational modelling and biochemical experiments was employed to analyze a set of RNA-A1 RRM complexes. Initially, the atomistic models of the RNA-RRM complexes were constructed by modifying known crystal structures (e.g., PDBs: 4YOE and 5MPG) and through molecular docking calculations. The complexes were optimized using molecular dynamics simulations (200-400 ns), and their binding free energies were computed. The binding affinities of the selected complexes were validated using a thermal shift assay. Further, the most important molecular interactions that contributed to the overall stability of the RNA-A1 RRM complexes were deduced. The results highlight that adenine and guanine are the most suitable nucleotides for high-affinity binding with A1. These insights will be useful in the rational design of nucleotide analogs for targeting the A1 RRMs.
Keywords: hnRNPA1, molecular docking, molecular dynamics, RNA-binding proteins
Procedia PDF Downloads 119
3188 Computational Study of Flow and Heat Transfer Characteristics of an Incompressible Fluid in a Channel Using Lattice Boltzmann Method
Authors: Imdat Taymaz, Erman Aslan, Kemal Cakir
Abstract:
The Lattice Boltzmann Method (LBM) is used to computationally investigate the laminar flow and heat transfer of an incompressible fluid with constant material properties in a 2D channel with a built-in triangular prism. Both momentum and energy transport are modelled by the LBM. A uniform lattice structure with a single-time relaxation rule is used. Interpolation methods are applied to obtain higher flexibility on the computational grid, where the information is transferred from the lattice structure to the computational grid by Lagrange interpolation. The flow is studied for different Reynolds numbers, while the Prandtl number is kept constant at 0.7. The results show how the presence of a triangular prism affects the flow and heat transfer patterns for the steady-state and unsteady-periodic flow regimes. As an evaluation of the accuracy of the developed LBM code, the results are compared with those obtained by a commercial CFD code. It is observed that the present LBM code produces results of similar accuracy to the well-established CFD code, and, additionally, the LBM needs much less CPU time for the prediction of the unsteady phenomena.
Keywords: laminar forced convection, LBM, triangular prism
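The sketch below shows the single-relaxation-time (BGK) D2Q9 collision-and-streaming step that underlies this class of solver; thermal transport, the triangular prism boundary, and the Lagrange interpolation onto a separate computational grid are omitted, and the initial condition is an arbitrary uniform flow on a periodic domain.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium for each of the 9 directions."""
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux ** 2 + uy ** 2)
    return rho * w[:, None, None] * (1.0 + cu + 0.5 * cu ** 2 - usq)

def lbm_step(f, tau):
    """One BGK collision followed by streaming along the lattice directions."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau           # single-time relaxation
    for i in range(9):                                    # streaming (periodic here)
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

# Illustrative start: uniform density with a small uniform velocity
nx, ny, tau = 64, 64, 0.6
f = equilibrium(np.ones((nx, ny)), 0.05 * np.ones((nx, ny)), np.zeros((nx, ny)))
for _ in range(100):
    f = lbm_step(f, tau)
print("mass conserved:", np.isclose(f.sum(), nx * ny))
```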
Procedia PDF Downloads 373
3187 Computational Study of Blood Flow Analysis for Coronary Artery Disease
Authors: Radhe Tado, Ashish B. Deoghare, K. M. Pandey
Abstract:
The aim of this study is to estimate the effect of blood flow through the coronary artery of the human heart so as to assess coronary artery disease. Velocity, wall shear stress (WSS), strain rate, and wall pressure distribution are some of the important hemodynamic parameters that are non-invasively assessed with computational fluid dynamics (CFD). These parameters are used to identify the mechanical factors responsible for plaque progression and/or rupture in the left coronary artery (LCA). The initial step for the CFD simulations was the construction of a geometrical model of the LCA. A patient-specific artery model is constructed using computed tomography (CT) scan data with the help of MIMICS Research 19.0. For the CFD analysis, ANSYS FLUENT 14.5 is used. Hemodynamic parameters were quantified and flow patterns were visualized both in the absence and presence of coronary plaques. The wall pressure continuously decreased towards the distal segments and showed pressure drops in the stenotic segments. Areas of high WSS and high flow velocities were found adjacent to plaque deposits.
Keywords: angiography, computational fluid dynamics (CFD), time-average wall shear stress (TAWSS), wall pressure, wall shear stress (WSS)
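As a sanity-check companion to the CFD-derived WSS values, the sketch below evaluates the Poiseuille estimate of wall shear stress for steady laminar flow in a straight tube; the flow rate, lumen radius, and viscosity are generic literature-style values, not the patient-specific data of this study.

```python
import math

def poiseuille_wss(flow_rate_m3s, radius_m, viscosity_pa_s):
    """Wall shear stress for steady laminar pipe flow: tau_w = 4*mu*Q / (pi*R^3)."""
    return 4.0 * viscosity_pa_s * flow_rate_m3s / (math.pi * radius_m ** 3)

# Generic coronary-scale numbers (illustrative only)
q = 1.0e-6        # ~1 mL/s mean flow [m^3/s]
r = 1.5e-3        # ~1.5 mm lumen radius [m]
mu = 3.5e-3       # blood dynamic viscosity [Pa.s]
print(f"Poiseuille wall shear stress ~ {poiseuille_wss(q, r, mu):.2f} Pa")
```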
Procedia PDF Downloads 183
3186 Optimization of Pumping Power of Water between Reservoir Using Ant Colony System
Authors: Thiago Ribeiro De Alencar, Jacyro Gramulia Junior, Patricia Teixeira Leite Asano
Abstract:
The area of the electricity sector that deals with meeting energy needs through hydropower and thermoelectric plants in a coordinated way is called Hydrothermal Power System Operation Planning. The aim of this area is to find an operating policy that provides electrical power to the system in a specified period while minimizing the operating cost. This article proposes a computational tool for solving the planning problem. In addition, this article introduces a methodology to find new transfer points between reservoirs, increasing energy production in cascaded hydroelectric power plant systems. The computational tool proposed in this article applies: i) genetic algorithms to optimize the water transfer and the operation of hydroelectric plant systems; and ii) an Ant Colony algorithm to find the trajectory with the least pumping energy for the construction of transfer pipes between reservoirs, considering the topography of the region. The computational tool has a database consisting of 35 hydropower plants and 41 reservoirs, which are part of the southeastern Brazilian system, and has been implemented in an individualized way.
Keywords: ant colony system, genetic algorithms, hydroelectric, hydrothermal systems, optimization, water transfer between rivers
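The sketch below illustrates the ant-colony transition rule (pheromone weight combined with a cost-based visibility term) that drives path selection between candidate pipe segments; the tiny graph, costs, and parameters are invented for illustration and are unrelated to the 35-plant Brazilian database.

```python
import random

def choose_next_node(current, unvisited, pheromone, cost, alpha=1.0, beta=2.0):
    """Ant Colony System style transition: p_ij ~ tau_ij^alpha * (1/cost_ij)^beta."""
    weights = [pheromone[(current, j)] ** alpha * (1.0 / cost[(current, j)]) ** beta
               for j in unvisited]
    total = sum(weights)
    return random.choices(unvisited, weights=[w / total for w in weights])[0]

# Tiny illustrative graph: pumping-energy cost of candidate pipe segments from node 'A'
cost = {("A", "B"): 4.0, ("A", "C"): 2.5, ("A", "D"): 6.0}
pheromone = {edge: 1.0 for edge in cost}

picks = [choose_next_node("A", ["B", "C", "D"], pheromone, cost) for _ in range(1000)]
print({node: picks.count(node) for node in "BCD"})   # the cheapest segment 'C' dominates
```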
Procedia PDF Downloads 326
3185 Active Contours for Image Segmentation Based on Complex Domain Approach
Authors: Sajid Hussain
Abstract:
A complex domain approach for image segmentation based on active contours has been designed, in which the contour deforms step by step to partition an image into a number of expedient regions. A novel region-based trigonometric complex pressure force function is proposed, which propagates around the region of interest using image forces. The signed trigonometric force function controls the propagation of the active contour, and the active contour stops accurately on the exact edges of the object. The proposed model makes the level set function binary and uses a Gaussian smoothing kernel to regularize it and avoid the re-initialization procedure. The working principle of the proposed model is as follows: the real image data are transformed into complex data by taking iota (i) times the image data, and the average of iota (i) times the horizontal and vertical components of the gradient of the image data is inserted into the proposed model to capture the complex gradient of the image data. A simple finite difference technique has been used to implement the proposed model. The efficiency and robustness of the proposed model have been verified and compared with other state-of-the-art models.
Keywords: image segmentation, active contour, level set, Mumford and Shah model
Procedia PDF Downloads 113
3184 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome these limitations. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions to the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One of the approaches commonly implemented is known as 'direct numerical simulation', DNS. This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the 'Kolmogorov scale'. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid set becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks. At this time in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error behavior will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems related to flows that involve strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem for low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of the vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
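Since the viscous Burgers equation is used above as a demonstration problem, the sketch below marches it with a plain explicit upwind/central finite-difference scheme on a periodic domain; this is a generic reference scheme for comparison purposes and is not the IDS formulation itself.

```python
import numpy as np

def burgers_viscous(nx=201, nu=0.07, t_end=0.5):
    """Explicit march of u_t + u*u_x = nu*u_xx on [0, 2*pi] with periodic ends:
    first-order upwind convection (u > 0 here) and central diffusion."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x) + 2.0                        # smooth, strictly positive profile
    dt = 0.4 * min(dx / np.max(np.abs(u)), dx * dx / (2.0 * nu))
    t = 0.0
    while t < t_end:
        um, up = np.roll(u, 1), np.roll(u, -1)
        conv = u * (u - um) / dx               # upwind convective term
        diff = nu * (up - 2.0 * u + um) / dx ** 2
        u = u + dt * (diff - conv)
        t += dt
    return x, u

x, u = burgers_viscous()
print("solution bounds after t = 0.5:", u.min(), u.max())
```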
Procedia PDF Downloads 137
3183 Revised Tower Earthing Design in High-Voltage Transmission Network for High-Frequency Lightning Condition
Authors: Azwadi Mohamad, Pauzi Yahaya, Nadiah Hudi
Abstract:
The earthing system for a high-voltage transmission tower is designed to protect working personnel and equipment, and to maintain the quality of supply during a fault. The existing earthing system for transmission towers in TNB’s system is purposely designed for normal power-frequency (low-frequency) fault conditions, taking into account the step and touch voltages. This earthing design is found to be inapt, to a certain extent, for the lightning (transient) condition, which involves a high-frequency domain. The current earthing practice of laying the electrodes radially in straight 60 m horizontal lines under the ground, in order to achieve the specified impedance value of less than 10 Ω, was deemed ineffective in reducing the high-frequency impedance. This paper introduces a new earthing design that produces a low impedance value in the high-frequency domain, without compromising the low-frequency impedance performance. The performances of this new earthing design, as well as of the existing design, are simulated for various soil resistivity values at varying frequency. The proposed concentrated earthing design is found to possess low TFR values at both low and high frequencies. A good earthing design should have a fine balance between compact and radial electrodes under the ground.
Keywords: earthing design, high-frequency, lightning, tower footing impedance
Procedia PDF Downloads 161
3182 Minimizing Total Completion Time in No-Wait Flowshops with Setup Times
Authors: Ali Allahverdi
Abstract:
The m-machine no-wait flowshop scheduling problem is addressed in this paper. The objective is to minimize the total completion time subject to the constraint that the makespan value is not greater than a certain value. Setup times are treated as separate from processing times. Several recent algorithms are adapted and proposed for the problem. An extensive computational analysis has been conducted for the evaluation of the proposed algorithms. The computational analysis indicates that the best proposed algorithm performs significantly better than the best previously existing algorithm.
Keywords: scheduling, no-wait flowshop, algorithm, setup times, total completion time, makespan
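For concreteness, the sketch below evaluates the total completion time of a given job sequence in a small no-wait flowshop with separate setup times; the instance is invented, and the setup is modelled (as an assumption) as starting only after the preceding job leaves the machine, which is one of several possible conventions.

```python
def total_completion_time(sequence, proc, setup):
    """Evaluate a job sequence in an m-machine no-wait flowshop.

    proc[j][k]  : processing time of job j on machine k
    setup[j][k] : setup time for job j on machine k (assumed to start only after
                  the preceding job leaves machine k)
    Each job, once started, flows through all machines without waiting.
    """
    m = len(proc[sequence[0]])
    machine_free = [0.0] * m                 # when each machine finishes its last job
    total = 0.0
    for j in sequence:
        offsets = [sum(proc[j][:k]) for k in range(m)]      # arrival offsets per machine
        start = max(machine_free[k] + setup[j][k] - offsets[k] for k in range(m))
        for k in range(m):
            machine_free[k] = start + offsets[k] + proc[j][k]
        total += machine_free[m - 1]         # completion time of job j
    return total

# Tiny illustrative 3-job, 2-machine instance
proc = {0: [3, 2], 1: [1, 4], 2: [2, 2]}
setup = {0: [1, 1], 1: [1, 2], 2: [2, 1]}
print(total_completion_time([1, 2, 0], proc, setup))   # 28
print(total_completion_time([0, 1, 2], proc, setup))   # 33: the sequence matters
```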
Procedia PDF Downloads 340
3181 CFD Simulation and Investigation of Critical Two-Phase Flow Rate in Wellhead Choke
Authors: Alireza Rafie Boldaji, Ahmad Saboonchi
Abstract:
Chokes are commonly used in oil and gas production systems. A choke is a restriction basically designed to control the flow rates of oil and gas wells, to prevent downstream disturbances from propagating upstream (critical flow), and to protect the surface equipment facilities against slugging at high flowing pressures. There are different methods to calculate the multiphase flow rate: one of the multiphase flow measurement methods is separation and measurement by a one-phase flow meter; another common method is the use of a movable separator; both operations are very labor-intensive and costly. The method currently used is based on the flow differential pressure across the choke. Three groups of correlations describing two-phase flow through wellhead chokes were examined. The first group involved simple empirical equations similar to those of Gilbert, the second group comprised derived equations of two-phase flow incorporating PVT properties, and the third group is the computational method. In this article, we calculate the flow of oil and gas through the choke by simulating this two-phase flow with a computational fluid dynamics method; we use ANSYS Fluent for this simulation and finally compare the results of the computational simulation with the empirical equations. The results show good agreement between the experimental and numerical results.
Keywords: CFD, two-phase, choke, critical
Procedia PDF Downloads 277
3180 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
The models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is used for retraining and reprogramming the ANI models, and b) think, understand, be conscious, cognize, or infer in a state of uncertainty and under changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a Conception, Model, and Method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty (CPCSFSCBMSUU). The system uses a neural network as its computational memory, operates under uncertainty, and activates its functions through perception, identification of real objects, fuzzy situational control, and the forming of images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features; the meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems for: fuzzy situational control of all processes, computational perception, identification of reactions and actions, Psycholinguistic Cognitive Fuzzy Logical Inference, Decision Making, Reasoning, Systems Thinking, Planning, Awareness, Consciousness, Cognition, Intuition, and Wisdom; the analysis and processing of psycholinguistic, subject, visual, signal, sound, and other objects; the accumulation and use of data, information, and knowledge in the Memory; and communication and interaction with other computing systems, robots, and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of Situational Control, Fuzzy Logic, Psycholinguistics, Informatics, and modern possibilities of Data Science were applied. The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.
Keywords: computational brain, mind, psycholinguistic, system, under uncertainty
Procedia PDF Downloads 177
3179 Collocation Method for Coupled System of Boundary Value Problems with Cubic B-Splines
Authors: K. N. S. Kasi Viswanadham
Abstract:
Coupled systems of second-order linear and nonlinear boundary value problems occur in various fields of science and engineering. In the formulation of the problem, any one of 81 possible types of boundary conditions may occur. These 81 possible boundary conditions are written as a combination of four boundary conditions. To solve a coupled system of boundary value problems with these converted boundary conditions, a collocation method with cubic B-splines as basis functions has been developed. In the collocation method, the mesh points of the space-variable domain have been selected as the collocation points. The basis functions have been redefined into a new set of basis functions whose number matches the number of mesh points in the space-variable domain. The solution of a nonlinear boundary value problem has been obtained as the limit of a sequence of solutions of linear boundary value problems generated by the quasilinearization technique. Several linear and nonlinear boundary value problems are presented to test the efficiency of the proposed method, and it is found that the numerical results obtained by the present method are in good agreement with the exact solutions available in the literature.
Keywords: collocation method, coupled system, cubic B-splines, mesh points
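A small self-contained sketch of cubic B-spline collocation on a single (uncoupled) linear second-order boundary value problem is given below, using the mesh points as collocation points as described above; the test equation, mesh size, and the clamped knot vector are illustrative choices and do not reproduce the paper's redefined basis or the coupled-system setup.

```python
import numpy as np
from scipy.interpolate import BSpline

# Cubic B-spline collocation for u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0,
# whose exact solution is sin(pi x). Mesh points double as collocation points.
n = 10                                              # number of mesh intervals
x = np.linspace(0.0, 1.0, n + 1)
knots = np.concatenate(([0.0] * 3, x, [1.0] * 3))   # clamped cubic knot vector
nb = len(knots) - 4                                 # number of cubic basis functions (n + 3)
basis = [BSpline(knots, np.eye(nb)[i], 3) for i in range(nb)]

A = np.zeros((nb, nb))
rhs = np.zeros(nb)
A[0, :] = [B(0.0) for B in basis]                   # boundary condition u(0) = 0
A[-1, :] = [B(1.0) for B in basis]                  # boundary condition u(1) = 0
for i, xc in enumerate(x):                          # collocate the ODE at the mesh points
    A[i + 1, :] = [B.derivative(2)(xc) for B in basis]
    rhs[i + 1] = -np.pi ** 2 * np.sin(np.pi * xc)

coeff = np.linalg.solve(A, rhs)
pts = np.linspace(0.0, 1.0, 21)
u = sum(c * B(pts) for c, B in zip(coeff, basis))
print("max error vs. exact solution:", np.abs(u - np.sin(np.pi * pts)).max())
```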
Procedia PDF Downloads 209
3178 ISIS after the Defeat of the Islamic Caliphate: The Rise of Cyber-Jihad
Authors: Spyridon Plakoudas
Abstract:
After the capture of Al-Raqqah and the defeat of the short-lived Islamic Caliphate in 2017, everyone predicted the end of ISIS. However, ISIS proved far more resilient than initially thought. The militant group quickly regrouped after its defeat and started a low-intensity guerrilla campaign in central Iraq (near Kirkuk and Mosul) and north-eastern Syria (near Deir ez-Zor). At the same time, ISIS doubled down on its cyber-campaign; in fact, ISIS is as active in the cyber-domain as it was during the peak of its power in 2015. This paper, a spin-off from a co-authored book on the Syrian Civil War (due to be published by Rowman and Littlefield), intends to examine how ISIS operates in the cyber-domain and how this "Cyber-Caliphate" under re-construction is associated with its post-2017 strategy. This paper will draw on the discipline of War Studies (with an emphasis on Cyber-Security and Insurgency/Counter-Insurgency) and will benefit from the insights of interviewed experts in the field (e.g., Hassan Hassan). This paper will explain how the successful operation of ISIS in cyberspace preserves the myth of the "caliphate" amongst its worldwide followers (against the odds) and sustains the group's ongoing insurgency in Syria and Iraq; in addition, this paper will suggest how this cyber-threat can best be countered.
Keywords: ISIS, cyber-jihad, Syrian Civil War, cyber-terrorism, insurgency and counter-insurgency
Procedia PDF Downloads 134
3177 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participant 2 (very severe) and Participants 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participants performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, which measures switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared to be influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT).
Keywords: attention, language, cognitive rehabilitation, neurofeedback
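For readers curious about the neurofeedback signal itself, the sketch below computes a theta-to-beta power ratio of the kind Play Attention-style systems track, using Welch's method on a single EEG channel; the band edges and the synthetic signal are assumptions, not details of the commercial system.

```python
import numpy as np
from scipy.signal import welch

def theta_beta_ratio(eeg, fs, theta=(4.0, 8.0), beta=(13.0, 30.0)):
    """Ratio of theta-band to beta-band power from a single-channel EEG segment."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band_power = lambda lo, hi: psd[(freqs >= lo) & (freqs < hi)].sum()
    return band_power(*theta) / band_power(*beta)

# Synthetic 10 s segment: a 6 Hz rhythm plus weaker 20 Hz activity and noise
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = (30e-6 * np.sin(2 * np.pi * 6 * t)
       + 10e-6 * np.sin(2 * np.pi * 20 * t)
       + 5e-6 * rng.standard_normal(t.size))
print(f"theta/beta ratio: {theta_beta_ratio(eeg, fs):.2f}")   # > 1 for this signal
```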
Procedia PDF Downloads 17
3176 Mapping Feature Models to Code Using a Reference Architecture: A Case Study
Authors: Karam Ignaim, Joao M. Fernandes, Andre L. Ferreira
Abstract:
Mapping the artifacts coming from a family of similar products developed in an ad-hoc manner to the artifacts making up the resulting software product line (SPL) plays a key role in maintaining consistency between requirements and code. This paper presents a feature mapping approach that focuses on tracing the artifact coming from the migration process, the current feature model (FM), to the other artifacts of the resulting SPL: the reference architecture and the code. Thus, our approach relates each feature of the current FM to its locations in the implementation code, using the reference architecture as an intermediate artifact (as a central point) to preserve consistency among them during SPL evolution. The approach uses a particular artifact (i.e., a traceability tree) as a solution for managing the mapping process. Tool support is provided using friendlyMapper. We have evaluated the feature mapping approach and the tool support by putting the approach into practice (i.e., conducting a case study) in the automotive domain, for the Classical Sensor Variants Family at Bosch Car Multimedia S.A. The evaluation reveals that the mapping approach presented in this paper fits the automotive domain.
Keywords: feature location, feature models, mapping, software product lines, traceability
Procedia PDF Downloads 127
3175 Accelerating Molecular Dynamics Simulations of Electrolytes with Neural Network: Bridging the Gap between Ab Initio Molecular Dynamics and Classical Molecular Dynamics
Authors: Po-Ting Chen, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Classical molecular dynamics (CMD) simulations are highly efficient for material simulations but have limited accuracy. In contrast, ab initio molecular dynamics (AIMD) provides high precision by solving the Kohn–Sham equations, yet requires significant computational resources, restricting the size of systems and the time scales that can be simulated. To address these challenges, we employed NequIP, a machine learning model based on an E(3)-equivariant graph neural network, to accelerate molecular dynamics simulations of a 1 M LiPF6 in EC/EMC (v/v 3:7) electrolyte for Li battery applications. AIMD calculations were initially conducted using the Vienna Ab initio Simulation Package (VASP) to generate highly accurate atomic positions, forces, and energies. These data were then used to train the NequIP model, which efficiently learns from the provided data. NequIP achieved AIMD-level accuracy with significantly less training data. After training, NequIP was integrated into the LAMMPS software to enable molecular dynamics simulations of larger systems over longer time scales. This method overcomes the computational limitations of AIMD while addressing the accuracy limitations of CMD, providing an efficient and precise computational framework. This study showcases NequIP’s applicability to electrolyte systems, particularly for simulating the dynamics of LiPF6 ionic mixtures. The results demonstrate substantial improvements in both computational efficiency and simulation accuracy, highlighting the potential of machine learning models to enhance molecular dynamics simulations.
Keywords: lithium-ion batteries, electrolyte simulation, molecular dynamics, neural network
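A common final check in such workflows is a force-level comparison between the trained potential and the AIMD reference; the sketch below computes a force RMSE over placeholder arrays standing in for DFT and model predictions, and deliberately uses no NequIP- or LAMMPS-specific API.

```python
import numpy as np

def force_rmse(f_reference, f_predicted):
    """Component-wise force RMSE (e.g. in eV/Angstrom) over all atoms and frames."""
    diff = np.asarray(f_predicted) - np.asarray(f_reference)
    return np.sqrt(np.mean(diff ** 2))

# Placeholder arrays standing in for AIMD reference forces and ML-predicted forces
rng = np.random.default_rng(0)
f_dft = rng.normal(scale=1.0, size=(200, 64, 3))          # 200 frames, 64 atoms, xyz
f_ml = f_dft + rng.normal(scale=0.04, size=f_dft.shape)   # small artificial model error
print(f"force RMSE: {force_rmse(f_dft, f_ml):.3f} eV/Angstrom")
```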
Procedia PDF Downloads 17