Search results for: computational linguistics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2325

615 Simplified Analysis Procedure for Seismic Evaluation of Tall Building at Structure and Component Level

Authors: Tahir Mehmood, Pennung Warnitchai

Abstract:

Simplified static analysis procedures such as the Nonlinear Static Procedure (NSP) are gaining popularity for the seismic evaluation of buildings. However, these simplified procedures account only for the seismic response of the fundamental vibration mode of the structure. Other procedures that can take the higher modes of vibration into account lack the accuracy needed to determine component responses. Hence, such procedures are not suitable for evaluating structures in which many vibration modes may participate significantly or where component responses need to be evaluated. Moreover, these procedures were found to be either computationally expensive or tedious for obtaining individual component responses. In this paper, a simplified but accurate procedure, the Uncoupled Modal Response History Analysis (UMRHA) procedure, is studied. In this procedure, the nonlinear response of each vibration mode is first computed, and the modal responses are later combined into the total response of the structure. The responses of four tall buildings are computed by this simplified UMRHA procedure and compared with those obtained from the Nonlinear Response History Analysis (NLRHA) procedure. The comparison shows that the UMRHA procedure is able to accurately compute the global responses, i.e., story shears, story overturning moments, floor accelerations, and inter-story drifts, as well as the component-level responses of these tall buildings, whose heights vary from 20 to 44 stories. The required computational effort is also extremely low compared to that of the NLRHA procedure.
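
As a rough illustration of the modal decomposition idea behind UMRHA (each vibration mode is treated as an uncoupled SDOF oscillator whose response history is computed and then summed with the other modes), the sketch below runs a linear modal response-history analysis for a toy three-story shear building. The masses, stiffnesses, damping ratio, and ground motion are placeholder values, and the nonlinear hysteretic modal behavior of the actual UMRHA procedure is not modeled.

```python
# Minimal linear modal response-history sketch (UMRHA-style decomposition).
# Placeholder 3-story shear building; the real procedure replaces each modal
# SDOF with a nonlinear (hysteretic) oscillator before summing the modes.
import numpy as np
from scipy.linalg import eigh
from scipy.integrate import solve_ivp

m = np.diag([2.0e5, 2.0e5, 1.5e5])            # floor masses [kg] (assumed)
k = 2.5e7 * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]])          # story stiffness matrix [N/m] (assumed)
zeta = 0.05                                    # modal damping ratio (assumed)

w2, phi = eigh(k, m)                           # generalized eigenproblem K*phi = w^2*M*phi
wn = np.sqrt(w2)                               # natural circular frequencies
iota = np.ones(3)
gamma = (phi.T @ m @ iota) / np.diag(phi.T @ m @ phi)   # modal participation factors

t = np.linspace(0.0, 10.0, 2001)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t)  # toy ground acceleration [m/s^2]
ag_f = lambda ti: np.interp(ti, t, ag)

def modal_sdof(wn_n, gamma_n):
    """Solve q'' + 2*zeta*wn*q' + wn^2*q = -gamma*ag(t) for one mode."""
    rhs = lambda ti, y: [y[1], -2*zeta*wn_n*y[1] - wn_n**2*y[0] - gamma_n*ag_f(ti)]
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.0, 0.0], t_eval=t, max_step=0.005)
    return sol.y[0]

# Uncoupled modal responses, then combined into total floor displacements.
q = np.array([modal_sdof(wn[n], gamma[n]) for n in range(3)])
u = phi @ q                                    # rows: floors, columns: time steps
print("peak roof displacement [m]:", np.abs(u[-1]).max())
```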

Keywords: higher mode effects, seismic evaluation procedure, tall buildings, component responses

Procedia PDF Downloads 342
614 Numerical Investigation of Tsunami Flow Characteristics and Energy Reduction through Flexible Vegetation

Authors: Abhishek Mukherjee, Juan C. Cajas, Jenny Suckale, Guillaume Houzeaux, Oriol Lehmkuhl, Simone Marras

Abstract:

The investigation of tsunami flow characteristics and the quantification of tsunami energy reduction through coastal vegetation are important for understanding the protective benefits of nature-based mitigation parks. In the present study, a three-dimensional non-hydrostatic incompressible Computational Fluid Dynamics model with a two-way coupled fluid-structure interaction (FSI) approach is used. After validating the numerical model against experimental data, tsunami flow characteristics have been investigated by varying vegetation density, modulus of elasticity, the gap between stems, and the arrangement or distribution of vegetation patches. Streamwise depth-averaged velocity profiles, turbulent kinetic energy, energy flux reflection, and dissipation extracted from the numerical study are presented. These diagnostics are essential to assess the importance of different parameters when designing proper coastal defense systems. When a tsunami wave reaches the shore, it transforms into undular bores, which induce scour around offshore structures and sediment transport. The bed shear stress, instantaneous turbulent kinetic energy, and near-bed vorticity are presented to estimate the importance of vegetation in preventing tsunami-induced scour and sediment transport.

Keywords: coastal defense, energy flux, fluid-structure interaction, natural hazards, sediment transport, tsunami mitigation

Procedia PDF Downloads 150
613 Over Cracking in Furnace and Corrective Action by Computational Fluid Dynamics (CFD) Analysis

Authors: Mokhtari Karchegani Amir, Maboudi Samad, Azadi Reza, Dastanian Raoof

Abstract:

Marun's petrochemical cracking furnaces have a very comprehensive operating control system for combustion and related equipment, utilizing advanced instrument circuits. However, after several years of operation, numerous problems arose in the pyrolysis furnaces. A team of experts conducted an audit, revealing that the furnaces were over-designed, leading to excessive consumption of air and fuel. This issue was related to the burners' shutter settings, which had not been configured properly. The operations department had responded by increasing the induced draft fan speed and forcing the instrument switches to counteract the wind effect in the combustion chamber. Using Fluent and Gambit software, the furnaces were analyzed. The findings indicated that this situation elevated the convection part's temperature, causing uneven heat distribution inside the furnace. Consequently, this led to overheating in the convection section and excessive cracking within the coils in the radiation section. The increased convection temperature damaged convection parts and resulted in equipment blockages downstream of the furnaces due to the production of more coke and tar in the process. To address these issues, corrective actions were implemented. The excess air for burners and combustion chambers was properly set, resulting in improved efficiency, reduced emissions of environmentally harmful gases, prevention of creep in coils, decreased fuel consumption, and lower maintenance costs.

Keywords: furnace, coke, CFD analysis, over cracking

Procedia PDF Downloads 77
612 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularized learning algorithm, is based on generalized single-hidden-layer feedforward neural networks (SLFNs), whose hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper presents the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. ELM provides a unified learning platform in which the feature mapping, that is, the mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in the cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance at very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometric and conventional attacks.
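
To make the ELM component concrete, the following is a minimal sketch of regularized ELM regression in Python: the hidden-layer parameters are random and never tuned, and only the output weights are solved in closed form. The network size, regularization constant, and the toy mapping being learned are assumptions for illustration; this is not the watermark embedding/extraction pipeline of the paper.

```python
# Minimal regularized Extreme Learning Machine (ELM) sketch.
# Hidden-layer weights are random and never tuned; only the output weights
# are solved in closed form (ridge regression), which is what keeps ELM cheap.
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=200, C=1e3):
    """Return (W, b, beta): random feature map plus least-squares output weights."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))      # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer feature mapping
    # beta = (H'H + I/C)^-1 H'T  -- regularized ELM solution
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn a nonlinear mapping from 8-D block features to 1-D targets.
X = rng.uniform(-1, 1, size=(500, 8))
T = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = elm_train(X, T)
print("training RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - T) ** 2)))
```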

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 311
611 Physical Characterization of a Watershed for Correlation with Parameters of the Thomas Hydrological Model and Its Application in the Iber Hydrodynamic Model

Authors: Carlos Caro, Ernest Blade, Nestor Rojas

Abstract:

This study determined the relationship between basic geotechnical parameters and the parameters of the Thomas hydrological model for the water balance of rural watersheds, as a methodological calibration application suitable for distributed models such as the Iber model, a distributed simulation model for unsteady free-surface flow. Soil samples were taken at 25 points across 15 sub-basins of the Rio Piedras basin (Boy.) and characterized geotechnically through laboratory tests. The Thomas model physically characterizes the input area with only four parameters (a, b, c, d). Establishing a measurable relationship between the geotechnical parameters and these four hydrological parameters helps determine subsurface, groundwater, and surface flows in a more agile manner. The intention is thus to constrain the initial limits of the Thomas model parameters on the basis of the geotechnical characterization. In hydrogeological models of rural watersheds, calibration is an important step in the characterization of the study area. This step can require significant computational cost and time, especially if the initial parameter values before calibration lie outside the geotechnical reality. Better initial values streamline the calibration process, since the geotechnical characterization of the area provides a sound starting range of variation for the calibration parameters.

Keywords: distributed hydrology, hydrological and geotechnical characterization, Iber model

Procedia PDF Downloads 522
610 Mandate of Heaven and Serving the People in Chinese Political Rhetoric: An Evolving Discourse System across Three Thousand Years

Authors: Weixiao Wei, Chris Shei

Abstract:

This paper describes the Mandate of Heaven as a source of justification for the ruling regime in ancient China approximately three thousand years ago. Initially, the kings of the Shang dynasty simply nominated themselves as the sons of Heaven sent to Earth to rule the common people. As the last generation of these kings became corrupted and ruled with brutal force and cruelty, which directly caused their destruction, the succeeding kings of the Zhou dynasty realised the importance of virtue and the provision of goods to the people. The legitimacy of ruling regimes came to rest not on random allocation of the throne by an unknown supernatural force but on a foundation comprising morality and the ability to provide goods. The latter element was picked up by the current ruling regime, the Chinese Communist Party, and became the cornerstone of its political legitimacy, also known as 'performance legitimacy', whereby economic development accounts for the satisfaction of the people in place of elections and other democratic means of providing legal-rational legitimacy. Under this circumstance, it becomes important for the ruling party to use political rhetoric to convince people of the good performance of the government in the economy, morality, and foreign policy. Thus, we see a great deal of propaganda material in both government policy statements and international press conference announcements. The former consists mainly of important speeches made by prominent figures at Party conferences, which are not only made publicly available on government websites but also become obligatory reading material for university entrance examinations. The latter consists of announcements about foreign policies, strategies, and actions taken by the government regarding foreign affairs, made at international conferences and offered in Chinese-English bilingual versions on official websites. This documentation strategy creates an impressive image of the Chinese Communist Party as domestically competent and internationally strong, taking care of the people it governs in terms of economic needs and defending the country against foreign interference and global adversities. This political discourse system, comprising reading materials fully extractable from government websites, also becomes an excellent repertoire for teaching and research in contemporary Chinese language, discourse and rhetoric, Chinese culture and tradition, Chinese political ideology, and Chinese-English translation. This paper aims to provide a detailed and comprehensive description of the current Chinese political discourse system, arguing for its lineage from the rhetorical convention of the Mandate of Heaven in ancient China and its current concentration on serving the people in place of elections, human rights, and freedom of speech. The paper also provides guidelines on how this discourse system, and the official documents created under it, can serve as research and teaching materials in applied linguistics.

Keywords: mandate of heaven, Chinese Communist Party, performance legitimacy, serving the people, political discourse

Procedia PDF Downloads 110
609 Measurement of Solids Concentration in Hydrocyclone Using ERT: Validation Against CFD

Authors: Vakamalla Teja Reddy, Narasimha Mangadoddy

Abstract:

Hydrocyclones are used to separate particles into different size fractions in the mineral processing, chemical, and metallurgical industries. High-speed video imaging, Laser Doppler Anemometry (LDA), and X-ray and gamma-ray tomography have previously been used to measure two-phase flow characteristics in the cyclone. However, investigation of the solids flow characteristics inside the cyclone is often impeded by the nature of the process, owing to slurry opaqueness and solid metal wall vessels. In this work, a dual-plane high-speed electrical resistance tomography (ERT) system is used to measure hydrocyclone internal flow dynamics in situ. Experiments are carried out in a 3-inch hydrocyclone for feed solids concentrations in the range of 0-50%. ERT data analysis, based on an optimized FEM mesh size and reconstruction algorithms for the air-core and solids concentration tomograms, is assessed. Results are presented in terms of the air-core diameter and solids volume fraction contours, obtained using Maxwell's equation, for various hydrocyclone operational parameters. ERT confirms that the area occupied by the air core and the wall solids conductivity levels decrease with increasing feed solids concentration. An algebraic slip mixture-based multiphase computational fluid dynamics (CFD) model is used to predict the air-core size and the solids concentrations in the hydrocyclone. Validation of the air-core size and mean solids volume fractions from the ERT measurements against the CFD simulations is attempted.

Keywords: air-core, electrical resistance tomography, hydrocyclone, multi-phase CFD

Procedia PDF Downloads 379
608 Evaluation of Non-Staggered Body-Fitted Grid Based Solution Method in Application to Supercritical Fluid Flows

Authors: Suresh Sahu, Abhijeet M. Vaidya, Naresh K. Maheshwari

Abstract:

Efforts to understand the heat transfer behavior of supercritical water in the supercritical water-cooled reactor (SCWR) are ongoing worldwide to fulfill future energy demand. The higher thermal efficiency of these reactors compared to conventional nuclear reactors is one of the driving forces attracting the attention of nuclear scientists. In this work, a solution procedure is described for solving supercritical fluid flow problems in complex geometries. The solution procedure is based on a non-staggered grid. All governing equations are discretized by the finite volume method (FVM) in a curvilinear coordinate system. Convective terms are discretized by a first-order upwind scheme, and a central difference approximation is used to discretize the diffusive parts. The k-ε turbulence model with standard wall functions is employed. The SIMPLE solution procedure is implemented for the curvilinear coordinate system. Based on this solution method, a 3-D Computational Fluid Dynamics (CFD) code has been developed. In order to demonstrate the capability of this CFD code for supercritical fluid flows, heat transfer to supercritical water in circular tubes is considered as a test problem. Results obtained by the code are compared with experimental results reported in the literature.

Keywords: curvilinear coordinate, body-fitted mesh, momentum interpolation, non-staggered grid, supercritical fluids

Procedia PDF Downloads 130
607 Revealing Potential Drug Targets against Proto-Oncogene Wnt10B by Comparative Molecular Docking

Authors: Shazia Mannan, Zunera Khalid, Hammad-Ul-Mubeen

Abstract:

Wingless-type Mouse Mammary Tumor Virus (MMTV) Integration site-10B (Wnt10B) is an important member of the Wnt protein family that functions as a cellular messenger in a paracrine manner. Aberrant Wnt10B activity causes several abnormalities, including cancers of the breast, cervix, liver, gastric tract, esophagus, and pancreas, as well as physiological problems such as obesity and osteoporosis. The objective of this study was to determine possible inhibitors of aberrant Wnt10B expression in order to prevent and treat the physiological disorders associated with it. The Wnt10B 3D structure was predicted using comparative modeling and then analyzed with PROCHECK, Verify3D, and Errat. The model with an 84.54% quality value was selected and acylated to satisfy the hydrophobic nature of Wnt10B. To search for inhibitors, virtual screening was performed on the Natural Products (NP) database. The compounds were filtered, and ligand-based screening was performed using the antagonist for mouse Wnt-3A. This resulted in a library of 272 unique compounds with the most potent drug-like activities for Wnt-4. Out of the 271 molecules analyzed, three small molecules, ZINC35442871, ZINC85876388, and ZINC00754234, having activity against aberrant Wnt4 expression, were found in common through the docking experiments on Wnt10B. It is concluded that the three molecules ZINC35442871, ZINC85876388, and ZINC00754234 can be considered lead compounds for further drug design experiments against aberrant Wnt expression.

Keywords: Wnt10B inhibitors, comparative computational studies, proto-oncogene, molecular docking

Procedia PDF Downloads 156
606 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi, there are many different education curriculums, and the private schools and quality assurance sector supervises many private schools serving many nationalities. As there are many different education curriculums in Abu Dhabi to meet expats' needs, there are different requirements for registration and success. In addition, there are different age groups for starting education in each curriculum. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are not placed in the right year group, because the start and end dates of the academic year and the date-of-birth cut-offs for each year group differ between curriculums. As a result, students may be either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the student learning process. In this paper, we propose a computational framework applicable to multicultural countries such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with machine learning techniques to aid a smooth transition when assigning students to their year groups, and to provide levelling and differentiation information for students who relocate from one education curriculum to another, whilst also being able to store and access student data from anywhere throughout their academic journey.

Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning

Procedia PDF Downloads 142
605 Providing Reliability, Availability and Scalability Support for Quick Assist Technology Cryptography on the Cloud

Authors: Songwu Shen, Garrett Drysdale, Veerendranath Mannepalli, Qihua Dai, Yuan Wang, Yuli Chen, David Qian, Utkarsh Kakaiya

Abstract:

Hardware accelerators have been a promising solution for reducing the cost of cloud data centers. This paper investigates the QoS enhancement of accelerating an important datacenter workload: the web server (or proxy), which faces high computational consumption originating from secure sockets layer (SSL) or transport layer security (TLS) processing in the cloud environment. Our study reveals that for accelerator maintenance cases, such as a driver/firmware upgrade or a hardware reset due to a hardware hang, we can still provide cryptography services by switching to software during the maintenance phase and switching back to the accelerator afterwards. The switching is seamless to server applications such as Nginx that run inside a VM on top of the server. To achieve this high-availability goal, we propose a comprehensive fallback solution based on Intel® QuickAssist Technology (QAT). This approach introduces an architecture that involves collaboration between the physical function (PF) and virtual function (VF), and collaboration among the VF, OpenSSL, and the web application Nginx. The evaluation shows that our solution can provide high reliability, availability, and scalability (RAS) of the hardware cryptography service in a 7x24x365 manner in the cloud environment.

Keywords: accelerator, cryptography service, RAS, secure sockets layer/transport layer security, SSL/TLS, virtualization fallback architecture

Procedia PDF Downloads 159
604 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

Building on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one of its applications, characterized as fast and real-time. This paper presents an application of a robust pattern-matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the recognition rate of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
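
For reference, the normalized cross-correlation criterion can be sketched as below for a brute-force search over a small image; the OBIA partitioning, LiDAR-based pruning, and morphological filtering used in the paper are omitted, and the array sizes are assumptions.

```python
# Normalized cross-correlation (NCC) between a template and an equally sized
# image patch; values near +1 indicate a strong match regardless of local
# brightness and contrast.
import numpy as np

def ncc(patch, template):
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Brute-force NCC map; real pipelines restrict the search using OBIA/LiDAR cues."""
    h, w = template.shape
    rows, cols = image.shape[0] - h + 1, image.shape[1] - w + 1
    score = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            score[i, j] = ncc(image[i:i + h, j:j + w], template)
    return score

# Toy usage with a synthetic image containing the template at (20, 30).
rng = np.random.default_rng(1)
template = rng.uniform(size=(8, 8))
image = rng.uniform(size=(64, 64))
image[20:28, 30:38] = template
score = match_template(image, template)
print("best match at:", np.unravel_index(score.argmax(), score.shape))
```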

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 244
603 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using day and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI combined with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further studies of the socioeconomic impacts of regional development policy over the EEC region.
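
As an illustration of the band arithmetic behind these indices (the GEE implementation itself is not reproduced here), the sketch below computes NDBI and BUI from Landsat 8 surface-reflectance bands with NumPy. The MBUI formulation and the thresholds applied in the paper are not shown, and the example threshold value is an assumption.

```python
# Built-up index arithmetic on Landsat 8 surface-reflectance bands
# (B4 = red, B5 = NIR, B6 = SWIR1). MBUI and the paper's thresholds are omitted.
import numpy as np

def ndbi(swir1, nir):
    return (swir1 - nir) / (swir1 + nir + 1e-12)

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-12)

def bui(swir1, nir, red):
    # Built-up Index: emphasizes built-up pixels by subtracting the vegetation signal.
    return ndbi(swir1, nir) - ndvi(nir, red)

# Toy usage on random reflectance arrays; a threshold of 0 is an assumed example.
rng = np.random.default_rng(2)
red, nir, swir1 = (rng.uniform(0.01, 0.5, size=(100, 100)) for _ in range(3))
built_up_mask = bui(swir1, nir, red) > 0.0
print("built-up fraction:", built_up_mask.mean())
```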

Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 125
602 Predicting of Hydrate Deposition in Loading and Offloading Flowlines of Marine CNG Systems

Authors: Esam I. Jassim

Abstract:

The main aim of this paper is to demonstrate the capability of the model to predict the nucleation process, the growth rate, and the deposition potential of second-phase particles in gas flowlines. The primary objective of the research is to predict the risk hazards involved in the marine transportation of compressed natural gas. However, the proposed model can equally be used for other applications, including the production and transportation of natural gas in any high-pressure flowline. The proposed model employs three main components to approach the problem: a computational fluid dynamics (CFD) technique is used to compute the flow field; a nucleation model is developed and incorporated in the simulation to predict the incipient hydrate particle size and growth rate; and the deposition of the gas/particle flow is modeled using the concept of the particle deposition velocity. These components are integrated into a comprehensive model to locate hydrate deposition in natural gas flowlines. The present research is intended to foresee the deposition locations of solid particles that could occur in a real compressed natural gas loading and offloading application. A 120 m long pipeline of different sizes carrying natural gas is considered in the study. The location of particle deposition formed as a result of a restriction is determined based on the procedure described above, and the effects of water content and downstream pressure are studied. The critical flow speed that prevents such particles from accumulating over a given pipe length is also addressed.

Keywords: hydrate deposition, compressed natural gas, marine transportation, oceanography

Procedia PDF Downloads 487
601 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design and reduce the construction cost. Optimizing a design usually requires huge numerical models. If the optimization is conducted manually, there is a risk of dangerous consequences from human error, and the time spent on input and on data extraction from the output is significant. This paper presents an automation process introduced to the numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code changes the geological stratum and excavation depth under groundwater flow conditions for each 20 m section. It automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased. When the pile embedment reaches the default maximum length, the prop system is activated. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces manual labor to minimize error.

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 51
600 An Approach to Secure Mobile Agent Communication in Multi-Agent Systems

Authors: Olumide Simeon Ogunnusi, Shukor Abd Razak, Michael Kolade Adu

Abstract:

An inter-agent communication manager facilitates communication among mobile agents via a message-passing mechanism. Until now, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems have been capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure messages exchanged among a community of collaborative agents commissioned to perform specific tasks using cryptosystems. However, that approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a two-phase policy enforcement system was developed. The first phase constrains the external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent was charged with screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent was charged with monitoring and restraining its activities. This approach secures legitimate agent communication from man-in-the-middle and replay attacks.

Keywords: agent communication, introspective agent, isolation of agent, policy enforcement system

Procedia PDF Downloads 297
599 Physical and Morphological Response to Land Reclamation Projects in a Wave-Dominated Bay

Authors: Florian Monetti, Brett Beamsley, Peter McComb, Simon Weppe

Abstract:

Land reclamation from the ocean has increased considerably over past decades to support rapid urban growth worldwide. Reshaping the coastline, however, inevitably affects coastal systems. One of the main challenges for coastal oceanographers is to predict the physical and morphological responses of nearshore systems to man-made changes over multiple time scales. Fully coupled numerical models are powerful tools for simulating the wide range of interactions between the flow field and bedform morphology. Restricted and inconsistent measurements, combined with limited computational resources, typically make this exercise complex and uncertain. In the present study, we investigate the impact of proposed land reclamation within a wave-dominated bay in New Zealand. For this purpose, we first calibrated our morphological model based on the long-term evolution of the bay resulting from land reclamation carried out in the 1950s. This included the application of sedimentological spin-up and reduction techniques based on historical bathymetry datasets. The updated bathymetry, including the proposed modifications of the bay, was then used to predict the effect of the proposed land reclamation on the wave climate and morphology of the bay after one decade. We show that reshaping the bay induces a distinct symmetrical response of the shoreline, which will likely modify the nearshore wave patterns and consequently recreational activities in the area.

Keywords: coastal waves, impact of land reclamation, long-term coastal evolution, morphodynamic modeling

Procedia PDF Downloads 174
598 Comparative Mesh Sensitivity Study of Different Reynolds Averaged Navier Stokes Turbulence Models in OpenFOAM

Authors: Zhuoneng Li, Zeeshan A. Rana, Karl W. Jenkins

Abstract:

In industry, validating a case often requires a multitude of simulations, and in order to demonstrate confidence in the process, users tend to use a coarser mesh. Therefore, it is imperative to establish the coarsest mesh that can be used while maintaining reasonable simulation accuracy. To date, the two most reliable, affordable, and broadly used advanced simulation approaches are hybrid RANS (Reynolds Averaged Navier Stokes)/LES (Large Eddy Simulation) and wall-modelled LES. The potential of these two approaches will continue to be developed in the coming decades, mainly because of the unaffordable computational cost of DNS (Direct Numerical Simulation). In wall-modelled LES, the turbulence model is applied as a sub-grid scale model in the innermost layer near the wall. RANS turbulence models cover the entire boundary layer region in hybrid RANS/LES (Detached Eddy Simulation) and its variants; therefore, RANS still plays a very important role in state-of-the-art simulations. This research focuses on turbulence model mesh sensitivity analysis, where turbulence models such as S-A (Spalart-Allmaras), SSG (Speziale-Sarkar-Gatski), k-Omega transitional SST (Shear Stress Transport), k-kl-Omega, the γ-Reθ transitional model, and v2f are evaluated within OpenFOAM. The simulations are conducted for fully developed turbulent flow over a flat plate, where the skin friction coefficient and velocity profiles are obtained and compared against experimental values and DNS results. A concrete conclusion is drawn to clarify the mesh sensitivity of the different turbulence models.

Keywords: mesh sensitivity, turbulence models, OpenFOAM, RANS

Procedia PDF Downloads 261
597 Effect of Mach Number on Gust-Airfoil Interaction Noise

Authors: ShuJiang Jiang

Abstract:

The interaction of turbulence with an airfoil is an important noise source in many engineering fields, including helicopters, turbofans, and contra-rotating open rotor engines, where turbulence generated in the wake of upstream blades interacts with the leading edge of downstream blades and produces aerodynamic noise. One approach to studying turbulence-airfoil interaction noise is to model the oncoming turbulence as harmonic gusts. A compact noise source produces a dipole-like sound directivity pattern. However, when the acoustic wavelength is much smaller than the airfoil chord length, the airfoil needs to be treated as a non-compact source, the gust-airfoil interaction becomes more complicated, and multiple lobes appear in the radiated sound directivity. Capturing the short acoustic wavelength is a challenge for numerical simulations. In this work, simulations are performed for gust-airfoil interaction at different Mach numbers, using a high-fidelity direct Computational AeroAcoustics (CAA) approach based on a spectral/hp element method, verified against a CAA benchmark case. It is found that the squared sound pressure varies approximately as the 5th power of the Mach number, with slight changes depending on the observer location. This scaling law can give a better sound prediction than the flat-plate theory for thicker airfoils. In addition, another prediction method, based on the flat-plate theory and the CAA simulation, has been proposed to give better predictions than the scaling law for thicker airfoils.
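
To illustrate the reported scaling, if the mean-square sound pressure varies approximately as the 5th power of the Mach number, the expected level difference between two Mach numbers follows directly. The sketch below applies that relation to assumed example values; the numbers are not results from the study.

```python
# Sound-pressure-level change implied by a p^2 ~ M^5 scaling law.
# Example Mach numbers below are placeholders, not values from the study.
import math

def delta_spl_db(m_ref, m_new, exponent=5.0):
    """SPL difference in dB if squared sound pressure scales as M**exponent."""
    return 10.0 * exponent * math.log10(m_new / m_ref)

print(f"M 0.3 -> 0.5: {delta_spl_db(0.3, 0.5):+.1f} dB")   # about +11.1 dB
print(f"M 0.3 -> 0.6: {delta_spl_db(0.3, 0.6):+.1f} dB")   # about +15.1 dB
```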

Keywords: aeroacoustics, gust-airfoil interaction, CFD, CAA

Procedia PDF Downloads 78
596 Study on the Impact of Windows Location on Occupancy Thermal Comfort by Computational Fluid Dynamics (CFD) Simulation

Authors: Farhan E Shafrin, Khandaker Shabbir Ahmed

Abstract:

Natural ventilation strategies continue to be a key alternative to costly mechanical ventilation systems, especially in healthcare facilities, due to increasing energy issues in developing countries, including Bangladesh. In addition, overcrowding and insufficient ventilation remain significant causes of thermal discomfort and hospital infection in Bangladesh. With proper locations of the inlet and outlet windows, uniform flow is possible in the occupancy area, helping achieve thermal comfort. The window locations also determine the airflow pattern of the ward, which reduces the movement of contaminated air. This paper aims to establish a relationship between the location of the windows and the thermal comfort of the occupants in a naturally ventilated hospital ward. It identifies the opening and ventilation variables that are interrelated in ways that enhance or limit the health and thermal comfort of occupants. The study conducts a full-scale experiment in one of the naturally ventilated wards of a primary health care hospital in Manikganj, Dhaka. CFD simulation is used to explore the performance of various opening positions in terms of ventilation efficiency and thermal comfort in the study area. The results indicate that the location of the openings in the hospital ward has a significant impact on the thermal comfort of the occupants and the airflow pattern inside the ward. The findings can contribute to the design of naturally ventilated hospital wards by identifying and predicting solutions related to the occupants' thermal comfort.

Keywords: CFD simulation, hospital ward, natural ventilation, thermal comfort, window location

Procedia PDF Downloads 196
595 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed using both traditional and deep learning methodologies, as well as a variety of other approaches. Deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. In this study, a method for single image super-resolution was developed that utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 105
594 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels

Authors: Tal Remez, Or Litany, Alex Bronstein

Abstract:

The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.

Keywords: binary pixels, maximum likelihood, neural networks, sparse coding

Procedia PDF Downloads 201
593 Arithmetic Operations Based on Double Base Number Systems

Authors: K. Sanjayani, C. Saraswathy, S. Sreenivasan, S. Sudhahar, D. Suganya, K. S. Neelukumari, N. Vijayarangan

Abstract:

The Double Base Number System (DBNS) is an emerging system for representing a number using two bases, namely 2 and 3, with applications in Elliptic Curve Cryptography (ECC) and the Digital Signature Algorithm (DSA). The conventional binary representation uses only base 2. DBNS uses an approximation algorithm, namely the greedy algorithm. With this algorithm, the number of digits required to represent a large number is smaller than with the standard base-2 binary method. Hence, computational speed is increased and time is reduced. The standard binary method uses the binary digits 0 and 1 to represent a number, whereas the DBNS method uses the digit 1 alone to represent any number (canonical form). The greedy algorithm offers two ways to represent a number: one uses only positive summands and the other uses both positive and negative summands. In this paper, such arithmetic operations are used for elliptic curve cryptography. The elliptic curve discrete logarithm problem is the foundation of most day-to-day elliptic curve cryptography and appears to be considerably harder than the ordinary discrete logarithm problem. In the elliptic curve digital signature algorithm, key generation requires 160 bits of data when the standard binary representation is used, whereas the number of bits required to generate the key can be reduced with the help of the double base number representation. In this paper, a new technique is proposed to generate the key during encryption and extract it during decryption.
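
A minimal sketch of the greedy double-base decomposition (positive summands only) is given below; the signed variant and the application to elliptic-curve scalar multiplication are not shown.

```python
# Greedy double-base (2, 3) decomposition: repeatedly subtract the largest
# term 2^a * 3^b not exceeding the remainder, until the remainder is zero.
def largest_2_3_term(n):
    best = (1, 0, 0)                       # (value, a, b)
    p3, b = 1, 0
    while p3 <= n:
        a = (n // p3).bit_length() - 1     # largest a with 2^a * 3^b <= n
        value = (1 << a) * p3
        if value > best[0]:
            best = (value, a, b)
        p3 *= 3
        b += 1
    return best

def dbns_greedy(n):
    terms = []
    while n > 0:
        value, a, b = largest_2_3_term(n)
        terms.append((a, b))
        n -= value
    return terms

# Example: 127 = 2^2*3^3 + 2*3^2 + 1  ->  [(2, 3), (1, 2), (0, 0)]
terms = dbns_greedy(127)
print(terms, "->", sum((2 ** a) * (3 ** b) for a, b in terms))
```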

Keywords: cryptography, double base number system, elliptic curve cryptography, elliptic curve digital signature algorithm

Procedia PDF Downloads 396
592 A Mathematical Study of Magnetic Field, Heat Transfer and Brownian Motion of Nanofluid over a Nonlinear Stretching Sheet

Authors: Madhu Aneja, Sapna Sharma

Abstract:

The thermal conductivity of ordinary heat transfer fluids is not adequate to meet today's cooling rate requirements. Nanoparticles have been shown to increase the thermal conductivity and convective heat transfer of base fluids. One of the possible mechanisms for the anomalous increase in the thermal conductivity of nanofluids is the Brownian motion of the nanoparticles in the base fluid. In this paper, the natural convection of an incompressible nanofluid over a nonlinearly stretching sheet in the presence of a magnetic field is studied. The flow and heat transfer induced by stretching sheets are important in the study of extrusion processes and are a subject of considerable interest in the contemporary literature. Appropriate similarity variables are used to transform the governing nonlinear partial differential equations into a system of nonlinear ordinary (similarity) differential equations. For computational purposes, the finite element method is used. The effective thermal conductivity and viscosity of the nanofluid are calculated by the KKL (Koo-Kleinstreuer-Li) correlation, in which the effect of Brownian motion on thermal conductivity is considered. The effects of the important parameters, i.e., the nonlinearity parameter, volume fraction, Hartmann number, and heat source parameter, on velocity and temperature are studied. Skin friction and heat transfer coefficients are also calculated for the parameters concerned.
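
For readers unfamiliar with the similarity-equation step, the sketch below solves the classical hydrodynamic limit of the stretching-sheet boundary layer (Crane's problem, f''' + f f'' - f'^2 = 0) with a boundary-value solver. The paper's full nanofluid system, with the magnetic field, KKL properties, and the energy equation, is not reproduced, and the domain truncation value is an assumption.

```python
# Similarity equation for flow over a linearly stretching sheet (classical
# hydrodynamic limit): f''' + f f'' - f'^2 = 0, f(0)=0, f'(0)=1, f'(inf)=0.
# The exact solution is f'(eta) = exp(-eta), which the BVP solver should recover.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    f, fp, fpp = y
    return np.vstack((fp, fpp, fp ** 2 - f * fpp))

def bc(ya, yb):
    return np.array([ya[0], ya[1] - 1.0, yb[1]])

eta = np.linspace(0.0, 10.0, 101)            # truncated "infinity" (assumed)
y0 = np.zeros((3, eta.size))
y0[1] = np.exp(-eta)                         # reasonable initial guess

sol = solve_bvp(rhs, bc, eta, y0)
print("f''(0) =", sol.y[2, 0], "(exact: -1)")   # wall shear parameter
```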

Keywords: Brownian motion, convection, finite element method, magnetic field, nanofluid, stretching sheet

Procedia PDF Downloads 218
591 Prediction of the Aerodynamic Stall of a Helicopter’s Main Rotor Using a Computational Fluid Dynamics Analysis

Authors: Assel Thami Lahlou, Soufiane Stouti, Ismail Lagrat, Hamid Mounir, Oussama Bouazaoui

Abstract:

The purpose of this research work is to predict the stall of the helicopter by finding the minimum and maximum values that the pitch angle can take in order to fly in a hover condition. The stall of a helicopter in hover occurs when the pitch angle is too small to generate the thrust required to support its weight, or when the critical angle of attack that gives maximum lift is reached or exceeded. In order to find the minimum pitch angle, a 3D CFD simulation was performed using ANSYS Fluent as the CFD solver. We started with a small value of the pitch angle θ and kept increasing it until we found the thrust coefficient required to fly in a hover state and support the weight of the helicopter. For the CFD analysis, the Multiple Reference Frame (MRF) method with the k-ε turbulence model was used to study the 3D flow around the rotor for θmin. In addition, a 2D simulation of the NACA 0012 airfoil was carried out with a velocity inlet Vin = ΩR/2, using the Spalart-Allmaras turbulence model, to visualize the flow at the spanwise location R/2 of the rotor disk. Finding the critical angle of attack at this position gives the ability to predict stall in hover flight. The results obtained are presented later in the article. This study is useful in analyzing the limitations of the helicopter's main rotor and thus in predicting accidents that can lead to considerable damage.

Keywords: aerodynamic, CFD, helicopter, stall, blades, main rotor, minimum pitch angle, maximum pitch angle

Procedia PDF Downloads 81
590 A Study of Laminar Natural Convection in Annular Spaces between Differentially Heated Horizontal Circular Cylinders Filled with Non-Newtonian Nano Fluids

Authors: Behzad Ahdiharab, Senol Baskaya, Tamer Calisir

Abstract:

Heat exchangers are among the most widely used systems in factories, refineries, etc. In this study, natural convection heat transfer using nanofluids between two cylinders is numerically investigated. The inner and outer cylinders are kept at constant temperatures. One of the most important assumptions in the study is that the working fluid is non-Newtonian. In recent years, the use of nanofluids in industrial applications has increased profoundly. In this study, non-Newtonian nanofluids containing metal particles with high heat transfer coefficients have been used. All fluid properties have been calculated under the assumption of homogeneity. Solutions have been obtained under unsteady conditions, the base fluid was water, and the effects of various parameters on heat transfer have been investigated. These parameters are the Rayleigh number (10³ < Ra < 10⁶), power-law index (0.6 < n < 1.4), aspect ratio (0 < AR < 0.8), nanoparticle composition, horizontal and vertical displacement of the inner cylinder, rotation of the inner cylinder, and volume fraction of nanoparticles. Results such as the average and local Nusselt number variations on the inner cylinder, temperature contours, and streamlines are presented and discussed in detail. The validation study performed showed very good agreement between the present results and those from the open literature. It was found that the heat transfer is always affected by the investigated parameters; however, the degree to which it is affected varies over a wide range.

Keywords: heat transfer, circular space, non-Newtonian, nano fluid, computational fluid dynamics

Procedia PDF Downloads 415
589 Low Complexity Carrier Frequency Offset Estimation for Cooperative Orthogonal Frequency Division Multiplexing Communication Systems without Cyclic Prefix

Authors: Tsui-Tsai Lin

Abstract:

Cooperative orthogonal frequency division multiplexing (OFDM) transmission, which offers better connectivity, expanded coverage, and resistance to frequency-selective fading, has become a powerful solution for the physical layer in wireless communications. However, such a hybrid scheme suffers from the carrier frequency offset (CFO) effects inherited from OFDM-based systems, which lead to a significant degradation in performance. In addition, inserting a cyclic prefix (CP) at the head of each symbol block to combat inter-symbol interference reduces spectral efficiency. The design of CFO estimation for a cooperative OFDM system without a CP remains an open problem. This motivates us to develop a low-complexity CFO estimator for the cooperative OFDM decode-and-forward (DF) communication system without a CP over a multipath fading channel. Specifically, using a block-type pilot, the CFO estimate is first derived in accordance with the least-squares criterion. Reliable performance can be obtained through an exhaustive two-dimensional (2D) search, at the cost of heavy computational complexity. As a remedy, an alternative solution realized with an iterative approach is proposed for the CFO estimation. In contrast to the 2D-search estimator, the iterative method enjoys substantially reduced implementation complexity without sacrificing estimation performance. Computer simulations are presented to demonstrate the efficacy of the proposed CFO estimator.
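
As a simplified, single-link illustration of the trade-off between an exhaustive search and iterative refinement (not the paper's two-dimensional estimator for the cooperative DF system), the sketch below estimates a normalized CFO from a known pilot over an assumed flat channel: a coarse grid search on a correlation metric, followed by a golden-section refinement.

```python
# Toy CFO estimation for a known pilot over a flat channel:
# coarse grid search on a correlation metric, then golden-section refinement.
import numpy as np

rng = np.random.default_rng(3)
N = 64
pilot = (rng.integers(0, 2, N) * 2 - 1) + 1j * (rng.integers(0, 2, N) * 2 - 1)  # QPSK pilot
eps_true, snr_db = 0.23, 15.0                  # normalized CFO and SNR (assumed)

n = np.arange(N)
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(10 ** (-snr_db / 10))
r = pilot * np.exp(1j * 2 * np.pi * eps_true * n / N) + noise   # received pilot

def metric(eps):
    # Correlation after de-rotation; maximizing it minimizes the LS residual
    # for an unknown complex flat-channel gain.
    return np.abs(np.vdot(pilot, r * np.exp(-1j * 2 * np.pi * eps * n / N))) ** 2

# Coarse exhaustive search ...
grid = np.linspace(-0.5, 0.5, 201)
eps0 = grid[np.argmax([metric(e) for e in grid])]

# ... followed by a cheap iterative (golden-section) refinement around eps0.
a, b, g = eps0 - 0.01, eps0 + 0.01, (np.sqrt(5) - 1) / 2
for _ in range(30):
    c, d = b - g * (b - a), a + g * (b - a)
    a, b = (a, d) if metric(c) > metric(d) else (c, b)
print("estimated CFO:", (a + b) / 2, " true:", eps_true)
```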

Keywords: cooperative transmission, orthogonal frequency division multiplexing (OFDM), carrier frequency offset, iteration

Procedia PDF Downloads 265
588 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows

Authors: J. P. Panda, K. Sasmal, H. V. Warrior

Abstract:

Eddy viscosity models in turbulence modeling can be broadly classified as linear and nonlinear models. Linear formulations are simple and require fewer computational resources but have the disadvantage that they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for the Reynolds stress anisotropy is adopted for the formulation of the eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows where only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX'76) have been investigated. The new model predictions match well with the observational data and compare favorably with the predictions of the two-equation k-epsilon model. The proposed model can easily be incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. Practically, this model can be implemented in coastal regions where transverse shear induces higher vorticity, and for the prediction of flow in estuaries and lakes, where the depth is comparatively small. The model predictions of marine turbulence and other related quantities (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.

Keywords: eddy viscosity, turbulence modeling, GOTM, CFD

Procedia PDF Downloads 202
587 Adsorption and Selective Determination Ametryne in Food Sample Using of Magnetically Separable Molecular Imprinted Polymers

Authors: Sajjad Hussain, Sabir Khan, Maria Del Pilar Taboada Sotomayor

Abstract:

This work demonstrates the synthesis of magnetic molecularly imprinted polymers (MMIPs) for the determination of a selected pesticide (ametryne) using high performance liquid chromatography (HPLC). Computational simulation can assist the choice of the most suitable monomer for the synthesis of the polymers. The MMIPs were polymerized on the surface of Fe3O4@SiO2 magnetic nanoparticles (MNPs) using 2-vinylpyridine as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, and 2,2'-azobisisobutyronitrile (AIBN) as the radical initiator. A magnetic non-imprinted polymer (MNIP) was also prepared under the same conditions without the analyte. The MMIPs were characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis, and Fourier transform infrared spectroscopy (FTIR). Pseudo-first-order and pseudo-second-order models were applied to study the adsorption kinetics, and it was found that the adsorption process followed the pseudo-first-order kinetic model. The adsorption equilibrium data were fitted to the Freundlich and Langmuir isotherms, and the sorption equilibrium process was well described by the Langmuir isotherm model. The selectivity coefficients (α) of the MMIPs for ametryne with respect to atrazine, ciprofloxacin, and folic acid were 4.28, 12.32, and 14.53, respectively. Spiked recoveries ranging between 91.33% and 106.80% were obtained. The results showed the high affinity and selectivity of the MMIPs for the pesticide ametryne in the food samples.
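
For the isotherm-fitting step, a minimal sketch using nonlinear least squares is shown below; the equilibrium data, initial guesses, and units are placeholders rather than the measurements reported in the paper.

```python
# Fitting the Langmuir isotherm q = q_max * K * C / (1 + K * C) to equilibrium
# adsorption data with nonlinear least squares; data values are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, K):
    return q_max * K * C / (1.0 + K * C)

# Assumed example data: Ce [mg/L] and qe [mg/g].
Ce = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
qe = np.array([8.2, 15.1, 22.4, 29.8, 35.6, 38.9])

(q_max, K), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.1])
print(f"q_max = {q_max:.1f} mg/g, K = {K:.3f} L/mg")
```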

Keywords: molecularly imprinted polymer, pesticides, magnetic nanoparticles, adsorption

Procedia PDF Downloads 486
586 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as the Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small-to-moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
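
As a generic illustration of the Fisher scoring iteration for nonlinear regression (with Gaussian errors it reduces to a Gauss-Newton update), the sketch below fits an assumed exponential mean function to simulated data; the composite severity distributions studied in the paper are not implemented here.

```python
# Fisher scoring for a nonlinear regression mean function mu(x; a, b) = a*exp(b*x)
# with Gaussian errors, where the iteration reduces to Gauss-Newton:
# theta <- theta + (J'J)^-1 J' (y - mu).
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 2.0, 60)
a_true, b_true = 2.0, 1.3
y = a_true * np.exp(b_true * x) + rng.normal(scale=0.3, size=x.size)  # simulated data

def mu(theta):
    a, b = theta
    return a * np.exp(b * x)

def jacobian(theta):
    a, b = theta
    e = np.exp(b * x)
    return np.column_stack((e, a * x * e))        # d(mu)/da, d(mu)/db

theta = np.array([1.0, 1.0])                      # assumed starting values
for _ in range(25):
    J, resid = jacobian(theta), y - mu(theta)
    step = np.linalg.solve(J.T @ J, J.T @ resid)  # expected-information update
    theta = theta + step
    if np.linalg.norm(step) < 1e-10:
        break
print("MLE of (a, b):", theta)
```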

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 33