Search results for: pairing computation
134 A Geometrical Multiscale Approach to Blood Flow Simulation: Coupling 2-D Navier-Stokes and 0-D Lumped Parameter Models
Authors: Azadeh Jafari, Robert G. Owens
Abstract:
In this study, a geometrical multiscale approach, meaning the coupling of the 2-D Navier-Stokes equations and constitutive equations with 0-D lumped parameter models, is investigated. A multiscale approach suggests a natural way of coupling detailed local models (in the flow domain) with coarser models able to describe the dynamics over a large part, or even the whole, of the cardiovascular system at acceptable computational cost. We introduce a new velocity correction scheme to decouple the velocity computation from the pressure computation. To evaluate the capability of the new scheme, a comparison has been performed between the results obtained with Neumann outflow boundary conditions on the velocity and Dirichlet outflow boundary conditions on the pressure, and those obtained by coupling with the lumped parameter model. Comprehensive studies have been carried out on the sensitivity of the numerical scheme to the initial conditions, elasticity, and number of spectral modes. Improvement of the computational algorithm with stable convergence has been demonstrated for at least moderate Weissenberg numbers. We comment on the mathematical properties of the reduced model, its limitations in yielding realistic and accurate numerical simulations, and its contribution to a better understanding of microvascular blood flow. We discuss the sophistication and reliability of multiscale models for computing correct boundary conditions at the outflow boundaries of a section of the cardiovascular system of interest. In this respect, the geometrical multiscale approach can be regarded as a new method for solving a class of biofluids problems whose application goes significantly beyond the one addressed in this work.
Keywords: geometrical multiscale models, haemorheology model, coupled 2-D Navier-Stokes 0-D lumped parameter modeling, computational fluid dynamics
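The abstract leaves the 0-D lumped parameter model unspecified; a common choice for cardiovascular outflow boundaries is the three-element Windkessel (RCR) circuit. The sketch below, with assumed parameter values, integrates the Windkessel ODE C·dPc/dt = Q − Pc/Rd to turn an outlet flow rate Q(t) into an outlet pressure P(t) = Pc + Rp·Q that a 2-D solver could apply as its Dirichlet pressure boundary condition.

```python
import numpy as np

# Three-element Windkessel (RCR) lumped parameter model -- a common 0-D
# outflow model in cardiovascular simulation (parameter values are assumed).
Rp, C, Rd = 0.03, 1.5, 0.8   # proximal resistance, compliance, distal resistance

def windkessel_pressure(t, Q, Pc0=0.0):
    """Integrate C*dPc/dt = Q - Pc/Rd with forward Euler; outlet P = Pc + Rp*Q."""
    Pc = np.empty_like(Q)
    Pc[0] = Pc0
    for n in range(len(t) - 1):
        dt = t[n + 1] - t[n]
        Pc[n + 1] = Pc[n] + dt * (Q[n] - Pc[n] / Rd) / C
    return Pc + Rp * Q

t = np.linspace(0.0, 2.0, 2001)                       # two cardiac cycles of 1 s
Q = 60 + 40 * np.maximum(np.sin(2 * np.pi * t), 0.0)  # idealized pulsatile flow rate
P = windkessel_pressure(t, Q)                         # pressure fed back to the 2-D outlet
```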
133 Quantitative Analysis of Nutrient Inflow from River and Groundwater to Imazu Bay in Fukuoka, Japan
Authors: Keisuke Konishi, Yoshinari Hiroshiro, Kento Terashima, Atsushi Tsutsumi
Abstract:
Imazu Bay plays an important role for endangered species such as horseshoe crabs and black-faced spoonbills that stay in the bay for spawning or for passing the winter. However, the bay is semi-enclosed with slow water exchange, which could lead to eutrophication under conditions of excess nutrient inflow. Quantification of nutrient inflow is therefore of great importance. Analyses of nutrient inflow to bays generally consider only the river, but groundwater should not be ignored if more accurate results are sought. The main objective of this study is to estimate the nutrient inflow from river and groundwater to Imazu Bay by analyzing the water budget in the Zuibaiji River Basin and the loads of T-N, T-P, NO3-N and NH4-N. The water budget computation in the basin is performed using a groundwater recharge model and a quasi three-dimensional two-phase groundwater flow model, and multiplying the measured nutrient concentrations by the computed discharge gives the total nutrient inflow to the bay. In addition, to evaluate the nutrient inflow to the bay, the result is compared with nutrient inflow from geologically similar river basins. The results show that the discharge is 3.50×10⁷ m³/year from the river and 1.04×10⁷ m³/year from groundwater. The submarine groundwater discharge accounts for approximately 23% of the total discharge, which is large compared to the other river basins. It is also revealed that the total nutrient inflow is not particularly large. The sum of NO3-N and NH4-N loadings from groundwater is less than 10% of that from the river because of denitrification in groundwater. The Shin Seibu Sewage Treatment Plant located below the observation points discharges treated water at 15,400 m³/day and plans to increase this amount. However, the concentrations of T-N and T-P in the treated water are 3.9 mg/L and 0.19 mg/L, so the plant does not contribute greatly to eutrophication.
Keywords: eutrophication, groundwater recharge model, nutrient inflow, quasi three-dimensional two-phase groundwater flow model, submarine groundwater discharge
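As a minimal illustration of the load computation described above, the annual load is the product of discharge and measured concentration, and the submarine groundwater fraction follows directly from the two discharges. The discharges are those reported in the abstract; the concentration value is a placeholder, not a figure from the study.

```python
# Discharges reported in the abstract (m^3/year)
Q_river = 3.50e7
Q_groundwater = 1.04e7

# Fraction of total discharge contributed by submarine groundwater discharge
sgd_fraction = Q_groundwater / (Q_river + Q_groundwater)
print(f"SGD fraction: {sgd_fraction:.1%}")   # ~22.9%, i.e. the ~23% quoted

# Annual load = discharge x concentration (the concentration is a placeholder):
# mg/L * (m^3/yr * 1000 L/m^3) -> mg/yr, divided by 1e6 -> kg/yr
c_TN_mg_per_L = 1.2
load_TN_kg_per_yr = Q_river * 1000 * c_TN_mg_per_L / 1e6
```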
132 On the Solution of Fractional-Order Dynamical Systems Endowed with Block Hybrid Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper presents a distinct approach to solving fractional dynamical systems using hybrid block methods (HBMs). Fractional calculus extends the concept of derivatives and integrals to non-integer orders and finds increasing application in fields such as physics, engineering, and finance. However, traditional numerical techniques often struggle to accurately capture the complex behaviors exhibited by these systems. To address this challenge, we develop HBMs that integrate single-step and multi-step methods, enabling the simultaneous computation of multiple solution points while maintaining high accuracy. Our approach employs polynomial interpolation and collocation techniques to derive a system of equations that effectively models the dynamics of fractional systems. We also directly incorporate boundary and initial conditions into the formulation, enhancing the stability and convergence properties of the numerical solution. An adaptive step-size mechanism is introduced to optimize performance based on the local behavior of the solution. Extensive numerical simulations are conducted to evaluate the proposed methods, demonstrating significant improvements in accuracy and efficiency compared to traditional numerical approaches. The results indicate that our hybrid block methods are robust and versatile, making them suitable for a wide range of applications involving fractional dynamical systems. This work contributes to the existing literature by providing an effective numerical framework for analyzing complex behaviors in fractional systems, thereby opening new avenues for research and practical implementation across various disciplines.
Keywords: fractional calculus, numerical simulation, stability and convergence, adaptive step-size mechanism, collocation methods
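The hybrid block method itself is not specified in enough detail here to reproduce; for orientation, the sketch below implements the simpler explicit rectangle-rule (fractional Euler) scheme for a Caputo initial value problem D^α y = f(t, y), the kind of baseline solver such HBMs are typically compared against.

```python
import math
import numpy as np

def fractional_euler(f, y0, alpha, t_end, n_steps):
    """Explicit rectangle-rule scheme for the Caputo IVP D^alpha y = f(t, y),
    0 < alpha <= 1, via its Volterra integral form:
    y_{n+1} = y0 + h^alpha / Gamma(alpha+1) * sum_j w_{j,n} f(t_j, y_j),
    with weights w_{j,n} = (n+1-j)^alpha - (n-j)^alpha."""
    h = t_end / n_steps
    t = np.linspace(0.0, t_end, n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    fvals = np.empty(n_steps + 1)
    c = h**alpha / math.gamma(alpha + 1)
    for n in range(n_steps):
        fvals[n] = f(t[n], y[n])
        j = np.arange(n + 1)
        w = (n + 1 - j)**alpha - (n - j)**alpha
        y[n + 1] = y0 + c * np.dot(w, fvals[:n + 1])
    return t, y

# Example: D^0.8 y = -y, y(0) = 1 (exact solution is a Mittag-Leffler function)
t, y = fractional_euler(lambda t, y: -y, 1.0, 0.8, 5.0, 500)
```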
131 An Unbiased Profiling of Immune Repertoire via Sequencing and Analyzing T-Cell Receptor Genes
Authors: Yi-Lin Chen, Sheng-Jou Hung, Tsunglin Liu
Abstract:
The adaptive immune system recognizes a wide range of antigens by expressing a large number of structurally distinct T cell and B cell receptor genes. The distinct receptor genes arise from complex rearrangements called V(D)J recombination and constitute the immune repertoire. A common method of profiling the immune repertoire is to amplify recombined receptor genes using multiple primers and high-throughput sequencing. This multiplex-PCR approach is efficient; however, the resulting repertoire can be distorted because of primer bias. To eliminate primer bias, 5' RACE is an alternative amplification approach. However, the application of the RACE approach is limited by its low efficiency (i.e., the majority of the data are non-regular receptor sequences, e.g., containing intronic segments) and by the lack of convenient tools for analysis. We propose a computational tool that can correctly identify non-regular receptor sequences in RACE data by aligning receptor sequences against the whole gene instead of only the exon regions, as done in all other tools. Using our tool, the remaining regular data allow for an accurate profiling of the immune repertoire. In addition, a RACE approach is improved to yield a higher fraction of regular T-cell receptor sequences. Finally, we quantify the degree of primer bias of a multiplex-PCR approach by comparing it to the RACE approach. The results reveal significant differences in the frequency of VJ combinations between the two approaches. Together, we provide a new experimental and computational pipeline for an unbiased profiling of the immune repertoire. As immune repertoire profiling has many applications, e.g., tracing bacterial and viral infection, detection of T cell lymphoma and minimal residual disease, and monitoring cancer immunotherapy, our work should benefit scientists who are interested in these applications.
Keywords: immune repertoire, T-cell receptor, 5' RACE, high-throughput sequencing, sequence alignment
130 Lightweight and Seamless Distributed Scheme for the Smart Home
Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro
Abstract:
Security of the smart home, in terms of behavior activity pattern recognition, is a distinct issue compared with the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., surveillance camera, smartphone, light detection sensor, etc.) is compromised, an adversary can gain access to that specific device and can disrupt daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become involved and generate deadlock. Therefore, an effective and suitable security solution is required for smart home architecture. This paper proposes a seamless distributed scheme that fortifies wireless devices of low computational capability for secure communication. The proposed scheme is based on a lightweight key-session process that upholds a cryptographic link for each trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Analysis of experiments reveals that the proposed scheme strengthens the devices against device-seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
Keywords: authentication, key-session, security, wireless sensors
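The key-session process is described only at a high level; the sketch below shows one plausible lightweight construction, assuming an HMAC-based authentication token and a per-session key derived from a pre-shared device secret and fresh nonces. All names and message formats are illustrative, not the authors'.

```python
import hmac, hashlib, os

def make_token(device_id: bytes, shared_secret: bytes, nonce: bytes) -> bytes:
    """Authentication token: MAC over the device identity and a fresh nonce."""
    return hmac.new(shared_secret, device_id + nonce, hashlib.sha256).digest()

def derive_session_key(shared_secret: bytes, nonce_lcs: bytes, nonce_hcs: bytes) -> bytes:
    """Lightweight session key for the LCS<->HCS link, bound to both nonces."""
    return hmac.new(shared_secret, b"session" + nonce_lcs + nonce_hcs, hashlib.sha256).digest()

# LCS side: issue a token with a fresh nonce
secret = os.urandom(32)          # pre-shared with the service provider unit
n_lcs = os.urandom(16)
token = make_token(b"light-sensor-07", secret, n_lcs)

# HCS side: verify the token before establishing the session key
n_hcs = os.urandom(16)
assert hmac.compare_digest(token, make_token(b"light-sensor-07", secret, n_lcs))
k_session = derive_session_key(secret, n_lcs, n_hcs)
```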
129 Economical Transformer Selection Implementing Service Lifetime Cost
Authors: Bonginkosi A. Thango, Jacobus A. Jordaan, Agha F. Nnachi
Abstract:
In this day and age, there is widespread concern among governments across the globe to protect the environment from greenhouse gases, which absorb infrared radiation. As a result, solar photovoltaic (PV) electricity has been a rapidly growing renewable energy source and will eventually assume a prominent role in global energy generation. The selection and purchase of energy-efficient transformers that meet the operational requirements of solar photovoltaic generation plants then become part of the Independent Power Producers' (IPPs') investment plan of action. Taking this into account, this paper proposes a procedure that implements the intricate financial analysis necessitated to precisely evaluate the transformer service lifetime no-load and load loss factors. This procedure correctly incorporates the transformer service lifetime loss factors, resulting from a solar PV plant's sporadic generation profile and the related levelized cost of electricity, into the computation of the transformer's total ownership cost. The results are then critically compared with the conventional transformer total ownership cost unaccompanied by the emission costs, and demonstrate the significance of the sporadic energy generation nature of the solar PV plant for the total ownership cost. The findings indicate that the latter plays a crucial role for developers and Independent Power Producers (IPPs) in making the purchase decision during a tender bid where competing offers from different transformer manufacturers are evaluated. Additionally, a sensitivity analysis of the different factors involved in the transformer service lifetime cost is carried out; the factors examined include the levelized cost of electricity, the solar PV plant's generation modes, and the loading profile.
Keywords: solar photovoltaic plant, transformer, total ownership cost, loss factors
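The conventional total ownership cost referred to above is commonly written as TOC = bid price + A × no-load loss + B × load loss, where A and B capitalize the lifetime cost of core and winding losses. The sketch below computes A and B from an assumed energy price, discount rate, and service lifetime, with B weighted by a per-unit solar loading profile to reflect the sporadic generation the authors emphasise; all numbers are illustrative.

```python
import numpy as np

def pv_factor(discount_rate, years):
    """Present value of 1 currency unit per year over the service lifetime."""
    return (1 - (1 + discount_rate) ** -years) / discount_rate

energy_price = 0.10        # $/kWh (assumed levelized cost of electricity)
d, lifetime = 0.08, 25     # discount rate, service lifetime in years

# Hourly per-unit loading over one representative day of a solar PV plant
# (zero at night -- the 'sporadic' profile), assumed to repeat all year.
pu_load = np.array([0]*6 + [0.2, 0.5, 0.8, 0.95, 1.0, 1.0,
                            1.0, 0.95, 0.8, 0.5, 0.2, 0] + [0]*6)

A = energy_price * 8760 * pv_factor(d, lifetime)   # $/kW of no-load loss (on 24 h/day)
B = A * float(np.mean(pu_load ** 2))               # load loss scales with loading^2

def total_ownership_cost(bid_price, no_load_loss_kw, load_loss_kw):
    return bid_price + A * no_load_loss_kw + B * load_loss_kw

print(total_ownership_cost(bid_price=250_000, no_load_loss_kw=25, load_loss_kw=120))
```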
128 Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Authors: Ebrahim Ghasemi Ardi, Ai Bing Yu, Run Yu Yang
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, so that particle breakage and the resulting fragment size distribution can be determined simultaneously with the DEM simulation. The particle breakage is applied directly inside the DEM computation algorithm: if any breakage happens, the original particle is replaced with its daughters. In this way, the calculation proceeds on a newly updated particle list, which is very similar to the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)-powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary, and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss, and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. Such a phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
Keywords: particle bed, breakage models, breakage kinetics, discrete element method
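A minimal sketch of the replacement step described above: after each DEM timestep, any particle whose accumulated contact energy exceeds a breakage threshold is removed from the particle list and replaced by mass-conserving daughter fragments. The data structures and breakage criterion are assumed for illustration and are not the authors' code.

```python
import dataclasses, random

@dataclasses.dataclass
class Particle:
    x: float
    y: float
    z: float
    radius: float
    contact_energy: float = 0.0   # accumulated from the DEM contact model

def break_particle(p: Particle, n_daughters: int = 4) -> list[Particle]:
    """Replace a parent with equal-size daughters conserving total volume."""
    r_d = p.radius / n_daughters ** (1.0 / 3.0)   # volume: n * r_d^3 = r^3
    return [Particle(p.x + random.uniform(-0.5, 0.5) * p.radius,
                     p.y + random.uniform(-0.5, 0.5) * p.radius,
                     p.z + random.uniform(-0.5, 0.5) * p.radius,
                     r_d)
            for _ in range(n_daughters)]

def breakage_sweep(particles: list[Particle], threshold: float) -> list[Particle]:
    """Run after each DEM timestep: broken parents are swapped for daughters,
    and the updated list feeds directly back into contact detection."""
    updated = []
    for p in particles:
        if p.contact_energy > threshold:
            updated.extend(break_particle(p))
        else:
            updated.append(p)
    return updated
```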
127 Human Leukocyte Antigen Class 1 Phenotype Distribution and Analysis in Persons from Central Uganda with Active Tuberculosis and Latent Mycobacterium tuberculosis Infection
Authors: Helen K. Buteme, Rebecca Axelsson-Robertson, Moses L. Joloba, Henry W. Boom, Gunilla Kallenius, Markus Maeurer
Abstract:
Background: The Ugandan population is heavily affected by infectious diseases, and Human leukocyte antigen (HLA) diversity plays a crucial role in the host-pathogen interaction, affecting the rates of disease acquisition and outcome. Identifying HLA class 1 alleles and determining which alleles are associated with tuberculosis (TB) outcomes would help in screening individuals in TB endemic areas for susceptibility to TB and in predicting resistance or progression to TB, which would inevitably lead to better clinical management of TB. Aims: To determine the HLA class 1 phenotype distribution in a Ugandan TB cohort and to establish the relationship between these phenotypes and active and latent TB. Methods: Blood samples were drawn from 32 HIV-negative individuals with active TB and 45 HIV-negative individuals with latent MTB infection. DNA was extracted from the blood samples, and the DNA samples were HLA typed by the polymerase chain reaction sequence-specific primer method. The allelic frequencies were determined by direct count. Results: HLA-A*02, A*01, A*74, A*30, B*15, B*58, C*07, C*03 and C*04 were the dominant phenotypes in this Ugandan cohort. There were differences in the distribution of HLA types between the individuals with active TB and the individuals with LTBI, with only the HLA-A*03 allele showing a statistically significant difference (p=0.0136). However, after FDR computation, the corresponding q-value is above the expected proportion of false discoveries (q-value 0.2176). Key findings: We identified a number of HLA class I alleles in a population from Central Uganda, which will enable us to carry out a functional characterization of CD8+ T-cell mediated immune responses to MTB. Our results also suggest that there may be a positive association between the HLA-A*03 allele and TB, implying that individuals with the HLA-A*03 allele are at a higher risk of developing active TB.
Keywords: HLA, phenotype, tuberculosis, Uganda
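The q-value quoted above comes from a false discovery rate correction over the family of allele comparisons; a minimal sketch of the standard Benjamini-Hochberg computation follows (the p-values other than the 0.0136 for HLA-A*03 are placeholders, so the printed q-value will not reproduce the 0.2176 from the study).

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH q-values: q_(i) = min over j >= i of p_(j) * m / j."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest rank downwards
    q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
    q = np.empty(m)
    q[order] = np.clip(q_sorted, 0, 1)
    return q

# p-values for the tested alleles; 0.0136 is HLA-A*03 (others are placeholders)
pvals = [0.0136, 0.21, 0.35, 0.48, 0.52, 0.61, 0.74, 0.80, 0.88, 0.95]
print(benjamini_hochberg(pvals)[0])   # q-value for HLA-A*03
```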
126 Comparison of Modulus from Repeated Plate Load Test and Resonant Column Test for Compaction Control of Trackbed Foundation
Authors: JinWoog Lee, SeongHyeok Lee, ChanYong Choi, Yujin Lim, Hojin Cho
Abstract:
The primary function of the trackbed in a conventional railway track system is to decrease the stresses in the subgrade to an acceptable level; a properly designed trackbed layer performs this task adequately. Many design procedures use assumed and/or critical stiffness values of the layers, obtained mostly in the field, to calculate an appropriate thickness of the sublayers of the trackbed foundation. However, those stiffness values do not account clearly and precisely for the strain levels in the layers. This study proposes a method of computing stiffness that can account for the strain level in the layers of the trackbed foundation, in order to provide properly selected design values of the layer stiffnesses. Since shear modulus values depend on the shear strain level, the strain levels generated in the subgrade of the trackbed under wheel loading and below the plate of a Repeated Plate Bearing Test (RPBT) are investigated with the finite element analysis programs ABAQUS and PLAXIS. The strain levels generated in the subgrade from RPBT are compared to values from RC (Resonant Column) tests after consideration of strain levels and stresses. To compare the shear modulus G obtained from RC tests with the stiffness modulus Ev2 obtained from RPBT in the field, a large number of mid-size RC tests in the laboratory and RPBT in the field were performed. It was found in this study that there is a large difference in stiffness modulus when the converted Ev2 values are compared to those from RC tests. It is verified that a larger number of precise loading steps is necessary to construct nonlinear curves from RPBT in order to obtain correct Ev2 values at the proper strain levels.
Keywords: modulus, plate load test, resonant column test, trackbed foundation
125 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Much of the information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensions need to be reduced. One of the best techniques for reducing data size is feature selection, whose aim is to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information for our work. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of the datasets' dimensions.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
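A minimal sketch of this four-step pipeline using scikit-learn: features are grouped by similarity (here, k-means on the transposed feature matrix), one representative feature is kept per cluster, and an SVM is trained on the reduced set. The cluster count and the representative-selection rule are assumptions, since the abstract does not fix them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))              # stand-in for TF-IDF news features
y = (X[:, :5].sum(axis=1) > 0).astype(int)    # stand-in for fake/real labels

# Steps 1-2: feature similarity, then grouping -- k-means on X.T clusters
# features whose values behave similarly across documents.
k = 20
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)

# Step 3: keep one representative feature per cluster (highest variance here).
selected = [int(np.flatnonzero(clusters == c)[np.argmax(X[:, clusters == c].var(axis=0))])
            for c in range(k)]

# Step 4: classify fake vs. real news with an SVM on the reduced feature set.
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```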
124 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. Results revealed that, generally, the GDM optimisation algorithm, with its adaptive learning capability, used a relatively shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, on average, model performance efficiencies using the coefficient of efficiency (CE) statistic were Br: 98% and 94%; LM: 98% and 95%; and GDM: 96% and 96%, for the training and validation phases respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, it is imperative to state that the adoption of ANN for real-time forecasting should employ training algorithms that do not have computational overhead like LM, which requires computation of the Hessian matrix, protracted time, and sensitivity to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and quality of the forecast as well as mitigation of network overfitting. On the whole, it is recommended that evaluation should consider the implications of (i) data quality and quantity and (ii) transfer functions for the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
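The coefficient of efficiency cited above is the Nash-Sutcliffe statistic, CE = 1 − Σ(O−S)²/Σ(O−Ō)²; for reference, a minimal computation alongside two of the named relative-error measures (the streamflow values are placeholders):

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe CE = 1 - sum((O-S)^2) / sum((O-mean(O))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    return float(np.mean(np.abs(np.asarray(obs, float) - np.asarray(sim, float))))

def mape(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs((obs - sim) / obs))) * 100.0

observed  = np.array([12.1, 15.4, 9.8, 30.2, 22.5])   # placeholder streamflows
simulated = np.array([11.7, 16.0, 10.5, 27.9, 23.1])
print(coefficient_of_efficiency(observed, simulated), mae(observed, simulated))
```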
123 Multivariate Analysis on Water Quality Attributes Using Master-Slave Neural Network Model
Authors: A. Clementking, C. Jothi Venkateswaran
Abstract:
Mathematical and computational functionalities such as descriptive mining, optimization, and prediction are employed to support natural resource planning. Optimization techniques are adopted for water quality prediction and for determining the influence of its attributes. Water properties are altered when one water resource is merged with another. This work aimed to predict the connectivity of water resource distribution, with respect to water quality and sediment, using an innovative proposed master-slave back-propagation neural network model. The experimental results are arrived at by collecting water quality attributes, computing a water quality index, designing and developing a neural network model to determine water quality and sediment, applying the master-slave back-propagation neural network model to determine variations in water quality and sediment attributes between the water resources, and making recommendations for connectivity. Homogeneous and parallel biochemical reactions influence water quality and sediment while water is distributed from one location to another. Therefore, an innovative master-slave neural network model [M(9:9:2)::S(9:9:2)] was designed and developed to predict the attribute variations. The training dataset is given as input to the master model, and its maximum weights are assigned as input to the slave model to predict the water quality. The developed master-slave model predicted physicochemical attribute weight variations for 85% to 90% of water quality as target values. The sediment level variations were also predicted, from 0.01 to 0.05% of each water quality percentage. The model produced significant variations in physicochemical attribute weights. According to the predicted experimental weight variations on the training dataset, effective recommendations are made for connecting the different resources.
Keywords: master-slave back propagation neural network model (MSBPNNM), water quality analysis, multivariate analysis, environmental mining
122 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, the security of identification systems can be easily attacked using various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The types of CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach to compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' parameter counts and mean average error rates, to identify the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
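A minimal Keras-style sketch of the search described above: the same CNN backbone is recompiled under each loss-optimizer pair and scored on a held-out split, so only the training objective varies between runs. The tiny backbone and random arrays stand in for AlexNet/VGGNet/ResNet and the LivDet 2017 subset; Center Loss is omitted because it is not a built-in Keras loss.

```python
import itertools
import numpy as np
import tensorflow as tf

# Stand-ins for the LivDet 2017 subset (live vs. spoof fingerprint patches)
x_train = np.random.rand(256, 96, 96, 1); y_train = np.random.randint(0, 2, 256)
x_test  = np.random.rand(64, 96, 96, 1);  y_test  = np.random.randint(0, 2, 64)

def backbone():
    """Tiny CNN standing in for AlexNet/VGG/ResNet in this sketch."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(96, 96, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

losses = ["binary_crossentropy", "hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

results = {}
for loss, opt in itertools.product(losses, optimizers):
    model = backbone()
    model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, verbose=0)
    results[(loss, opt)] = model.evaluate(x_test, y_test, verbose=0)[1]

best = max(results, key=results.get)   # best loss-optimizer pair by accuracy
```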
121 Free Energy Computation of A G-Quadruplex-Ligand Structure: A Classical Molecular Dynamics and Metadynamics Simulation Study
Authors: Juan Antonio Mondragon Sanchez, Ruben Santamaria
Abstract:
The DNA G-quadruplex is a four-stranded DNA structure formed by stacked planes of four base-paired guanines (G-quartets). Guanine-rich DNA sequences appear at many sites of genomic DNA and can potentially form G-quadruplexes, such as those occurring at the 3'-terminus of human telomeric DNA. The formation and stabilization of a G-quadruplex by small ligands at the telomeric region can inhibit telomerase activity. In turn, the ligands can be used to down-regulate oncogene expression, making the G-quadruplex an attractive target for anticancer therapy. Many G-quadruplex ligands have been proposed with a planar core to facilitate pi-pi stacking and electrostatic interactions with the G-quartets. However, many drug candidates are unable to discriminate a G-quadruplex from a double-helix DNA structure. In this context, it is important to investigate the site topology of the interaction of a G-quadruplex with a ligand. In this work, we determine the free energy surface of a G-quadruplex-ligand complex to study the binding modes of the G-quadruplex (TG4T) with the daunomycin (DM) drug. The complex TG4T-DM is studied using classical molecular dynamics in combination with metadynamics simulations. The metadynamics simulations permit an enhanced sampling of the conformational space at a modest computational cost and yield free energy surfaces in terms of the collective variables (CVs). The free energy surfaces of TG4T-DM exhibit additional local minima, indicating the presence of binding modes of daunomycin that are not observed in short MD simulations without the metadynamics approach. The results are compared with similar calculations on a different structure (the mutated mu-G4T-DM, in which the 5' thymines of TG4T-DM have been deleted). The results should help in designing new G-quadruplex drugs and in understanding the differences in the recognition topology sites of duplex and quadruplex DNA structures in their interactions with ligands.
Keywords: g-quadruplex, cancer, molecular dynamics, metadynamics
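The principle behind the metadynamics sampling used above can be shown on a toy system: Gaussian bias hills are deposited along a collective variable s at regular intervals, discouraging revisits and filling free-energy wells, so that at convergence the negative of the accumulated bias estimates F(s). The sketch below runs overdamped Langevin dynamics on a 1-D double well; the TG4T-DM study itself, of course, uses full MD with molecular collective variables.

```python
import numpy as np

rng = np.random.default_rng(1)
U  = lambda s: (s**2 - 1.0)**2          # toy double-well along the CV s
dU = lambda s: 4.0 * s * (s**2 - 1.0)

hills, w, sigma = [], 0.05, 0.15        # deposited hill centres, height, width

def bias_force(s):
    """-dV_bias/ds for the sum of deposited Gaussian hills."""
    if not hills:
        return 0.0
    c = np.array(hills)
    g = np.exp(-(s - c)**2 / (2.0 * sigma**2))
    return float(np.sum(w * g * (s - c) / sigma**2))

s, dt, kT = -1.0, 1e-3, 0.2
for step in range(50_000):              # overdamped Langevin dynamics
    s += (-dU(s) + bias_force(s)) * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    if step % 250 == 0:
        hills.append(s)                 # deposit a new bias hill

# At convergence, F(s) ~ -V_bias(s) up to an additive constant
grid = np.linspace(-2.0, 2.0, 200)
V_bias = sum(w * np.exp(-(grid - c)**2 / (2.0 * sigma**2)) for c in hills)
F_estimate = -V_bias
```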
120 Effects of Computer Aided Instructional Package on Performance and Retention of Genetic Concepts amongst Secondary School Students in Niger State, Nigeria
Authors: Muhammad R. Bello, Mamman A. Wasagu, Yahya M. Kamar
Abstract:
The study investigated the effects of a computer-aided instructional package (CAIP) on the performance and retention of genetic concepts among secondary school students in Niger State. A quasi-experimental research design, i.e., pre-test post-test experimental and control groups, was adopted for the study. The population of the study was all senior secondary school three (SS3) students offering biology. A sample of 223 students was randomly drawn from six purposively selected secondary schools. The researchers' computer-aided instructional package (CAIP) on genetic concepts was used as the treatment instrument for the experimental group, while the control group was exposed to the conventional lecture method (CLM). The instrument for data collection was a Genetic Performance Test (GEPET), which had 50 multiple-choice questions validated by science educators. A reliability coefficient of 0.92 was obtained for GEPET using the Pearson Product Moment Correlation (PPMC). The data collected were analyzed using the IBM SPSS Version 20 package for computation of means, standard deviations, t-tests, and analysis of covariance (ANCOVA). The ANCOVA analysis (Fcal(220) = 27.147, P < 0.05) shows that students who received instruction with CAIP outperformed the students who received instruction with CLM and also had higher retention. The findings also revealed no significant difference in performance and retention between male and female students (tcal(103) = -1.429, P > 0.05). It was recommended, among others, that teachers should use the computer-aided instructional package in teaching genetic concepts in order to improve students' performance and retention in biology.
Keywords: computer aided instructional package, performance, retention, genetic concepts, senior secondary school students
119 Evaluation of Chitin Filled Epoxy Coating for Corrosion Protection of Q235 Steel in Saline Environment
Authors: Innocent O. Arukalam, Emeka E. Oguzie
Abstract:
Interest in the development of eco-friendly anti-corrosion coatings using bio-based renewable materials has been gaining momentum recently. To this effect, chitin biopolymer, which is non-toxic, biodegradable, and inherently possesses anti-microbial properties, was successfully synthesized from snail shells and used as a filler in the preparation of an epoxy coating. The chitin particles were characterized with a contact angle goniometer, scanning electron microscope (SEM), Fourier transform infrared (FTIR) spectrophotometer, and X-ray diffractometer (XRD). The performance of the coatings was evaluated by immersion and electrochemical impedance spectroscopy (EIS) tests. Electronic structure properties of the coating ingredients and the molecular-level interaction of the corrodent with the coated Q235 steel were appraised by quantum chemical computation (QCC) and molecular dynamics (MD) simulation techniques, respectively. The water contact angle (WCA) of the chitin particles was found to be 129.3°, while that of chitin particles modified with amino trimethoxy silane (ATMS) was 149.6°, suggesting the latter is highly hydrophobic. Immersion and EIS analyses revealed that the epoxy coating containing silane-modified chitin exhibited the lowest water absorption and the highest barrier and anti-corrosion performance. The QCC showed that the quantum parameters for the coating containing silane-modified chitin are optimum and therefore correspond to high corrosion protection. The high negative value of the adsorption energy (Eads) for the coating containing silane-modified chitin indicates that the coating molecules interacted and adsorbed strongly on the steel surface. The observed results show that the silane-modified epoxy-chitin coating would perform satisfactorily for the surface protection of metal structures in a saline environment.
Keywords: chitin, EIS, epoxy coating, hydrophobic, molecular dynamics simulation, quantum chemical computation
118 Analysis of a IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations of the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and greatly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, the cardiology field is one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is the detection of R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize its outcomes. The performance of the model for the detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
117 An Overview of Domain Models of Urban Quantitative Analysis
Authors: Mohan Li
Abstract:
Nowadays, intelligent research technology is more important than traditional research methods in urban research work, and this proportion will greatly increase in the next few decades. Frequently, such analytical work cannot be carried out without some software engineering knowledge, and domain models of urban research become necessary when applying software engineering knowledge to urban work. In many urban planning practice projects, making rational models, feeding in reliable data, and providing enough computation all provide indispensable assistance in producing good urban planning; throughout the work process, domain models can optimize workflow design. At present, human beings have entered the era of big data. The amount of digital data generated by cities every day will increase at an exponential rate, and new data forms are constantly emerging. How to select a suitable data set from the massive amount of data, and how to manage and process it, has become an ability that more and more planners and urban researchers need to possess. This paper summarizes, and makes predictions about, the emergence of technologies and technological iterations that may affect urban research in the future, helping researchers discover urban problems and implement targeted sustainable urban strategies. These are summarized into seven major domain models: the urban and rural regional domain model, urban ecological domain model, urban industry domain model, development dynamic domain model, urban social and cultural domain model, urban traffic domain model, and urban space domain model. These seven domain models can be used to guide the construction of systematic urban research topics and help researchers organize a series of intelligent analytical tools, such as Python, R, GIS, etc. They make full use of quantitative spatial analysis, machine learning, and other technologies to achieve higher efficiency and accuracy in urban research, assisting people in making reasonable decisions.
Keywords: big data, domain model, urban planning, urban quantitative analysis, machine learning, workflow design
116 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface
Authors: Suman Dutta, Manish Agrawal, C. S. Jog
Abstract:
Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid, and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity, and traction is enforced using the mortar method. In the mortar method, the constraints are enforced in a weak sense using the Lagrange multiplier method; in the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method
115 Numerical Simulation of Aeroelastic Influence Exerted by Kinematic and Geometrical Parameters on Oscillations' Frequencies and Phase Shift Angles in a Simulated Compressor of Gas Transmittal Unit
Authors: Liliia N. Butymova, Vladimir Y. Modorsky, Nikolai A. Shevelev
Abstract:
Prediction of vibration processes in gas transmittal units (GTUs) is an urgent problem. Despite numerous scientific publications on the problem of vibrations in general, there are not enough works concerning FSI modeling of the interaction processes between several deformable blades in a gas-dynamic flow. Since it is very difficult to solve the problem in full scope, with all factors considered, a unidirectional dynamically coupled 1FSI model is suggested for use at the first stage; from symmetry considerations, it includes two blades, which may be considered the first stage of solving the more general bidirectional problem. The multi-processor code ANSYS CFX was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-capacity computer complex. At the first stage of the study, the blades were assumed to oscillate with the same frequency, although the oscillation phases could be either equal or different. The non-stationary distribution of gas-dynamic forces over the blade surfaces is calculated during the simulation experiment. Oscillations in the "gas - structure" dynamic system are assumed to increase if the resultant of these gas-dynamic forces is in phase with the blade oscillation (phase shift φ=0). Provided these oscillations occur with a phase shift, oscillations might increase or decrease, depending on the phase shift value. The most important results are as follows: the phase shift angle between inter-blade oscillation and the gas-dynamic force depends on the flow velocity, the specific inter-blade gap, and the shaft rotation speed; and a phase shift in the oscillation of adjacent blades does not always correspond to the phase shift of the gas-dynamic forces affecting the blades. Thus, it was discovered that asynchronous oscillation of blades might cause either attenuation or intensification of oscillation. It was revealed that the clocking effect might depend not only on the mutual circumferential displacement of blade rows and the gap between the blades, but also on the nature of the blades' dynamic deformation.
Keywords: aeroelasticity, ANSYS CFX, oscillation, phase shift, clocking effect, vibrations
114 A Neural Network for the Prediction of Contraction after Burn Injuries
Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen
Abstract:
A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, like a reduction in the range-of-motion of joints. If this is the case in a healing burn wound, then this is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part in dermal wound healing. These equations are solved with the numerical finite element method (FEM). In this method, many calculations are required on each of the chosen elements. In general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, it not only takes longer before a prediction is available, the computation also becomes more expensive. It is therefore important to investigate alternative possibilities to generate the same results, based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and there is freedom in the choice of input and output. Because a neural network requires extensive training and a data set, it is ideal that the one-dimensional FEM code generates output quickly. These feed-forward-type neural network results are very promising. Not only can the network give faster predictions, but it also has a performance of over 99%. It reports on the relative surface area of the wound/scar, the total strain energy density, and the evolutions of the densities of the chemicals and mechanics. It is, therefore, interesting to investigate the applicability of a neural network for the two- and three-dimensional morphoelastic model for contraction after burn injuries.
Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound
113 Application of a Lighting Design Method Using Mean Room Surface Exitance
Authors: Antonello Durante, James Duff, Kevin Kelly
Abstract:
The visual needs of people in modern work-based buildings are changing. The self-illuminated screens of computers, televisions, tablets, and smartphones have changed the relationship between people and the lit environment. In the past, lighting design practice was primarily based on providing uniform horizontal illuminance on the working plane, but this has failed to ensure good quality lit environments. Lighting standards today continue to be set based upon a 100-year-old approach that, at its core, considers the task illuminance of the utmost importance, with this task typically being located on a horizontal plane. An alternative method focused on appearance has been proposed, as opposed to the traditional performance-based approach. Mean Room Surface Exitance (MRSE) and Target-Ambient Illuminance Ratio (TAIR) are two new metrics proposed to assess illumination adequacy in interiors. The hypothesis is that these factors will be superior to the existing, horizontal-illuminance-led metrics. For the past six years, research within the Dublin Institute of Technology has examined this, with a view to determining the suitability of this approach for application to general lighting practice. Since the start of this research, a number of key findings have been produced, centered on how occupants react to various levels of MRSE. This paper provides a broad update on how this research has progressed. More specifically, this paper will: i) demonstrate how MRSE can be measured using HDR image technology, ii) illustrate how MRSE can be calculated using scripting and an open source lighting computation engine, iii) describe experimental results that demonstrate how occupants have reacted to various levels of MRSE within experimental office environments.
Keywords: illumination hierarchy (IH), mean room surface exitance (MRSE), perceived adequacy of illumination (PAI), target-ambient illumination ratio (TAIR)
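For reference, MRSE is the area-weighted mean exitance of all room surfaces, with each surface's exitance M = reflectance × illuminance. The surface data below are illustrative, and treating MRSE directly as the ambient term of TAIR is an assumption of this sketch.

```python
# Surfaces: (name, area m^2, reflectance, illuminance lux) -- illustrative values
surfaces = [
    ("ceiling", 20.0, 0.80, 120.0),
    ("floor",   20.0, 0.20, 250.0),
    ("walls",   54.0, 0.50, 180.0),
]

# Exitance of each surface: M = reflectance * illuminance (lm/m^2)
total_flux = sum(area * rho * E for _, area, rho, E in surfaces)  # flux leaving surfaces
total_area = sum(area for _, area, *_ in surfaces)

MRSE = total_flux / total_area
print(f"MRSE = {MRSE:.1f} lm/m^2")

# Target-Ambient Illuminance Ratio: task illuminance relative to the ambient level
E_task = 500.0
TAIR = E_task / MRSE
```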
112 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images
Authors: Amit Kumar Happy
Abstract:
This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based upon multi-scale transform (MST) and a region-based selection rule with consistency verification have been proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion algorithm, we observed several challenges with the popular image fusion methods: although the high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper offer good results with minimal time complexity.
Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform
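The paper implements its fusion in MATLAB; the Python/PyWavelets sketch below shows one standard MST instance of the same scheme: both source images are decomposed, approximation coefficients are averaged, detail coefficients are chosen by maximum absolute value (a simple stand-in for the region-based rule with consistency verification), and the fused image is reconstructed.

```python
import numpy as np
import pywt

def fuse_mst(visible: np.ndarray, infrared: np.ndarray, levels: int = 3):
    """Wavelet-domain fusion: average approximations, max-abs select details."""
    cA = pywt.wavedec2(visible, "db4", level=levels)
    cB = pywt.wavedec2(infrared, "db4", level=levels)
    fused = [(cA[0] + cB[0]) / 2.0]                  # approximation band
    for dA, dB in zip(cA[1:], cB[1:]):               # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(dA, dB)))
    return pywt.waverec2(fused, "db4")

vi = np.random.rand(256, 256)   # stand-ins for registered VI and IR frames
ir = np.random.rand(256, 256)
fused = fuse_mst(vi, ir)
```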
111 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula
Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed
Abstract:
The design of high-speed craft has recently become one of the most active areas of naval architecture. Increased speed makes these vehicles more efficient and useful for military, economic, or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water, and speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method was first presented in 1964 and later extended to non-monohedric hulls and to stepped hulls. This method is well known as a reliable comparator to CFD analysis of hull resistance. A computer program based on Savitsky's method has been developed using MATLAB, and the power of high-speed vessels has been computed in this research. At first, the program reads some principal parameters such as displacement, LCG, speed, deadrise angle, inclination of the thrust line with respect to the keel line, etc., and calculates the resistance of the hull using the empirical planing equations of Savitsky. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation, so we use a digital plotting system to extract data from the nomograms. As a result, the value of the wetted length-beam ratio and the trim angle can be determined directly from the input of the initial variables, which makes the power calculation automatic, without manual plotting of secondary variables such as p/b and other coefficients; the regression equations of those functions are derived using data from the different charts. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with the results of Savitsky, and good agreement has been observed.
Keywords: nomogram, planing hull, principal parameters, regression
110 Computation and Validation of the Stress Distribution around a Circular Hole in a Slab Undergoing Plastic Deformation
Authors: Sherif D. El Wakil, John Rice
Abstract:
The aim of the current work was to employ the finite element method to model a slab, with a small hole across its width, undergoing plastic plane strain deformation. The computational model had, however, to be validated by comparing its results with those obtained experimentally. Since they were in good agreement, the finite element method can therefore be considered a reliable tool that can help gain better understanding of the mechanism of ductile failure in structural members having stress raisers. The finite element software used was ANSYS, and the PLANE183 element was utilized. It is a higher order 2-D, 8-node or 6-node element with quadratic displacement behavior. A bilinear stress-strain relationship was used to define the material properties, with constants similar to those of the material used in the experimental study. The model was run for several tensile loads in order to observe the progression of the plastic deformation region, and the stress concentration factor was determined in each case. The experimental study involved employing the visioplasticity technique, where a circular mesh (each circle was 0.5 mm in diameter, with 0.05 mm line thickness) was initially printed on the side of an aluminum slab having a small hole across its width. Tensile loading was then applied to produce a small increment of plastic deformation. Circles in the plastic region became ellipses, where the directions of the principal strains and stresses coincided with the major and minor axes of the ellipses. Next, we were able to determine the directions of the maximum and minimum shear stresses at the center of each ellipse, and the slip-line field was then constructed. We were then able to determine the stress at any point in the plastic deformation zone, and hence the stress concentration factor. The experimental results were found to be in good agreement with the analytical ones.
Keywords: finite element method to model a slab, slab undergoing plastic deformation, stress distribution around a circular hole, visioplasticity
109 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions
Authors: Valerii Dashuk
Abstract:
The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows checking simultaneously the shift and the probability of that shift (i.e., portfolio risks). Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. This method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that are to be compared (whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of cumulative distribution functions and compared to the critical values. The critical values table was designed from simulations. The approach was compared with other techniques for the univariate case; it differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are further strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most of the previous works. At the moment, the expansion to the 2-dimensional case is done, and it allows testing jointly up to 5 parameters. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function
108 Comparative Study of Skeletonization and Radial Distance Methods for Automated Finger Enumeration
Authors: Mohammad Hossain Mohammadi, Saif Al Ameri, Sana Ziaei, Jinane Mounsef
Abstract:
Automated enumeration of the number of hand fingers is widely used in several motion gaming and distance control applications and is discussed in several published papers as a starting block for hand recognition systems. The automated finger enumeration technique should not only be accurate, but must also respond quickly to a moving-picture input: high-rate video in motion games or distance control will otherwise limit the program's overall speed, since image processing software such as Matlab needs to produce results at high computation speeds. Since automated finger enumeration with minimum error and processing time is desired, a comparative study between two finger enumeration techniques is presented and analyzed in this paper. In the pre-processing stage, various image processing functions were applied to a real-time video input to obtain the final, cleaned, auto-cropped image of the hand to be used by the two techniques. The first technique uses the well-known morphological tool of skeletonization, counting the skeleton's endpoints as fingers. The second technique uses a radial distance method, which obtains a one-dimensional hand representation, to enumerate the fingers. For both methods, the different steps of the algorithms are explained. A comparative study then analyzes the accuracy and speed of both techniques. Through experimental testing in different background conditions, it was observed that the radial distance method was more accurate and more responsive to a real-time video input than the skeletonization method. All test results were generated in Matlab and were based on displaying a human hand in three different orientations on top of a plain color background. Finally, the limitations surrounding the enumeration techniques are presented.
Keywords: comparative study, hand recognition, fingertip detection, skeletonization, radial distance, Matlab
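A minimal sketch of the first technique using scikit-image (the paper itself works in Matlab): the binary hand mask is skeletonized, and endpoints are skeleton pixels with exactly one 8-connected skeleton neighbour; in practice one endpoint belongs to the wrist/arm and is subtracted. The mask preparation from the paper's pre-processing stage is assumed already done.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_skeleton_endpoints(hand_mask: np.ndarray) -> int:
    """Endpoints = skeleton pixels with exactly one 8-connected neighbour."""
    skel = skeletonize(hand_mask.astype(bool))
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbours = convolve(skel.astype(int), kernel, mode="constant")
    return int(np.sum(skel & (neighbours == 1)))

# hand_mask: cleaned, auto-cropped binary image from the pre-processing stage
# (here a toy placeholder stroke with two endpoints)
hand_mask = np.zeros((200, 200), dtype=bool)
hand_mask[50:150, 95:105] = True
endpoints = count_skeleton_endpoints(hand_mask)  # subtract the wrist endpoint in practice
```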
Procedia PDF Downloads 382107 Arabic Lexicon Learning to Analyze Sentiment in Microblogs
Authors: Mahmoud B. Rokaya
Abstract:
The study of opinion mining and sentiment analysis covers the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media (social networks, reviews, forum discussions, microblogs, and Twitter) has led to parallel growth in the field of sentiment analysis, which seeks effective tools for capturing trends in public opinion. Two approaches dominate the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon of sentiment words and phrases with assigned numeric scores; these scores indicate whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientation. Because building lexicons manually is hard, adaptive automated methods for generating them are needed. The proposed method generates dynamic lexicons from the corpus and then classifies text using these lexicons, combining several approaches to derive the lexicons. Rather than making a binary positive/negative decision, the method classifies tweets into five classes. The sentiment classification problem is formulated as an optimization problem whose goal is to find optimal sentiment lexicons; the solution is obtained by mathematical programming, with a genetic algorithm written to find the optimal lexicon. A meta-level feature is then extracted based on the optimal lexicon. Experiments were conducted on several datasets, and the results, in terms of accuracy, recall, and F-measure, outperformed the state-of-the-art methods from the literature on some of the datasets. The sentiment lexicons proposed by the algorithm also offer a better understanding of the Arabic language, the culture of Arab Twitter users, and the sentiment orientation of words in different contexts.Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation
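To illustrate casting lexicon learning as an optimization problem, the Python sketch below evolves word scores with a simple genetic algorithm and buckets a tweet's total score into five classes. The toy vocabulary, class boundaries, and GA operators are assumptions, not the paper's actual configuration.

    # Evolving a five-class sentiment lexicon; fitness = training accuracy.
    import random

    VOCAB = ["رائع", "جيد", "عادي", "سيئ", "فظيع"]  # toy vocabulary
    BOUNDS = [-2.0, -0.5, 0.5, 2.0]                  # assumed class cut-offs

    def classify(tokens, lexicon):
        score = sum(lexicon.get(w, 0.0) for w in tokens)
        return sum(score > b for b in BOUNDS)        # class index 0..4

    def fitness(lexicon, data):
        return sum(classify(t, lexicon) == y for t, y in data) / len(data)

    def evolve(data, pop_size=40, gens=100, mut=0.2):
        pop = [{w: random.uniform(-3, 3) for w in VOCAB}
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda lx: -fitness(lx, data))
            elite = pop[: pop_size // 2]             # selection
            children = []
            for _ in range(pop_size - len(elite)):
                a, b = random.sample(elite, 2)
                child = {w: random.choice((a[w], b[w])) for w in VOCAB}
                for w in VOCAB:                      # Gaussian mutation
                    if random.random() < mut:
                        child[w] += random.gauss(0, 0.5)
                children.append(child)
            pop = elite + children
        return max(pop, key=lambda lx: fitness(lx, data))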
Procedia PDF Downloads 188106 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil, and it is common to conduct time-lapse surveys of several types at a given site to improve the subsurface image. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications generally rest on a one-dimensional assumption and target desktop personal computers; they are therefore usually incapable of imaging three-dimensional (3D) subsurface processes and variables at reasonable spatial scales, and the amount of data that can be inverted simultaneously is often very small owing to the limits of personal computers. High-performance, integrative software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from multiple geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
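Conceptually, such a joint time-lapse inversion minimizes a weighted sum of per-method data misfits plus a temporal regularization term. The Python fragment below sketches that objective under stated assumptions; E4D-MP's actual formulation, meshing, and parallel solvers are not reproduced here.

    # A schematic joint, time-lapse least-squares objective (illustrative).
    import numpy as np

    def joint_objective(m, data, forward_ops, weights, beta, m_prev):
        """m: model vector at one time step; forward_ops/data/weights are
        per-method (e.g., ERT, SIP, gravity); beta weights the time-lapse
        smoothing that ties each step to the previous image."""
        misfit = sum(w * np.sum((F(m) - d) ** 2)
                     for F, d, w in zip(forward_ops, data, weights))
        return misfit + beta * np.sum((m - m_prev) ** 2)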
Procedia PDF Downloads 156105 Robust Numerical Solution for Flow Problems
Authors: Gregor Kosec
Abstract:
A simple and robust numerical approach for solving flow problems is presented, in which the physical fields involved are represented through local approximation functions, i.e., the considered field is approximated over a local support domain. The approximation functions are then used to evaluate the partial differential operators. The type of approximation, the size of the support domain, and the type and number of basis functions can all be general. The solution procedure is formulated entirely through local computational operations; besides the numerical method itself, the pressure-velocity coupling is also performed locally while retaining the correct temporal transient. The complete locality of the introduced numerical scheme has several beneficial effects. One of the most attractive is simplicity: the scheme can be understood as a generalized Finite Difference Method, yet it is much more powerful. The presented methodology offers many possibilities for treating challenging cases, e.g., nodal adaptivity to address regions with sharp discontinuities, or p-adaptivity to treat obscure anomalies in the physical field. The trade-off between stability, computational complexity, and accuracy can be regulated by changing the number of support nodes, etc., and all these features can be controlled on the fly during the simulation. The methodology is relatively simple to understand and implement, which makes it a potentially powerful tool for engineering simulations. Besides simplicity and straightforward implementation, there are many opportunities to fully exploit modern computer architectures through different parallel computing strategies. The performance of the method is demonstrated on the lid-driven cavity problem, the backward-facing step problem, and the de Vahl Davis natural convection test, extended also to a low-Prandtl-number fluid and Darcy porous flow. Results are presented in terms of velocity profiles, convergence plots, and stability analyses, and the results of all cases are compared against published data.Keywords: fluid flow, meshless, low Pr problem, natural convection
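The core local operation can be illustrated in a few lines of Python: stencil weights for, e.g., the Laplacian are obtained from a local monomial basis by least squares (a weighted-least-squares/RBF-FD flavor is assumed here; the paper's basis and support choices may differ).

    # Local differentiation weights over a support domain (illustrative).
    import numpy as np

    def laplacian_weights(center, support):
        """Weights w with sum_i w_i * u(x_i) ~ Laplacian(u)(center), built
        from the local 2-D monomial basis 1, x, y, x^2, xy, y^2."""
        d = support - center                       # shift to local coordinates
        P = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                             d[:, 0]**2, d[:, 0]*d[:, 1], d[:, 1]**2])
        L = np.array([0, 0, 0, 2, 0, 2], float)    # Laplacian of each monomial
        w, *_ = np.linalg.lstsq(P.T, L, rcond=None)  # solve P^T w = L
        return w

    # On the classic five-point unit stencil this recovers [-4, 1, 1, 1, 1],
    # which is the sense in which the scheme generalizes finite differences.
    c = np.array([0.0, 0.0])
    nbrs = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], float)
    print(laplacian_weights(c, nbrs))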
Procedia PDF Downloads 233