Search results for: computational aeroacoustics
1441 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely reformulated in the deep learning framework. Deriving visual interpretation from high-dimensional imagery data is generally considered a challenging problem. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize a network with a large number of convolution layers, because a large number of unknowns must be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to computational expense, despite recent developments in effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers of the network. Thus, we propose a CNN model in which convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows the use of a large number of random filters at the cost of one scalar unknown per filter.
The computational cost of the back-propagation procedure does not increase with larger filters, even though additional computational cost is incurred when computing the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared with our models, which simply replace the convolution kernels with the random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application in a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and by NRF-2014R1A2A1A11051941 and NRF-2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
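The core idea above, a bank of fixed random filters with only one trainable scalar weight each, can be sketched in plain NumPy. This is our own simplified single-channel illustration (class and parameter names are ours, not the authors'):

```python
import numpy as np

def conv2d_same(image, kernel):
    # naive 'same' 2D convolution with zero padding
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

class RandomKernelLayer:
    """Bank of frozen random filters of varying sizes; only the scalar
    weight attached to each filter would be trained."""
    def __init__(self, sizes=(3, 5, 7), filters_per_size=4, seed=0):
        rng = np.random.default_rng(seed)
        self.filters = [rng.standard_normal((s, s)) / s
                        for s in sizes for _ in range(filters_per_size)]
        # one trainable scalar per filter (initialised to 1)
        self.weights = np.ones(len(self.filters))

    def forward(self, image):
        responses = np.stack([conv2d_same(image, f) for f in self.filters])
        # weighted sum of multi-scale responses; only self.weights is learned
        return np.tensordot(self.weights, responses, axes=1)
```

Because the filters are never updated, back-propagation only needs gradients for the scalar weights, regardless of the filter sizes, which mirrors the cost argument made in the abstract.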
Procedia PDF Downloads 290
1440 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud
Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal
Abstract:
Ever since the idea of delivering computing services as a commodity, like other utilities such as electricity and telephony, was floated, the scientific fraternity has diverted its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business and finance, to name a few. The smart grid is another discipline that stands to benefit greatly from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing and demand response features. However, security issues such as confidentiality, integrity, availability, accountability and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed for the cloud, but hackers and intruders still manage to bypass its security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud.
Considering the success of artificial neural networks in building robust intrusion detection systems, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.
Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid
Procedia PDF Downloads 318
1439 Numerical Solutions of Generalized Burger-Fisher Equation by Modified Variational Iteration Method
Authors: M. O. Olayiwola
Abstract:
Numerical solutions of the generalized Burger-Fisher equation are obtained using a Modified Variational Iteration Method (MVIM) with minimal computational effort. The computed results of this technique have been compared with other results. The present method is seen to be a very reliable alternative to some existing techniques for such nonlinear problems.
Keywords: Burger-Fisher, modified variational iteration method, Lagrange multiplier, Taylor's series, partial differential equation
Procedia PDF Downloads 430
1438 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model, describing the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference, relying on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
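As a toy illustration of the ABC rejection idea described above (our own minimal example on a pure birth process, not the Repressilator model from the paper): draw a rate parameter from the prior, simulate the stochastic model with the Gillespie algorithm, and keep only the draws whose simulated summary statistic lands close to the observed one.

```python
import random

def simulate_birth_process(rate, t_end, seed=None):
    """Gillespie simulation of a pure birth process: X -> X+1 with
    propensity rate*X; returns the population at time t_end."""
    rng = random.Random(seed)
    x, t = 1, 0.0
    while True:
        t += rng.expovariate(rate * x)   # time to next birth event
        if t > t_end:
            return x
        x += 1

def abc_rejection(observed, prior_sample, n_draws, eps, t_end=2.0, seed=1):
    """Likelihood-free inference: accept a parameter draw when the
    simulated summary statistic is within eps of the observed one."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        sim = simulate_birth_process(theta, t_end, seed=rng.random())
        if abs(sim - observed) <= eps:
            accepted.append(theta)
    return accepted
```

The accepted draws approximate the posterior without ever evaluating the likelihood; shrinking `eps` tightens the approximation at the cost of more rejected simulations, which is exactly the efficiency trade-off the abstract discusses.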
Procedia PDF Downloads 202
1437 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was made with the open source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume Of Fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by measuring the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the cone behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, making it possible to better evaluate the efficiency of the cones as foam breakers. This study helps clarify the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
Procedia PDF Downloads 205
1436 Cognitive Dissonance in Robots: A Computational Architecture for Emotional Influence on the Belief System
Authors: Nicolas M. Beleski, Gustavo A. G. Lugo
Abstract:
Robotic agents are taking on increasingly important roles in society. In order to make these robots and agents more autonomous and efficient, their systems have grown considerably complex and convoluted. This growth in complexity has led researchers to investigate ways to explain the AI behavior behind these systems, in search of more trustworthy interactions. A current problem in explainable AI is the inner workings of the logic inference process and how to conduct a sensitivity analysis of the process of valuation and alteration of beliefs. In a social HRI (human-robot interaction) setup, theory of mind is crucial to easing the intentionality gap; to achieve it, we should be able to perform inference over observed human behaviors, such as cases of cognitive dissonance. One specific case inspired by human cognition is the role emotions play in our belief system and the effects caused when observed behavior does not match the expected outcome. In such scenarios emotions can make a person wrongly assume the antecedent P for an observed consequent Q and, as a result, incorrectly assert that P is true. This form of cognitive dissonance, in which an unproven cause is taken as truth, induces changes in the belief base which can directly affect future decisions and actions. If we aim to be inspired by human thought in order to endow these artificial agents with levels of theory of mind, we must find the conditions to replicate these observable cognitive mechanisms. To achieve this, a computational architecture is proposed to model the modulation effect emotions have on the belief system and how it affects the logic inference process, and consequently the decision making of an agent. To validate the model, an experiment based on the prisoner's dilemma is currently under development.
The hypothesis to be tested involves two main points: how emotions, modeled as internal argument-strength modulators, can alter inference outcomes, and how explainable outcomes can be produced under specific forms of cognitive dissonance.
Keywords: cognitive architecture, cognitive dissonance, explainable AI, sensitivity analysis, theory of mind
Procedia PDF Downloads 132
1435 A Multistep Broyden's-Type Method for Solving Systems of Nonlinear Equations
Authors: M. Y. Waziri, M. A. Aliyu
Abstract:
The paper proposes an approach to improve the performance of Broyden's method for solving systems of nonlinear equations. In this work, we consider information from the two preceding iterates rather than a single preceding iterate to update Broyden's matrix, producing a better approximation of the Jacobian matrix at each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden's method.
Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate
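One plausible realization of the two-iterate idea (our own sketch, not necessarily the authors' exact update rule) is to refresh the approximate Jacobian with rank-one secant corrections built from both of the two most recent steps:

```python
import numpy as np

def multistep_broyden(f, x0, tol=1e-12, max_iter=200):
    """Broyden-type iteration in which the approximate Jacobian B is
    corrected with secant pairs from the two most recent steps (a sketch
    of the multistep idea, not the authors' exact scheme)."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))            # initial Jacobian approximation
    fx = f(x)
    history = []                  # up to two recent (step, residual-change) pairs
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        dx = np.linalg.solve(B, -fx)      # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        history = (history + [(dx, fx_new - fx)])[-2:]
        # rank-one secant corrections from the preceding two iterates
        for s, y in history:
            B = B + np.outer(y - B @ s, s) / (s @ s)
        x, fx = x_new, fx_new
    return x
```

After the updates, B satisfies the most recent secant condition exactly and stays close to the previous one, which is the sense in which two preceding iterates inform the Jacobian approximation.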
Procedia PDF Downloads 638
1434 Numerical Evolution Methods of Rational Form for Diffusion Equations
Authors: Said Algarni
Abstract:
The purpose of this study was to investigate selected numerical methods that demonstrate good performance in solving PDEs. We adapted an alternative method involving rational polynomials: the Padé time stepping (PTS) method, which is highly stable for the purposes of the present application and is associated with lower computational costs. Furthermore, PTS was modified for our study, which focused on diffusion equations. Numerical runs were conducted to obtain the optimal local error control threshold.
Keywords: Padé time stepping, finite difference, reaction-diffusion equation, PDEs
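To make the rational-polynomial idea concrete: the (1,1) Padé approximant of the exponential, exp(z) ~ (1 + z/2)/(1 - z/2), applied to the semi-discrete diffusion operator gives the unconditionally stable Crank-Nicolson step. A minimal sketch (our own illustration, with zero Dirichlet ends assumed; the paper's modified scheme and error control are not shown):

```python
import numpy as np

def pade11_diffusion_step(u, D, dx, dt):
    """One step of u_t = D*u_xx via the (1,1) Pade approximant of the
    matrix exponential: solve (I - dt/2*A) u_next = (I + dt/2*A) u,
    where A is the second-difference Laplacian (zero Dirichlet ends)."""
    n = len(u)
    A = np.zeros((n, n))
    for i in range(1, n - 1):          # standard three-point stencil
        A[i, i - 1] = A[i, i + 1] = D / dx**2
        A[i, i] = -2.0 * D / dx**2
    I = np.eye(n)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)
```

Because the rational approximant is A-stable, the step size is not limited by the diffusive stability restriction of explicit schemes, which is the "highly stable, lower cost" property the abstract refers to.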
Procedia PDF Downloads 299
1433 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping
Authors: Masato Saeki
Abstract:
Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in a cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles in each divided area of the damper container is the same, the contact force between the primary system and all particles can be taken as the product of the number of divided areas and the contact force between the primary system and the granular materials per divided area. This makes it possible to considerably reduce the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level
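The spring-dashpot contact model mentioned above can be illustrated with a one-particle toy problem (all parameter values are illustrative, not taken from the paper): a single particle bounces between two fixed cavity walls and loses a fraction of its kinetic energy at each impact through the dashpot.

```python
def contact_force(overlap, approach_rate, k=1e5, c=2.0):
    """Linear spring-dashpot DEM contact: the spring repels in proportion
    to the overlap, the dashpot dissipates energy in proportion to the
    approach speed; no force when there is no overlap."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * approach_rate

def simulate_particle(gap=0.01, r=0.002, v0=1.0, m=1e-3, dt=1e-6, steps=40000):
    """One particle of radius r bouncing between walls at x=0 and x=gap
    (a minimal stand-in for the damper cavity), semi-implicit Euler."""
    x, v = gap / 2.0, v0
    for _ in range(steps):
        f = contact_force(r - x, -v)            # left wall pushes right
        f -= contact_force(x + r - gap, v)      # right wall pushes left
        v += f / m * dt
        x += v * dt
    return x, v
```

With these values the contact oscillation (angular frequency sqrt(k/m) = 1e4 rad/s) is resolved by hundreds of time steps, and each impact dissipates energy, so the particle's speed decays bounce by bounce, which is the momentum-exchange damping mechanism the abstract describes.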
Procedia PDF Downloads 453
1432 Automation of Finite Element Simulations for the Design Space Exploration and Optimization of Type IV Pressure Vessel
Authors: Weili Jiang, Simon Cadavid Lopera, Klaus Drechsler
Abstract:
The fuel cell vehicle has become the most competitive solution for the transportation sector in the hydrogen economy. The type IV pressure vessel is currently the most popular and widely developed technology for on-board storage, based on its high reliability and relatively low cost. Due to the stringent requirements on mechanical performance, the pressure vessel requires a great amount of composite material, a major cost driver for hydrogen tanks. Evidently, the optimization of the composite layup design shows great potential for reducing overall material usage, yet it requires a comprehensive understanding of the underlying mechanisms as well as of the influence of different design parameters on mechanical performance. Given the types of materials and manufacturing processes by which type IV pressure vessels are made, their design and optimization are a nuanced subject. The manifold of possible stacking sequences and fiber orientation variations has an outstanding effect on vessel strength, due to the anisotropic properties of carbon fiber composites, which makes the design space high dimensional. Each variation of the design parameters requires computational resources. Using finite element analysis to evaluate different designs is the most common method; however, the modeling, setup and simulation process can be very time consuming and result in high computational cost. For this reason, it is necessary to build a reliable automation scheme to set up and analyze the diverse composite layups. In this research, the simulation process for different tank designs with various parameters is conducted and automated in the commercial finite element analysis framework Abaqus. Worth mentioning, the modeling of the composite overwrap is automatically generated using an Abaqus-Python scripting interface.
The prediction of the winding angle of each layer and the corresponding thickness variation in the dome region is the most crucial step of the modeling; it is calculated and implemented using analytical methods. Subsequently, these different composite layups are simulated as axisymmetric models to reduce the computational complexity and the calculation time. Finally, the results are evaluated and compared with regard to the ultimate tank strength. By automatically modeling, evaluating and comparing various composite layups, this system is applicable to the optimization of tank structures. As mentioned above, the mechanical properties of the pressure vessel are highly dependent on the composite layup, which requires a large number of simulations. Consequently, automating the simulation process provides a rapid way to compare the various designs and identify the optimum one. Moreover, this automation process can also be used to create a data bank of layups and corresponding mechanical properties, with few preliminary configuration steps, for further case analysis; machine learning could then be used to obtain the optimum directly from the data pool without the simulation process.
Keywords: type IV pressure vessels, carbon composites, finite element analysis, automation of simulation process
Procedia PDF Downloads 135
1431 RANS Simulation of the LNG Ship Squat in Shallow Water
Authors: Mehdi Nakisa, Adi Maimun, Yasser M. Ahmed, Fatemeh Behrouzi
Abstract:
Squat is the reduction in under-keel clearance between a vessel at rest and underway, due to the increased flow of water past the moving body. The forward motion of the ship induces a relative velocity between the ship and the surrounding water that causes a water level depression in which the ship sinks. Ship squat is one of the crucial factors affecting the navigation of ships in restricted waters. This article investigates the squat of an LNG ship, its effect on the flow streamlines around the ship hull, and the ship behavior and motion, using computational fluid dynamics as implemented in Ansys Fluent.
Keywords: ship squat, CFD, confined, mechanic
Procedia PDF Downloads 620
1430 Computational Analysis of Adaptable Winglets for Improved Morphing Aircraft Performance
Authors: Erdogan Kaygan, Alvin Gatto
Abstract:
An investigation of adaptable winglets for enhancing morphing aircraft performance is described in this paper. The concepts investigated consist of various winglet configurations fundamentally centered on a baseline swept wing. The impetus for the work was to identify and optimize winglets to enhance the aerodynamic efficiency of a morphing aircraft. All computations were performed with Athena Vortex Lattice modelling, with varying degrees of twist and cant angle considered. The results from this work indicate that if adaptable winglets were employed on aircraft, improvements in performance could be achieved.
Keywords: aircraft, drag, twist, winglet
Procedia PDF Downloads 584
1429 3D CFD Modelling of the Airflow and Heat Transfer in Cold Room Filled with Dates
Authors: Zina Ghiloufi, Tahar Khir
Abstract:
A transient three-dimensional computational fluid dynamics (CFD) model is developed to determine the velocity and temperature distributions at different positions in a cold room during the pre-cooling of dates. The turbulence model used is k-ω Shear Stress Transport (SST) with the standard wall function; the working fluid is air. The numerical results obtained show that the cooling rate is not uniform inside the room; the product in the middle of the room has a slower cooling rate. This cooling heterogeneity has a large effect on the energy consumption during cold storage.
Keywords: CFD, cold room, cooling rate, dates, numerical simulation, k-ω (SST)
Procedia PDF Downloads 235
1428 Computational Study of Composite Films
Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova
Abstract:
Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical and other properties. Probably the most interesting are the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, the so-called filling factor. For small filling factors the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here. Characterization of the prepared composite structures by image analysis of their sections or projections then follows. However, an analysis of the various morphological methods themselves must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films.
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of the conducting paths in them, in dependence on the technology of composite film preparation.
Keywords: composite films, computer modelling, image analysis, nanocomposite films
Procedia PDF Downloads 393
1427 Unveiling the Reaction Mechanism of N-Nitroso Dimethyl Amine Formation from Substituted Hydrazine Derivatives During Ozonation: A Computational Study
Authors: Rehin Sulay, Anandhu Krishna, Jintumol Mathew, Vibin Ipe Thomas
Abstract:
N-Nitrosodimethylamine (NDMA), the simplest member of the N-nitrosamine family, is a carcinogenic and mutagenic agent that has gained considerable research interest owing to its toxic nature. Ozonation of industrially important hydrazines such as unsymmetrical dimethylhydrazine (UDMH) or monomethylhydrazine (MMH) has been associated with NDMA formation and accumulation in the environment. UDMH/MMH ozonation also leads to several other transformation products, such as acetaldehyde dimethylhydrazone (ADMH), tetramethyltetrazene (TMT), diazomethane and methyldiazene, which can be either precursors of or competitors to NDMA formation. In this work, we explored the formation mechanism of ADMH and TMT from UDMH ozonation and their further oxidation to NDMA, using second-order Møller-Plesset perturbation theory with the 6-311G(d) basis set. We have also investigated how MMH selectively forms methyldiazene and diazomethane under normal conditions and NDMA in the presence of excess ozone. Our calculations indicate that the reactions proceed via an initial H abstraction from the hydrazine -NH2 group, followed by oxidation of the generated N-radical species. The formation of ADMH from the UDMH-ozone reaction involves an acetaldehyde intermediate, which then reacts with a second UDMH molecule to generate ADMH. The preferential attack of the ozone molecule on the N=C bond of ADMH generates a DMAN intermediate, which subsequently undergoes oxidation to form NDMA. Unlike the other transformation products, TMT forms via the dimerization of DMAN. Although TMT contains N=N bonds, which are preferred attack sites for ozone, experimental studies show low yields of NDMA, consistent with the high activation barrier required for the process (42 kcal/mol). Overall, our calculated results agree well with the experimental observations and rate constants.
Computational calculations bring insights into the electronic nature and kinetics of the elementary reactions of this pathway, enabled by computed energies of structures that are not accessible experimentally.
Keywords: reaction mechanism, ozonation, substituted hydrazine, transition state
Procedia PDF Downloads 82
1426 Analysis of the Secondary Stationary Flow Around an Oscillating Circular Cylinder
Authors: Artem Nuriev, Olga Zaitseva
Abstract:
This paper is devoted to the study of a viscous incompressible flow around a circular cylinder performing harmonic oscillations, especially the steady streaming phenomenon. The research methodology is based on the asymptotic expansion method combined with computational bifurcation analysis. The present studies identify several regimes of the secondary streaming with different flow structures. The results of the research are in good agreement with experimental and numerical simulation data.
Keywords: oscillating cylinder, secondary streaming, flow regimes, asymptotic and bifurcation analysis
Procedia PDF Downloads 436
1425 Study of the Design and Simulation Work for an Artificial Heart
Authors: Mohammed Eltayeb Salih Elamin
Abstract:
This study discusses the concept of the artificial heart using engineering concepts from fluid mechanics and the characteristics of non-Newtonian fluids, with the purpose of serving heart patients and improving aspects of their lives. According to World Health Organization (WHO) statistics, diseases of the heart and blood vessels are the leading cause of death in the world: 30% of deaths worldwide are due to heart disease, so heart failure can simply be considered the number one cause of death in the entire world. Since heart transplantation has become very difficult and is not always available, the idea of the artificial heart has become essential, and it is important to participate in developing this idea by finding the weak points in earlier designs and improving on them for the good of humanity. In this study a pump was designed to pump blood through the human body, taking into account all the factors that would allow it to replace the human heart, so that it works with the same characteristics and efficiency as the human heart. The pump was designed on the principle of the diaphragm pump. Three models of blood were derived from the real characteristics of blood, and all of these models were simulated in order to study the effect of the pumping work on the fluid. After that, the properties of this pump were studied using Ansys 15 software to simulate the blood flow inside the pump and the stresses it will undergo. The 3D geometry modeling was done using SOLIDWORKS, and the geometries were then imported into the Ansys Design Modeler, which is used during the pre-processing procedure. The solver used throughout the study is Ansys FLUENT, a tool used to analyze fluid flow problems; the well-known general term for this branch of science is Computational Fluid Dynamics (CFD).
Basically, Design Modeler is used during the pre-processing procedure, which is a crucial step before solving the fluid flow problem. Some of the key operations are geometry creation, which specifies the domain of the fluid flow problem; mesh generation, which means discretization of the domain so that the governing equations can be solved at each cell; and, later, specification of the boundary zones in order to apply boundary conditions to the problem. Finally, the pre-processed work is saved in the Ansys Workbench for future continuation of the work.
Keywords: artificial heart, computational fluid dynamics, heart chamber, design, pump
Procedia PDF Downloads 459
1424 Problems of Boolean Reasoning Based Biclustering Parallelization
Authors: Marcin Michalak
Abstract:
Biclustering is a method of two-dimensional data analysis. For several years it has been possible to express this task in terms of Boolean reasoning, for processing continuous, discrete, and binary data. The mathematical background of the approach, namely the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria, is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In the paper, the basics of the Boolean reasoning approach to biclustering are presented, and in this context the problems of parallelizing the computation are raised.
Keywords: Boolean reasoning, biclustering, parallelization, prime implicant
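To make "inclusion-maximal bicluster" concrete, here is a brute-force enumeration for a small binary matrix. This is a formal-concept-style closure computation, not the authors' Boolean-reasoning (prime implicant) implementation, and its exponential cost illustrates why parallelization matters:

```python
from itertools import combinations

def maximal_biclusters(M):
    """Enumerate inclusion-maximal all-ones biclusters of a binary matrix.

    Brute force over column subsets: for each subset, take all rows that
    are 1 on it, then close the column set over those rows.  Exponential
    in the number of columns; fine only for small matrices.
    """
    n_rows, n_cols = len(M), len(M[0])
    found = set()
    for size in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            rows = tuple(i for i in range(n_rows)
                         if all(M[i][c] for c in cols))
            if not rows:
                continue
            # Close the column set: every column shared by all selected rows.
            closed = tuple(c for c in range(n_cols)
                           if all(M[i][c] for i in rows))
            found.add((rows, closed))
    return sorted(found)
```

Each (rows, columns) pair returned is an all-ones submatrix that cannot be extended by any row or column, which is exactly the inclusion-maximality criterion mentioned above.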
Procedia PDF Downloads 125
1423 Bee Colony Optimization Applied to the Bin Packing Problem
Authors: Kenza Aida Amara, Bachir Djebbar
Abstract:
We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the lost areas in the bins to be put to use and the problem size to be reduced. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
Keywords: bee colony optimization, bin packing, heuristic algorithm, pretreatment
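For orientation, a generic first-fit heuristic for oriented 2D bin packing can be sketched with shelves. This is an illustrative baseline only, not the authors' adapted heuristic or their pretreatment of lost areas:

```python
def shelf_first_fit(rects, bin_w, bin_h):
    """Pack oriented rectangles (w, h) into identical bins with a
    first-fit shelf heuristic: rectangles (sorted by height) go into the
    first shelf of the first bin where they fit; otherwise a new shelf
    or a new bin is opened.  Returns the number of bins used.
    """
    bins = []  # each bin: {"shelves": [{"used": w, "height": h}, ...], "used_h": h}
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):
        placed = False
        for b in bins:
            # Try the existing shelves of this bin first.
            for shelf in b["shelves"]:
                if shelf["used"] + w <= bin_w and h <= shelf["height"]:
                    shelf["used"] += w
                    placed = True
                    break
            if placed:
                break
            # Otherwise open a new shelf in this bin if height remains.
            if b["used_h"] + h <= bin_h:
                b["shelves"].append({"used": w, "height": h})
                b["used_h"] += h
                placed = True
                break
        if not placed:
            bins.append({"shelves": [{"used": w, "height": h}], "used_h": h})
    return len(bins)
```

The unused strips above short rectangles on each shelf are exactly the "lost areas" a pretreatment would try to valorize.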
Procedia PDF Downloads 634
1422 Numerical Analysis of the Computational Fluid Dynamics of Co-Digestion in a Large-Scale Continuous Stirred Tank Reactor
Authors: Sylvana A. Vega, Cesar E. Huilinir, Carlos J. Gonzalez
Abstract:
Co-digestion in anaerobic biodigesters is a technology that improves hydrolysis and increases methane generation. In the present study, the three-dimensional computational fluid dynamics (CFD) of agitation in a full-scale Continuous Stirred Tank Reactor (CSTR) biodigester during the co-digestion process is analyzed numerically using Ansys Fluent. For this, a rheological study of the substrate is carried out, establishing stirrer rotation speeds according to microbial activity and energy ranges. The substrate is organic waste from industrial sources: sanitary water, butcher, fishmonger, and dairy waste. The rheological behavior curves show that the substrate is a non-Newtonian fluid of the pseudoplastic type, with a solids content of 12%. The simulation takes these rheological results into account and models the full-scale CSTR biodigester, coupling the continuity equation, the three-dimensional Navier-Stokes equations, the power-law model for non-Newtonian fluids, and three turbulence models: k-ε RNG, k-ε Realizable, and RSM (Reynolds Stress Model), for a 45° pitched-blade impeller. The simulation covers three minutes, since the aim is to study intermittent mixing and the associated savings in energy consumption. The results show that the absolute errors of the power number for the k-ε RNG, k-ε Realizable, and RSM models were 7.62%, 1.85%, and 5.05%, respectively, relative to the power numbers obtained from the analytical-experimental equation of Nagata. The generalized Reynolds number indicates that the fluid dynamics lie in a transition-to-turbulent flow regime. The Froude number indicates that there is no need to implement baffles in the biodigester design, and the power number shows a steady trend close to 1.5.
It is observed that the design velocities within the biodigester are approximately 0.1 m/s, which are suitable for the microbial community, allowing it to coexist and feed on the substrate in co-digestion. It is concluded that the k-ε Realizable model predicts the fluid dynamics inside the reactor most accurately. The flow paths obtained are consistent with the referenced literature: the 45° pitched-blade turbine (PBT) impeller is the right type of agitator to keep particles in suspension and, in turn, increase the dispersion of gas in the liquid phase. Under continuous (24/7) agitation with a plant factor of 80%, an energy consumption of 51,840 kWh/year is estimated; intermittent agitation of 3 min every 15 min under the same design conditions reduces energy costs by almost 80%. The model is thus a feasible way to predict the energy expenditure of an anaerobic CSTR biodigester. It is recommended to use high mixing intensities at the beginning and end of the joint acetogenesis/methanogenesis phase: the high intensity at the beginning activates the bacteria, and a further increase at the end of the hydraulic retention time favors the final dispersion of biogas that may be trapped at the bottom of the biodigester.
Keywords: anaerobic co-digestion, computational fluid dynamics, CFD, net power, organic waste
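The generalized Reynolds number and the intermittent-mixing savings quoted above can be sketched numerically. The Metzner-Otto correlation used here is a standard way to define an impeller Reynolds number for power-law fluids, not necessarily the exact formulation of the study, and all rheological parameter values below are assumptions:

```python
def metzner_otto_reynolds(rho, N, D, K, n, ks=11.0):
    """Generalized impeller Reynolds number for a power-law fluid
    (Metzner-Otto): the apparent viscosity K*(ks*N)^(n-1) is evaluated
    at an effective shear rate ks*N.  rho in kg/m^3, N in rev/s, D in m,
    K in Pa.s^n; all example values are assumptions.
    """
    mu_app = K * (ks * N) ** (n - 1.0)
    return rho * N * D**2 / mu_app

# Energy comparison quoted in the abstract: continuous 24/7 stirring at an
# 80% plant factor gives 51,840 kWh/year; stirring 3 min in every 15 min
# (20% duty) draws the same power for one fifth of the time.
annual_continuous = 51_840.0        # kWh/year, from the abstract
duty = 3.0 / 15.0                   # intermittent duty cycle
annual_intermittent = annual_continuous * duty
saving = 1.0 - annual_intermittent / annual_continuous  # fraction saved
```

The 3/15 duty cycle directly reproduces the "almost 80%" cost reduction stated in the abstract.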
Procedia PDF Downloads 115
1421 Grammar as a Logic of Labeling: A Computer Model
Authors: Jacques Lamarche, Juhani Dickinson
Abstract:
This paper introduces a computational model of a Grammar as Logic of Labeling (GLL), in which the lexical primitives of morphosyntax are phonological matrices, the forms of words, understood as labels that apply to realities (or targets) assumed to be outside of grammar altogether. The hypothesis is that even though a lexical label relates to its target arbitrarily, this label within a complex (constituent) label is part of a labeling pattern which, depending on its value (i.e., N, V, Adj, etc.), imposes language-specific restrictions on what it targets outside of grammar (in the world/semantics or in cognitive knowledge). Lexical forms categorized as nouns, verbs, adjectives, etc., are effectively targets of labeling patterns in use. The paper illustrates GLL through a computer model of basic patterns in English NPs. A constituent label is a binary object that encodes: i) alignment of input forms, so that labels occurring at different points in time are understood as applying at once; ii) endocentric structuring: every grammatical constituent has a head label that determines the target of the constituent, and a limiter label (the non-head) that restricts this target. The N and A values are restricted to the limiter label, the two differing in terms of alignment with a head. Consider the head-initial DP 'the dog': the label 'dog' gets an N value because it is a limiter that is evenly aligned with the head 'the', restricting the application of the DP. Adapting a traditional analysis of 'the' to GLL (apply the label to something familiar), the DP targets and identifies one reality familiar to the participants by applying to it the label 'dog' (singular). Consider next the DP 'the large dog': 'large dog' is nominal by even alignment with 'the', as before, and since 'dog' is the head of the head-final 'large dog', it is also nominal.
The label 'large', however, is adjectival by narrow alignment with the head 'dog': it does not target the head itself but a property of what 'dog' applies to (a property, or value of an attribute). In other words, the internal composition of constituents determines whether a form targets a property or a reality: 'large' and 'dog' happen to be valid targets to realize this constituent. In the presentation, the computer model of the analysis derives the 8 possible sequences of grammatical values with three labels after the determiner (the x y z): 1- D [ N [ N N ]]; 2- D [ A [ N N ] ]; 3- D [ N [ A N ] ]; 4- D [ A [ A N ] ]; 5- D [ [ N N ] N ]; 6- D [ [ A N ] N ]; 7- D [ [ N A ] N ]; 8- D [ [ Adv A ] N ]. This approach suggests that a computer model of these grammatical patterns could be used to construct ontologies/knowledge from speakers' judgments about the validity of lexical meaning in grammatical patterns.
Keywords: syntactic theory, computational linguistics, logic and grammar, semantics, knowledge and grammar
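The eight value sequences above arise from the two possible binary bracketings of three labels combined with the alignment-determined N/A/Adv values. A minimal sketch of the combinatorial backbone (the binary endocentric bracketings alone, not the GLL labeling logic itself) might look like:

```python
def bracketings(words):
    """All binary constituent structures over a word sequence, as nested
    tuples.  For n words the count is the (n-1)-th Catalan number:
    2 structures for three words, 5 for four.  Illustrative only; the
    head/limiter value assignment of GLL is not modeled here.
    """
    words = tuple(words)
    if len(words) == 1:
        return [words[0]]
    out = []
    # Split the sequence at every point and combine sub-bracketings.
    for split in range(1, len(words)):
        for left in bracketings(words[:split]):
            for right in bracketings(words[split:]):
                out.append((left, right))
    return out
```

For 'the x y z', the two bracketings of x y z, each admitting several N/A value assignments, are what yield the eight sequences listed above.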
Procedia PDF Downloads 38
1420 Ischemic Stroke Detection in Computed Tomography Examinations
Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina
Abstract:
Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%). Early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used, due to its wide availability and rapid diagnosis. Detection depends on the size and severity of the lesions and on the time elapsed between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed on two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with a window width between 80 and 100 Hounsfield units. We used different image processing techniques such as morphological filters, the discrete wavelet transform, and Fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region. These subjective results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters do improve the visibility of ischemic areas for subjective evaluation. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviations greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tend to be smaller than those obtained by the algorithm.
These results show the importance of computer-aided diagnosis software to assist neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
Keywords: ischemic stroke, image processing, CT scans, Fuzzy C-means
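The Fuzzy C-means step mentioned above can be sketched in one dimension (e.g. on CT intensity values). This is the generic algorithm, soft memberships and centres updated alternately, not the paper's full segmentation chain:

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=100):
    """Plain 1-D Fuzzy C-means (fuzzifier m > 1, c >= 2 clusters).

    Memberships: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1));
    centres:     v_j  = sum_i u_ij^m x_i / sum_i u_ij^m.
    Deterministic initialisation: centres spread over the data range.
    """
    centres = [min(xs) + (max(xs) - min(xs)) * j / (c - 1) for j in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centres]  # avoid division by 0
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c)) for j in range(c)])
        centres = [sum(u[i][j] ** m * xs[i] for i in range(len(xs))) /
                   sum(u[i][j] ** m for i in range(len(xs)))
                   for j in range(c)]
    return sorted(centres), u
```

On CT slices, the soft memberships (rather than hard labels) are what make FCM attractive for delineating diffuse ischemic regions.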
Procedia PDF Downloads 366
1419 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers
Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker
Abstract:
The Advanced Heavy Water Reactor (AHWR) is a vertical pressure tube type, heavy water moderated and boiling light water cooled natural circulation based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18, and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. Single-phase flow conditions exist in the reactor rod bundle during startup and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of coolant among these subchannels. Thus, it is vital to evaluate turbulent mixing between the subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, computational fluid dynamics (CFD) methodology can be useful for investigating thermal-hydraulic phenomena in the nuclear fuel assembly. The present report covers the results of simulations of pressure drop, velocity variation, and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations and ANSYS Workbench meshing. The simulations have been carried out with water for a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm2 in the bare region, with a hydraulic diameter of 8.105 mm. A benchmark k-ε model has been used as the turbulence model, and symmetry conditions are set as boundary conditions. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The rod size and pitch in the test are the same as those of the actual rod bundle in the prototype.
Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal–hydraulic
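The quoted flow conditions are roughly self-consistent, which can be checked from the hydraulic diameter. The water property values below (room-temperature density and viscosity) are assumptions, not taken from the paper:

```python
# Cross-check of the quoted flow conditions, assuming water at ~20 C
# (rho = 998 kg/m^3, mu = 1.002e-3 Pa.s; property values are assumptions).
rho, mu = 998.0, 1.002e-3
d_h = 8.105e-3   # hydraulic diameter from the abstract, m
v = 1.2          # upper end of the quoted subchannel velocity range, m/s

# Re = rho * v * D_h / mu comes out near the quoted Re = 9861.83;
# the residual difference depends on the assumed water temperature.
reynolds = rho * v * d_h / mu
```

This kind of back-of-the-envelope check is useful before committing to a full CFD run, since Re fixes the turbulence regime the model must handle.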
Procedia PDF Downloads 320
1418 A Comparative Analysis of Lexical Bundles in Academic Writing: Insights from Persian and Native English Writers in Applied Linguistics
Authors: Elham Shahrjooi Haghighi
Abstract:
This research explores how lexical bundles are used in writing in the field of applied linguistics by comparing professional Persian writers with native English writers, using corpus-based methods and computational techniques to examine the occurrence and characteristics of lexical bundles in academic writing. The literature review emphasizes how important lexical bundles are for organizing discourse and conveying stance in both spoken and written language, across genres, proficiency levels, and fields of study. Previous research has indicated that native English writers tend to employ a wider array and greater diversity of bundles than non-native writers; these bundles are essential elements of academic writing. Methodologically, the research uses a corpus-based approach to analyze a collection of writings such as research papers and advanced theses at the doctoral and master's levels. The examination uncovers differences in bundle use between native Persian and native English writers, with the latter group displaying a greater frequency and variety of bundle types. Furthermore, the research examines how these bundles function, classifying them into categories based on their orientation to the research, to text structure, and to participants, as outlined in Hyland's framework. The results show that Persian authors employ fewer bundles and display distinct structural and functional tendencies in comparison to native English writers. This variation is linked to differing language proficiency levels, disciplinary norms, and cultural factors. The study also highlights the pedagogical implications of these findings, suggesting that targeted instruction on the use of lexical bundles could enhance the academic writing skills of non-native speakers.
In conclusion, this research contributes to the understanding of lexical bundles in academic writing by providing a detailed comparative analysis of their use by Persian and native English writers. The insights from this study have important implications for language education and the development of effective writing strategies for non-native English speakers in academic contexts.
Keywords: lexical bundles, academic writing, comparative analysis, computational techniques
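The standard operationalization of a lexical bundle, a recurrent contiguous n-word sequence that appears across multiple texts, can be sketched as follows. The thresholds here are illustrative; corpus studies typically normalize frequencies per million words:

```python
from collections import Counter

def lexical_bundles(texts, n=4, min_freq=2, min_texts=2):
    """Extract n-word lexical bundles: contiguous word sequences that
    occur at least min_freq times overall and appear in at least
    min_texts different texts (the 'range' criterion).
    Thresholds are illustrative, not those of the study."""
    freq, spread = Counter(), Counter()
    for text in texts:
        words = text.lower().split()
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        freq.update(grams)          # counts every occurrence
        spread.update(set(grams))   # counts texts containing the bundle
    return {g: c for g, c in freq.items()
            if c >= min_freq and spread[g] >= min_texts}
```

Comparing the resulting bundle inventories across two corpora (e.g. Persian-authored vs. native-English papers) is the core of the frequency-and-variety comparison described above.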
Procedia PDF Downloads 21
1417 Comparison of Existing Predictors and Development of a Computational Method for S-Palmitoylation Site Identification in Arabidopsis Thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is an irreversible bond in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminus, via a thioester linkage. Several experimental methods can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There are not many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is motivated by the need for a better prediction tool. To identify the type of machine learning algorithm that predicts this site most accurately for the experimental dataset, several prediction tools were examined, including GPS PALM 6.0, pCysMod, GPS LIPID 1.0, CSS PALM 4.0, and NBA PALM. These analyses were conducted by constructing receiver operating characteristic (ROC) plots and computing the area under the curve (AUC) score. An AI-driven deep learning-based prediction tool has been developed using this analysis and three sequence-based input representations: amino acid composition, binary encoding profile, and autocorrelation features. The model was developed using five layers, two activation functions, and the associated parameters and hyperparameters. The model was built using various combinations of features and, after training and validation, performed best when all the features were present, using the experimental dataset with 8- and 10-fold cross-validation. When testing the model on unseen data, such as the GPS PALM 6.0 plant dataset and the pCysMod mouse dataset, the model also performed well, with an AUC score near 1.
Comparing the 10-fold cross-validation AUC score of the new model with the AUC scores of the established tools on their respective training sets demonstrates that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the AUC score. Both plant food production and immunological treatment targets can be managed by using this method to predict S-palmitoylation sites.
Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score
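The AUC scores used for the comparison above have a simple rank interpretation. A minimal generic implementation (the Mann-Whitney form, equivalent to trapezoidal integration of the ROC curve) is:

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive example is scored above
    a randomly chosen negative one (ties count one half).
    labels: 0/1 class labels; scores: predicted probabilities.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC "near 1", as reported for the unseen plant and mouse datasets, means nearly every true site is ranked above every non-site.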
Procedia PDF Downloads 77
1416 Design and Implementation of a Wireless Synchronized AI System for Security
Authors: Saradha Priya
Abstract:
Developing a virtual human is very important for meeting the challenges in applications where humans find a task difficult or risky to perform. A robot is a machine that can perform a task automatically or with guidance. Robotics is generally a combination of artificial intelligence and physical machines (motors), with computational intelligence providing the programmed instructions. This project proposes a robotic vehicle that has a camera, a PIR sensor, and text-command-based movement. It is specially designed to perform surveillance and a few other tasks in the most efficient way. Serial communication is established between a remote base station, a GUI application, and a PC.
Keywords: Zigbee, camera, PIR sensor, wireless transmission, DC motor
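The text-command movement layer can be sketched as a simple dispatcher from received commands to motor directions. The command names and the wheel-speed convention below are assumptions for illustration, not taken from the paper:

```python
# Hypothetical command set: (left wheel, right wheel), 1 = forward,
# -1 = reverse, 0 = stop.  Names and conventions are assumed.
COMMANDS = {
    "forward":  (1, 1),
    "backward": (-1, -1),
    "left":     (-1, 1),
    "right":    (1, -1),
    "stop":     (0, 0),
}

def dispatch(command):
    """Map a text command (e.g. received over the Zigbee serial link)
    to DC motor directions; unknown commands stop the vehicle."""
    return COMMANDS.get(command.strip().lower(), (0, 0))
```

Defaulting unknown input to a stop is a common fail-safe choice for remotely commanded vehicles.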
Procedia PDF Downloads 349
1415 LTE Modelling of a DC Arc Ignition on Cold Electrodes
Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov
Abstract:
The assumption of plasma in local thermal equilibrium (LTE) is commonly used in electric arc simulations for industrial applications. This assumption allows the arc to be modeled with a set of magneto-hydrodynamic equations that can be solved by a computational fluid dynamics code. However, the LTE description is only valid in the arc column; in the regions close to the electrodes, the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial, since they govern the energy and current transfer between the arc and the electrodes. Therefore, any accurate model of the arc must include a good description of the arc-electrode phenomena. Because of the modelling complexity and computational cost of resolving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced as boundary conditions at the arc-electrode interface. The present work proposes a similar approach to simulate arc ignition in a free-burning arc configuration with an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and near-anode regions are used, based on recent publications. The arc-cathode interaction is modeled with a non-linear surface heating approach that accounts for secondary electron emission, while the interaction between the arc and the anode is taken into account by means of the heating voltage approach. The numerical modelling reveals three main stages during arc ignition. Initially, a glow discharge is observed, in which the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts.
Next, a high-temperature spot forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc transition. During this stage, the LTE plasma also shows an important temperature increase in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. The results after a few seconds are similar to those reported for thermionic cathodes.
Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes
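The surface-heating boundary condition at the cathode can be illustrated with a much-simplified 1-D transient conduction model: a rod heated by a constant flux at one end. This linear toy problem only hints at the non-linear arc-cathode coupling of the paper; the flux and the roughly copper-like material values are assumptions:

```python
def surface_heating(q_flux=1e8, length=1e-3, nx=50, t_end=1e-4,
                    k=90.0, rho=8900.0, cp=385.0):
    """Explicit (FTCS) 1-D transient heat conduction in a rod with a
    constant heat flux q_flux (W/m^2) at one end and the far end held
    at 300 K.  Illustrative linear stand-in for the non-linear
    surface-heating arc-cathode coupling; all values are assumptions.
    Returns the temperature profile in K."""
    alpha = k / (rho * cp)                # thermal diffusivity, m^2/s
    dx = length / (nx - 1)
    dt = 0.4 * dx * dx / alpha            # under the FTCS stability limit
    T = [300.0] * nx
    t = 0.0
    while t < t_end:
        Tn = T[:]
        for i in range(1, nx - 1):
            Tn[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
        Tn[0] = Tn[1] + q_flux * dx / k   # heated surface (flux condition)
        Tn[-1] = 300.0                    # far end held cold
        T, t = Tn, t + dt
    return T
```

Even this linear model reproduces the qualitative picture above: heat concentrates near the loaded surface, which is where the hot spot forms once emission makes the real flux temperature-dependent.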
Procedia PDF Downloads 122
1414 Series Solutions to Boundary Value Differential Equations
Authors: Armin Ardekani, Mohammad Akbari
Abstract:
We present a method of generating series solutions to large classes of nonlinear differential equations. The method is well suited to implementation in mathematical software and, unlike the available commercial solvers, can generate solutions to boundary value ODEs and PDEs. Many of the generated solutions converge to closed-form solutions. Our method can also be applied to systems of ODEs or PDEs, providing all the solutions efficiently. As examples, we present results for many difficult differential equations arising in engineering.
Keywords: computational mathematics, differential equations, engineering, series
Procedia PDF Downloads 336
1413 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a finned projectile is validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software package developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann method that uses a proprietary particle-based kinetic solver and an LES turbulence model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However, for compressible flows the method has a Mach number limitation arising from the lattice discretization. Thanks to this flexible particle-based approach, the traditional meshing process is avoided, the discretization stage is strongly accelerated, reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis consists of varying the Mach number upward from M = 0.5, comparing the axial force coefficient, normal force slope coefficient, and pitch moment slope coefficient of the finned projectile obtained by XFlow with experimental data. The slope coefficients are obtained using finite-difference techniques in the linear range of the polar curve. The aim of the analysis is to find the limiting Mach number beyond which the effects of high fluid compressibility (related to the transonic flow regime) cause the XFlow simulations to differ from the experimental results.
This allows the critical Mach number to be identified that limits the validity of XFlow's isothermal formulation, beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.
Keywords: CFD, computational fluid dynamics, drag, finned projectile, Lattice-Boltzmann method, LBM, lift, Mach, pitch
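The finite-difference extraction of a slope coefficient from the linear range of a polar curve can be sketched as follows. The toy polar below (a perfectly linear normal-force curve with an invented slope) is for illustration only:

```python
def slope_coefficient(alphas, coeffs):
    """Central-difference slope of an aerodynamic coefficient with
    respect to the angle of attack, evaluated at the middle of the
    (assumed linear) range -- the way slope coefficients such as
    CN_alpha or Cm_alpha are read off a polar curve."""
    mid = len(alphas) // 2
    return ((coeffs[mid + 1] - coeffs[mid - 1]) /
            (alphas[mid + 1] - alphas[mid - 1]))

# Toy polar: a linear normal-force curve CN = 8.0 * alpha (alpha in rad);
# the slope value 8.0 is invented for the example.
alphas = [0.00, 0.02, 0.04, 0.06, 0.08]
cn = [8.0 * a for a in alphas]
cn_alpha = slope_coefficient(alphas, cn)
```

In practice the same differencing is applied to the simulated and the experimental polars, and the two slopes are compared Mach number by Mach number.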
Procedia PDF Downloads 421
1412 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame
Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi
Abstract:
The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the characteristics of a turbulent CH4 diffusion flame. The open-source Field Operation and Manipulation toolbox (OpenFOAM) has been used as the computational tool, with the laminar flamelet and modified k-ε models as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and the maximum flame temperature. CO2 dilution also causes an increase in the CO mass fraction.
Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame
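One reason CO2 dilution lowers the peak flame temperature is thermal: CO2 has a higher molar heat capacity than the N2 it replaces, so the same heat release produces a smaller temperature rise. A rough sketch using standard handbook cp values (J/mol/K near 300 K; values assumed, and cp grows further with temperature):

```python
# Approximate molar heat capacities near 300 K (handbook values, assumed).
CP = {"O2": 29.4, "N2": 29.1, "CO2": 37.1}

def mixture_cp(fractions):
    """Mole-fraction-weighted molar heat capacity of a gas mixture."""
    return sum(x * CP[sp] for sp, x in fractions.items())

cp_air = mixture_cp({"O2": 0.21, "N2": 0.79})           # ordinary air
cp_co2_diluted = mixture_cp({"O2": 0.21, "CO2": 0.79})  # N2 fully replaced
```

The CO2-diluted oxidizer's higher heat capacity (together with chemical effects such as CO2 participating in the CO/CO2 equilibrium) is consistent with the lower maximum flame temperature and higher CO mass fraction reported above.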
Procedia PDF Downloads 276