Search results for: parallel computations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1364

1154 Experimental Study of the Fiber Dispersion of Pulp Liquid Flow in Channels with Application to Papermaking

Authors: Masaru Sumida

Abstract:

This study explored the feasibility of improving the hydraulic headbox of papermaking machines by studying the flow of wood-pulp suspensions behind a flat plate inserted in parallel and convergent channels. Pulp fiber concentrations in the wake downstream of the plate were investigated by flow visualization and optical measurements. Changes in the time-averaged fiber concentration and its fluctuations along the flow direction were examined. In addition, the control of the flow characteristics in the two channels was investigated. The behaviors of the pulp fibers and the wake flow were found to be strongly related to the flow states in the upstream passages partitioned by the plate. The distribution of the fiber concentration was complex because of the formation of a thin water layer on the plate and the generation of Karman vortices at the trailing edge of the plate. Compared with the flow in the parallel channel, fluctuations in the fiber concentration decreased in the convergent channel. However, at low flow velocities, the convergent channel had a weak effect on equilibrating the time-averaged fiber concentration. This shows that a rectangular trailing edge cannot adequately disperse pulp suspensions; thus, at low flow velocities, a convergent channel is ineffective in ensuring uniform fiber concentration.

Keywords: fiber dispersion, headbox, pulp liquid, wake flow

Procedia PDF Downloads 385
1153 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to problems related to monetary policy, bank regulation, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with recent MPI features. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
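
A minimal sketch of the distribution pattern described above, assuming mpi4py as the MPI binding; the agent state, the economic rule, and all numbers are illustrative placeholders rather than the authors' implementation. The point is that only small aggregates (analogous to the local sales outlets and banks) cross process boundaries.

```python
# Hypothetical sketch: block-partition agents over MPI ranks and exchange only aggregates.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_AGENTS = 1_000_000                               # illustrative population size
local_n = N_AGENTS // size                         # simplified block partition per rank
rng = np.random.default_rng(rank)
wealth = rng.exponential(1.0, local_n)             # placeholder agent state

# One toy time step: agents spend a random fraction of their wealth locally.
spending = wealth * rng.uniform(0.0, 0.1, local_n)
wealth -= spending

# Only an aggregate crosses the process boundary, so communication volume
# stays independent of the number of agents per rank.
total_demand = comm.allreduce(spending.sum(), op=MPI.SUM)
if rank == 0:
    print(f"aggregate demand this step: {total_demand:.2f}")
```

Run with, e.g., `mpiexec -n 4 python abem_sketch.py`.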

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 127
1152 Learning to Translate by Learning to Communicate to an Entailment Classifier

Authors: Szymon Rutkowski, Tomasz Korbak

Abstract:

We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language's 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier's performance in determining entailment between the sentences translated into the classifier's language. The translator's performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality. It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of the translator's policy optimization and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
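
A toy REINFORCE loop illustrating the reward structure described above, not the authors' model: a tabular "translator" policy is updated only through the score returned by a frozen "classifier", which here is a stand-in function rather than a trained entailment model.

```python
# Hypothetical sketch of reward-driven translation: the policy never sees parallel data,
# only the classifier's score on its output.
import numpy as np

rng = np.random.default_rng(0)
n_source, n_target = 5, 5
logits = np.zeros((n_source, n_target))            # tabular policy parameters

def classifier_reward(src, tgt):
    # Stand-in for the pretrained entailment classifier: it "understands" the
    # translation only when tgt matches a hypothetical gold mapping (identity here).
    return 1.0 if tgt == src else 0.0

lr = 0.5
for step in range(2000):
    src = rng.integers(n_source)
    probs = np.exp(logits[src]) / np.exp(logits[src]).sum()   # softmax policy
    tgt = rng.choice(n_target, p=probs)
    reward = classifier_reward(src, tgt)
    grad = -probs                                   # REINFORCE: d log pi / d logits
    grad[tgt] += 1.0
    logits[src] += lr * reward * grad

print("greedy translations:", logits.argmax(axis=1))  # converges to the gold mapping
```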

Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning

Procedia PDF Downloads 127
1151 Reliability Analysis of a Life Support System in a Public Aquarium

Authors: Mehmet Savsar

Abstract:

Complex Life Support Systems (LSS) are used in all large commercial and public aquariums in order to keep the fish alive. The reliabilities of individual equipment, as well as of the complete system, are extremely important and critical since the life and safety of the fish depend on these life support systems. Failure of a critical device or piece of equipment that does not have redundancy results in negative consequences and affects the life support as a whole. In this paper, we have considered a life support system in a large public aquarium at the Kuwait Scientific Center and presented a procedure and analysis to show how the reliability of such systems can be estimated by using appropriate tools and collected data. We have also proposed possible improvements to system reliability. In particular, the addition of parallel components and spare parts is considered, and the numbers of spare parts needed for each component to achieve a required reliability during a specified lead time are calculated. The results show that significant improvements in system reliability can be achieved by operating some LSS components in parallel and having certain numbers of spares available in the spare parts inventories. The procedures and the results presented in this paper are expected to be useful for aquarium engineers and maintenance managers dealing with LSS.
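
A minimal sketch of the spare-parts sizing idea, assuming failures follow a Poisson process; the failure rate, lead time, and reliability target below are illustrative, not the aquarium's data.

```python
# Smallest spare stock s such that P(demand during lead time <= s) meets the target.
from math import exp, factorial

def spares_needed(failure_rate_per_hour, lead_time_hours, required_reliability):
    lam = failure_rate_per_hour * lead_time_hours      # expected failures during lead time
    s, cumulative = 0, exp(-lam)                       # P(D = 0)
    while cumulative < required_reliability:
        s += 1
        cumulative += exp(-lam) * lam**s / factorial(s)   # add P(D = s)
    return s

# Hypothetical LSS pump: one failure per 4000 h, 500 h resupply lead time, 95% target.
print(spares_needed(1 / 4000, 500, 0.95))
```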

Keywords: life support systems, aquariums, reliability, failures, availability, spare parts

Procedia PDF Downloads 280
1150 Unconfined Laminar Nanofluid Flow and Heat Transfer around a Square Cylinder with an Angle of Incidence

Authors: Rafik Bouakkaz

Abstract:

A finite-volume method simulation is used to investigate the two-dimensional unsteady flow of nanofluids and the heat transfer characteristics past a square cylinder inclined with respect to the main flow in the laminar regime. The computations are carried out for nanoparticle volume fractions in the range 0 ≤ φ ≤ 5% and inclination angles in the range 0° ≤ δ ≤ 45° at a Reynolds number of 100. The variation of streamline and isotherm patterns is presented for the above range of conditions. It is also observed that the addition of nanoparticles enhances the heat transfer. Hence, the local Nusselt number is found to increase with increasing nanoparticle concentration for a fixed value of the inclination angle.

Keywords: copper nanoparticles, heat transfer, square cylinder, inclination angle

Procedia PDF Downloads 190
1149 Supplemental Visco-Friction Damping for Dynamical Structural Systems

Authors: Sharad Singh, Ajay Kumar Sinha

Abstract:

Coupled dampers, such as viscoelastic-frictional dampers, are a newer technique for supplemental damping. In this paper, innovative visco-frictional damping models are presented and investigated. The paper attempts to couple frictional and fluid viscous dampers into a single unit of supplemental damper. Visco-frictional damping models are developed by series and parallel coupling of frictional and fluid viscous dampers using the Maxwell and Kelvin-Voigt models. Time-history analysis has been performed using numerical simulation on an SDOF system with varying fundamental periods, subjected to a set of 12 ground motions. The simulation was performed using the direct time integration method, with MATLAB used as the programming tool. The response behavior has been analyzed for varying time periods and added damping. This paper compares the response reduction behavior of the two modes of coupling and highlights the performance efficiency of the suggested damping models. It also presents a mathematical modeling approach to visco-frictional dampers and suggests the suitable mode of coupling between the two sub-units.
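
A minimal time-stepping sketch of the parallel (Kelvin-Voigt type) coupling on an SDOF system; the parameters and ground motion are illustrative, and the simple semi-implicit Euler scheme is a stand-in for the direct time integration used in the paper.

```python
# SDOF with a supplemental visco-frictional damper: viscous and friction forces in parallel.
import numpy as np

m, k, c = 1000.0, 4.0e5, 2.0e3        # mass (kg), stiffness (N/m), inherent damping (N s/m)
c_d, f_slip = 5.0e3, 800.0            # added viscous coefficient, friction slip force

dt, n = 0.001, 20000
ag = 2.0 * np.sin(2 * np.pi * 2.0 * np.arange(n) * dt)   # hypothetical ground acceleration

x, v, peak = 0.0, 0.0, 0.0
for i in range(n):
    f_damper = c_d * v + f_slip * np.sign(v)      # parallel coupling: both forces act on one velocity
    a = (-m * ag[i] - c * v - k * x - f_damper) / m
    v += a * dt                                   # semi-implicit Euler step
    x += v * dt
    peak = max(peak, abs(x))

print(f"peak displacement: {peak * 1000:.2f} mm")
# A series (Maxwell-type) coupling would add an internal state for the deformation
# shared between the viscous and frictional sub-units.
```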

Keywords: hysteretic damping, Kelvin model, Maxwell model, parallel coupling, series coupling, viscous damping

Procedia PDF Downloads 157
1148 "Prezafe" to "Parizafe": Parallel Development of Izafe in Germanic

Authors: Yexin Qu

Abstract:

Izafe is a construction typically found in Iranian languages, attested already in Old Avestan and Old Persian. The narrow sense of izafe can be described as the linear structure [NP pt Modifier], with pt as an uninflectable particle or clitic. The history of the Iranian izafe has the following stages: Stage I: verbless nominal relative clauses; Stage II: verbless nominal relative clauses with case attraction; and Stage III: narrow-sense izafe. Previous works suggest that embedded relative clauses and correlatives in other Indo-European languages might be relevant for the source of the izafe construction. Stage I, the precursor of narrow-sense izafe, or so-called "prezafe", is not found in branches other than Iranian. Comparable cases have been demonstrated in Vedic, Greek, and some rare cases in Latin. This suggests "prezafe" may date back very early in Indo-European. Izafe-like structures are not attested in branches such as Balto-Slavic and Germanic, but Balto-Slavic definite adjectives and Germanic weak adjectives can be compared to the verbless nominal relative clauses and analyzed as developments of verbless relative clauses parallel to izafe in Indo-Iranian; these are called "parizafe" in this paper. Here, the verbless relative clause is compared with Germanic weak adjectives. The Germanic languages used n-stem derivation to form determined derivatives, which are semantically equivalent to the appositive relative clause and eventually became weak adjectives. To be more precise, starting from an adjective "X", the Germanic weak adjective structure is formed as [det X-n], literally "the X", with the meaning "the X one", which can be shown to be semantically equivalent to "the one which is X". This paper suggests that, syntactically, the Germanic verbless relative clauses went through CP-to-DP relabeling as in Iranian, based on the following observations: (1) Germanic relative pronouns (e.g., Gothic saei, Old English se) and determiners (e.g., Gothic sa, Old English se) are both from the *so/to pronominal roots; (2) Germanic weak adjectives and the izafe structure are semantically equivalent. This may indicate that Germanic also went through "prezafe" Stages I and II. In conclusion, "prezafe" in Stage I may have been a phenomenon of the proto-language, Stage II was the result of independent parallel developments, and then each branch had its own strategy.

Keywords: izafe, relative clause, Germanic, Indo-European

Procedia PDF Downloads 67
1147 Undrained Bearing Capacity of Circular Foundations on two Layered Clays

Authors: S. Benmebarek, S. Benmoussa, N. Benmebarek

Abstract:

Natural soils are often deposited in layers. Estimating the bearing capacity of the soil using conventional bearing capacity theory based on the properties of the upper layer introduces significant inaccuracies if the thickness of the top layer is comparable to the width of the foundation placed on the soil surface. In this paper, numerical computations using the FLAC code are reported to evaluate the effect of two clay layers on the bearing capacity beneath a rigid rough circular footing subjected to axial static load. The results of the parametric study are used to illustrate the sensitivity of the bearing capacity, the shape factor, and the failure mechanisms to the layer strengths and layer thickness.

Keywords: numerical modeling, circular footings, layered clays, bearing capacity, failure

Procedia PDF Downloads 495
1146 Analytical Soliton Solutions of the Fractional Jaulent-Miodek System

Authors: Sajeda Elbashabsheh, Kamel Al-Khaled

Abstract:

This paper applies a modified Laplace Adomian decomposition method to solve the time-fractional Jaulent-Miodek system. The method produces convergent series solutions with easily computable components. The Caputo fractional derivative is considered. The effectiveness and applicability of the method are demonstrated by comparing its results with those of prior studies. Results are presented in tables and figures. These solutions may be important and significant for the explanation of some practical physical phenomena. All computations and figures in the work are done using MATHEMATICA. The numerical results demonstrate that the current method is effective, reliable, and simple to implement for nonlinear fractional partial differential equations.
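
A small worked example of the Caputo derivative used above (not taken from the paper): for f(t) = t^n and order 0 < alpha < 1, the Caputo derivative is Gamma(n+1)/Gamma(n-alpha+1) * t^(n-alpha).

```python
# Caputo fractional derivative of a power function, evaluated numerically.
from math import gamma

def caputo_power(n, alpha, t):
    """Caputo derivative of t**n (integer n >= 1) of order 0 < alpha < 1."""
    return gamma(n + 1) / gamma(n - alpha + 1) * t ** (n - alpha)

print(caputo_power(2, 0.5, 1.0))   # D^0.5 of t^2 at t = 1  ->  2 / Gamma(2.5) ~ 1.5045
```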

Keywords: approximate solutions, Jaulent-Miodek system, Adomian decomposition method, solitons

Procedia PDF Downloads 43
1145 Lifting Wavelet Transform and Singular Values Decomposition for Secure Image Watermarking

Authors: Siraa Ben Ftima, Mourad Talbi, Tahar Ezzedine

Abstract:

In this paper, we present a technique for the secure watermarking of grayscale and color images. The technique consists in applying the Singular Value Decomposition (SVD) in the LWT (Lifting Wavelet Transform) domain in order to insert the watermark image (grayscale) into the host image (grayscale or color). It also uses a signature in the embedding and extraction steps. The technique is applied to a number of grayscale and color images. The performance of this technique is demonstrated by the PSNR (Peak Signal-to-Noise Ratio), the MSE (Mean Square Error), and the SSIM (Structural Similarity) computations.
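
A minimal sketch of SVD embedding in a wavelet subband, not the authors' exact scheme: pywt's dwt2 is used here as a stand-in for the lifting implementation, the images are random arrays, and the signature step is omitted.

```python
# Embed the watermark's singular values into the LL subband of the host image.
import numpy as np
import pywt

rng = np.random.default_rng(1)
host = rng.random((256, 256))            # stand-in for the grayscale host image
watermark = rng.random((128, 128))       # stand-in for the grayscale watermark
alpha = 0.05                             # embedding strength

LL, (LH, HL, HH) = pywt.dwt2(host, 'haar')
U, S, Vt = np.linalg.svd(LL, full_matrices=False)
_, Sw, _ = np.linalg.svd(watermark, full_matrices=False)

LL_marked = (U * (S + alpha * Sw)) @ Vt              # modified singular values
watermarked = pywt.idwt2((LL_marked, (LH, HL, HH)), 'haar')

mse = np.mean((host - watermarked) ** 2)
psnr = 10 * np.log10(host.max() ** 2 / mse)
print(f"PSNR of the watermarked image: {psnr:.1f} dB")
```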

Keywords: lifting wavelet transform (LWT), sub-space vectorial decomposition, secure, image watermarking, watermark

Procedia PDF Downloads 275
1144 Analyzing the Effectiveness of a Bank of Parallel Resistors, as a Burden Compensation Technique for Current Transformer's Burden, Using LabVIEW™ Data Acquisition Tool

Authors: Dilson Subedi

Abstract:

Current transformers are an integral part of the power system because they provide a proportional, safe amount of current for protection and measurement applications. However, due to the upgradation of electromechanical relays to numerical relays and of electromechanical energy meters to digital meters, the connected burden, which defines some of the CT characteristics, has been drastically reduced. This has led to the system experiencing high currents, damaging the connected relays and meters. Since protection and metering equipment is designed to withstand only a certain amount of current with respect to time, these high currents pose a risk to man and equipment. Therefore, during such instances, the CT saturation characteristics have a huge influence on the safety of both man and equipment and on the reliability of the protection and metering system. This paper shows the effectiveness of a bank of parallel-connected resistors, as a burden compensation technique, in compensating the burden of under-burdened CTs. The response of the CT in the case of failure of one or more resistors at different levels of overcurrent will be captured using LabVIEW™ data acquisition hardware (DAQ). The analysis is done on the real-time data gathered using LabVIEW™. The variation of current transformer saturation characteristics with changes in burden will be discussed.
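
A small sketch of the burden arithmetic behind the compensation idea: the equivalent resistance of the parallel bank, and how it rises when one resistor fails open; the values are illustrative only.

```python
# Equivalent burden of a bank of parallel resistors.
def equivalent_burden(resistances_ohm):
    return 1.0 / sum(1.0 / r for r in resistances_ohm)

bank = [10.0, 10.0, 10.0, 10.0]            # hypothetical 4 x 10-ohm bank
print(equivalent_burden(bank))             # 2.5 ohm with all resistors healthy
print(equivalent_burden(bank[:-1]))        # ~3.33 ohm after one resistor fails open
```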

Keywords: accuracy limiting factor, burden, burden compensation, current transformer

Procedia PDF Downloads 245
1143 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design, and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important key factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem, as sketched below. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform developed on the basis of a linear delta robot in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
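
A highly simplified sketch of that synthesis loop, not the authors' formulation: a small genetic algorithm evolves three dimensional parameters of a vertical-rail linear delta and penalizes every point of a prescribed cylindrical workspace cloud that fails a basic inverse-kinematics reachability test. The geometry, bounds, and weights are assumptions.

```python
# GA over (arm length, joint offset, rail travel) with a workspace-coverage penalty.
import numpy as np

rng = np.random.default_rng(2)
PHI = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])    # angular positions of the three rails

def reachable(point, L, r_off, travel):
    """Simplified inverse kinematics feasibility for a vertical-rail linear delta."""
    x, y, z = point
    for phi in PHI:
        dx = x - r_off * np.cos(phi)                   # horizontal offset to the rail axis
        dy = y - r_off * np.sin(phi)
        disc = L**2 - (dx**2 + dy**2)
        if disc < 0:
            return False
        q = z + np.sqrt(disc)                          # required carriage height
        if not (0.0 <= q <= travel):
            return False
    return True

# Prescribed workspace: cylinder of radius 0.15 m between z = 0.05 m and z = 0.25 m.
pts = np.array([(r * np.cos(a), r * np.sin(a), z)
                for r in np.linspace(0, 0.15, 5)
                for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)
                for z in np.linspace(0.05, 0.25, 5)])

def fitness(ind):
    L, r_off, travel = ind
    misses = sum(not reachable(p, L, r_off, travel) for p in pts)
    return L + 0.5 * travel + 10.0 * misses            # compact robot, full coverage enforced

pop = rng.uniform([0.1, 0.02, 0.2], [0.6, 0.2, 0.8], size=(30, 3))
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:15]]             # truncation selection
    children = (parents[rng.integers(15, size=15)] +
                parents[rng.integers(15, size=15)]) / 2   # arithmetic crossover
    children += rng.normal(0.0, 0.01, children.shape)     # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("arm length, joint offset, rail travel (m):", np.round(best, 3))
```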

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 189
1142 Parallel Gripper Modelling and Design Optimization Using Multi-Objective Grey Wolf Optimizer

Authors: Golak Bihari Mahanta, Bibhuti Bhusan Biswal, B. B. V. L. Deepak, Amruta Rout, Gunji Balamurali

Abstract:

Robots are widely used in the manufacturing industry for rapid production with higher accuracy and precision. With the help of End-of-Arm Tools (EOATs), robots interact with the environment. Robotic grippers are EOATs that grasp objects in an automation system to improve efficiency. As the robotic gripper directly influences the quality of the product, due to the contact between the gripper surface and the object to be grasped, it is necessary to design and optimize the gripper mechanism configuration. In this study, geometric and kinematic modeling of the parallel gripper is proposed. The grey wolf optimizer algorithm is introduced for solving the proposed multi-objective gripper optimization problem. Two objective functions developed from the geometric and kinematic modeling, along with several nonlinear constraints of the proposed gripper mechanism, are used to optimize the design variables of the system. Finally, the proposed methodology is compared with previously proposed methods such as the Teaching Learning Based Optimization (TLBO) algorithm, NSGA-II, and MODE, and it is seen that the proposed method is more efficient than the earlier methodologies.
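
A compact single-objective Grey Wolf Optimizer sketch showing the update rule the paper builds on; the study uses a multi-objective variant, and the objective below is a toy placeholder rather than the gripper force/displacement functions.

```python
# Standard GWO: the pack is pulled towards the three best wolves (alpha, beta, delta).
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    return np.sum((x - 0.5) ** 2)          # placeholder for a gripper design objective

dim, n_wolves, n_iter, lb, ub = 5, 20, 200, 0.0, 1.0
wolves = rng.uniform(lb, ub, (n_wolves, dim))

for t in range(n_iter):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
    a = 2.0 * (1 - t / n_iter)                          # decreases linearly from 2 to 0
    for i in range(n_wolves):
        new_pos = np.zeros(dim)
        for leader in (alpha, beta, delta):
            A = 2 * a * rng.random(dim) - a
            C = 2 * rng.random(dim)
            D = np.abs(C * leader - wolves[i])
            new_pos += (leader - A * D) / 3.0           # average of the three pulls
        wolves[i] = np.clip(new_pos, lb, ub)

print("best design variables:", np.round(alpha, 3), " objective:", objective(alpha))
```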

Keywords: gripper optimization, metaheuristics, teaching learning based algorithm, multi-objective optimization, optimal gripper design

Procedia PDF Downloads 188
1141 Numerical Solution of 1-D Shallow Water Equations at Junction for Sub-Critical and Super-Critical Flow

Authors: Mohamed Elshobaki, Alessandro Valiani, Valerio Caleffi

Abstract:

In this paper, we solve the 1-D shallow water equations for sub-critical and super-critical water flow at a junction. Junction flow has been studied for the last 50 years from the physical-hydraulic point of view, but its numerical computation needs more attention. For numerical simulation, we need to establish an inner boundary condition at the junction to avoid oscillations which arise from the wave interactions at the junction. Indeed, we introduce a new boundary condition at the junction based on mass conservation, total head, and the admissible wave relations between the flow parameters in the three branches to predict the water depths and discharges at the junction. These boundary conditions are valid for sub-critical flow and super-critical flow.
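
A simplified worked sketch of the sub-critical case of the junction condition, assuming rectangular channels, equal bed elevations, and illustrative numbers; the paper's full formulation also uses the admissible wave relations, which are omitted here.

```python
# Junction coupling: impose mass conservation and equal total head to get the downstream depth.
import numpy as np
from scipy.optimize import fsolve

g = 9.81
b1 = b2 = b3 = 5.0                 # channel widths (m)
h1, Q1 = 2.0, 10.0                 # upstream branch 1: depth (m), discharge (m^3/s)
h2, Q2 = 1.8, 8.0                  # upstream branch 2

Q3 = Q1 + Q2                       # mass conservation at the junction

def total_head(h, Q, b):
    v = Q / (b * h)
    return h + v**2 / (2 * g)

def residual(h3):
    return total_head(h3, Q3, b3) - total_head(h1, Q1, b1)

h3 = fsolve(residual, x0=h1)[0]    # sub-critical root, seeded from the upstream depth
froude = Q3 / (b3 * h3 * np.sqrt(g * h3))
print(f"junction depth: {h3:.3f} m, downstream Froude number: {froude:.2f}")
```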

Keywords: numerical simulation, junction flow, sub-critical flow, super-critical flow

Procedia PDF Downloads 509
1140 Potential Distribution and Electric Field Analysis around a Polluted Outdoor Polymeric Insulator with Broken Sheds

Authors: Adel Kara, Abdelhafid Bayadi, Hocine Terrab

Abstract:

This paper presents a study of the electric field distribution along 72 kV polymeric outdoor insulators with broken sheds. Different cases of damaged insulators are modeled, for both clean and polluted conditions, by 3D finite element analysis using the software package COMSOL Multiphysics 4.3b. The obtained results for the potential and electric field distribution around the insulators from the 3D simulation show that finite element computation is a useful tool for studying the electric field distribution of insulation.

Keywords: electric field distributions, insulator, broken sheds, potential distributions

Procedia PDF Downloads 511
1139 Graphics Processing Unit-Based Parallel Processing for Inverse Computation of Full-Field Material Properties Based on Quantitative Laser Ultrasound Visualization

Authors: Sheng-Po Tseng, Che-Hua Yang

Abstract:

Motivation and Objective: Ultrasonic guided waves have become an important tool for the nondestructive evaluation of structures and components. Guided waves are used for the purpose of identifying defects or evaluating material properties in a nondestructive way. When guided waves are applied for evaluating material properties, instead of knowing the properties directly, preliminary signals such as time-domain signals or frequency-domain spectra are first obtained. With the measured ultrasound data, inversion calculations can then be employed to obtain the desired mechanical properties. Methods: This research develops a high-speed inversion calculation technique for obtaining full-field mechanical properties from the quantitative laser ultrasound visualization system (QLUVS). The QLUVS employs a mirror-controlled scanning pulsed laser to generate guided acoustic waves traveling in a two-dimensional target. Guided waves are detected with a piezoelectric transducer located at a fixed location. With gyro-scanning of the generation source, the QLUVS has the advantage of fast, full-field, and quantitative inspection. Results and Discussions: This research introduces two important tools to improve the computation efficiency. Firstly, graphics processing units (GPUs) with a large number of cores are introduced. Furthermore, combining the CPU and GPU cores, a parallel processing scheme is developed for the inversion of full-field mechanical properties based on the QLUVS data. The newly developed inversion scheme is applied to investigate the computation efficiency for single-layered and double-layered plate-like samples. The computation is shown to be 80 times faster than the unparallelized scheme. Conclusions: This research demonstrates a high-speed inversion technique for the characterization of full-field material properties based on the quantitative laser ultrasound visualization system. Significant computation efficiency is shown, although the limit has not been reached yet; further improvement can be achieved by improving the parallel computation. Utilizing this full-field mechanical property inspection technology, full-field mechanical properties can be obtained by nondestructive, high-speed, and high-precision measurements, in qualitative and quantitative terms. The developed high-speed computation scheme is ready for applications where full-field mechanical properties are needed in a nondestructive and nearly real-time way.
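
A minimal CPU-only sketch of the per-point parallelism that makes this inversion scale (the paper combines CPU and GPU cores, which is not reproduced here); the inversion function is a toy placeholder.

```python
# Each scan point is inverted independently, so a process pool parallelizes the full-field map.
import numpy as np
from multiprocessing import Pool

def invert_point(measured_velocity):
    """Toy stand-in for the real inversion of one scan point."""
    rho = 2700.0                                  # assumed density, kg/m^3
    return rho * measured_velocity**2             # placeholder property estimate

if __name__ == "__main__":
    field = np.random.default_rng(4).uniform(3000, 3200, size=128 * 128)  # fake scan grid
    with Pool() as pool:
        moduli = pool.map(invert_point, field, chunksize=256)
    print(f"full-field map recovered: {len(moduli)} points")
```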

Keywords: guided waves, material characterization, nondestructive evaluation, parallel processing

Procedia PDF Downloads 201
1138 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth from a transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies with application to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of supporting efficient data structures when modeling with deterministic and stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to explore up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improving the performance of these models by guaranteeing efficient memory accesses or by considering the dynamic evolution of the memory space (grids, trees, ...) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up improvement provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new parallel tumor growth model is tested using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, or the growth inhibition induced by chemotaxis, as well as the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
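
A minimal sketch of the parallel update pattern only, not the Poleszczuk and Enderling model: the grid is split into row strips, each worker computes the next state of its strip from the current global grid, and all workers synchronize before the strips are reassembled; the growth rule is a toy one.

```python
# Strip-parallel cellular automaton step with a barrier at the end of each generation.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def grow_strip(args):
    grid, lo, hi = args
    padded = np.pad(grid, 1)
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:])
    # Toy rule: an empty site becomes tumorous if any von Neumann neighbour is tumorous.
    new = np.where((grid == 0) & (neighbours > 0), 1, grid)
    return new[lo:hi]

if __name__ == "__main__":
    n, workers, steps = 256, 4, 50
    grid = np.zeros((n, n), dtype=np.int8)
    grid[n // 2, n // 2] = 1                        # single transformed cell
    bounds = [(i * n // workers, (i + 1) * n // workers) for i in range(workers)]

    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(steps):                      # implicit barrier: map waits for all strips
            strips = pool.map(grow_strip, [(grid, lo, hi) for lo, hi in bounds])
            grid = np.vstack(list(strips))
    print("tumor cells after", steps, "generations:", int(grid.sum()))
```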

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 243
1137 A Development of Holonomic Mobile Robot Using Fuzzy Multi-Layered Controller

Authors: Seungwoo Kim, Yeongcheol Cho

Abstract:

In this paper, a holonomic mobile robot is designed with omnidirectional wheels, and an adaptive fuzzy controller is presented for precise trajectory tracking. An adaptive controller based on a fuzzy multi-layered algorithm is used to handle the large parametric uncertainty of the motor-controlled dynamic system of a 3-wheel omnidirectional mobile robot. The system parameters, such as the tracking force, are highly time-varying due to the kinematic structure of omnidirectional wheels. The fuzzy adaptive control method is able to solve the problems of classical adaptive controllers and conventional fuzzy adaptive controllers. The basic idea of the new adaptive control scheme is that an adaptive controller can be constructed as a parallel combination of robust controllers. This new adaptive controller uses a fuzzy multi-layered architecture which has several independent fuzzy controllers in parallel, each with a different robust stability area. Out of the several independent fuzzy controllers, the most suitable one is selected by a system identifier which observes variations in the controlled system parameters. This paper proposes a design procedure which can be carried out mathematically and systematically from the model of the controlled system. Finally, the good performance of the holonomic mobile robot is confirmed through live tests of the tracking control task.

Keywords: fuzzy adaptive control, fuzzy multi-layered controller, holonomic mobile robot, omnidirectional wheels, robustness and stability

Procedia PDF Downloads 358
1136 Identified Transcription Factors and Gene Regulation in Scent Biosynthesis in Ophrys Orchids

Authors: Chengwei Wang, Shuqing Xu, Philipp M. Schlüter

Abstract:

The genus Ophrys is remarkable for its mimicry, with the flower lip closely resembling pollinator females in a species-specific manner. Therefore, floral traits associated with pollinator attraction, especially scent, are suitable models for investigating the molecular basis of adaptation, speciation, and evolution. Within the two Ophrys species groups, O. sphegodes (S) and O. fusca (F), pollinator shifts among the same insect species have taken place. Preliminary data suggest that they involve a comparable hydrocarbon profile in their scent, which is mainly composed of alkanes and alkenes. Genes encoding stearoyl-acyl carrier protein desaturases (SAD) involved in alkene biosynthesis have been identified in the S group. This study aims to investigate the control and parallel evolution of ecologically significant alkene production in Ophrys. Owing to the central role those SAD genes play in determining the positioning of the alkene double bonds, a detailed understanding of their functional mechanism and of regulatory aspects is of utmost importance. We have identified 5 transcription factors potentially related to SAD expression in O. sphegodes, which belong to the MYB, GTE, WRKY, and MADS families. Ultimately, our results will contribute to understanding the genes important in the regulatory control of floral scent synthesis.

Keywords: floral traits, transcription factors, biosynthesis, parallel evolution

Procedia PDF Downloads 100
1135 A Low Cost Non-Destructive Grain Moisture Embedded System for Food Safety and Quality

Authors: Ritula Thakur, Babankumar S. Bansod, Puneet Mehta, S. Chatterji

Abstract:

Moisture plays an important role in the storage, harvesting, and processing of food grains and related agricultural products. It is an important characteristic of most agricultural products for the maintenance of quality. Accurate knowledge of the moisture content can be of significant value in maintaining quality and preventing contamination of cereal grains. The present work reports the design and development of a microcontroller-based, low-cost, non-destructive moisture meter, which uses a complex impedance measurement method for the moisture measurement of wheat using a parallel-plate capacitor arrangement. Moisture can conveniently be sensed by measuring the complex impedance using a small parallel-plate capacitor sensor filled with the kernels in between the two plates, exciting the sensor at 30 kHz and 100 kHz. The effects of density and temperature variations were compensated by providing suitable compensation in the developed algorithm. The results were compared with the standard dry oven technique, and the developed method was found to be highly accurate, with less than 1% error. The developed moisture meter is a low-cost, highly accurate, non-destructive method for determining the moisture of grains utilizing the fast computing capabilities of a microcontroller.
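
A small worked sketch of the complex impedance seen by the meter, assuming illustrative permittivity values for the grain-filled parallel-plate sensor rather than calibration data from the paper.

```python
# Complex impedance of a lossy parallel-plate sensor at the two excitation frequencies.
import numpy as np

eps0 = 8.854e-12
A, d = 0.01, 0.02                   # plate area (m^2) and gap (m) - hypothetical sensor
eps_real, eps_imag = 4.5, 0.6       # assumed moisture-dependent permittivity of wheat

for f in (30e3, 100e3):
    w = 2 * np.pi * f
    C = eps0 * eps_real * A / d     # capacitance of the filled sensor
    G = w * eps0 * eps_imag * A / d # dielectric-loss conductance
    Z = 1.0 / (G + 1j * w * C)      # complex impedance
    print(f"{f/1e3:.0f} kHz: |Z| = {abs(Z)/1e6:.3f} Mohm, phase = {np.degrees(np.angle(Z)):.1f} deg")
```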

Keywords: complex impedance, moisture content, electrical properties, safety of food

Procedia PDF Downloads 462
1134 The Simulation and Experimental Investigation to Study the Strain Distribution Pattern during the Closed Die Forging Process

Authors: D. B. Gohil

Abstract:

Closed die forging is a very complex process, and the measurement of actual forces for the real material is difficult and time consuming. Hence, the modelling technique takes advantage of carrying out the experimentation with a proper model material, which needs lower forces and relatively low temperature. The results of experiments on the model material may then be correlated with the actual material by using the theory of similarity. There are several methods available to resolve the complexity involved in the closed die forging process. The Finite Element Method (FEM) and the Finite Difference Method (FDM) are relatively difficult compared to the slab method. The slab method is very popular and very widely used by people working on the shop floor because it is relatively easy to apply and reasonably accurate for most of the common forging load computations.

Keywords: experimentation, forging, process modeling, strain distribution

Procedia PDF Downloads 200
1133 Effect of Segregation Pattern of Mn, Si, and C on Through-Thickness Microstructure and Properties of Hot Rolled Steel

Authors: Waleed M. Al-Othman, Hamid Bayati, Abdullah Al-Shahrani, Haitham Al-Jabr

Abstract:

Pearlite bands commonly form parallel to the surface of hot rolled steel and have a significant influence on the properties of the steel. This study investigated the correlation between the segregation pattern of Mn, Si, and C and the formation of pearlite bands in a hot rolled Gr 60 steel plate. The microstructural study indicated the formation of a distinct thick band at the centerline of the plate, with a number of parallel bands through the thickness of the steel plate. The thickness, frequency, and continuity of the bands are reduced from the mid-thickness toward the external surface of the steel plate. Analysis showed a noticeable increase of the C, Si, and Mn levels within the bands. Such alloying segregation takes place during metal solidification. EDS analysis verified the presence of particles rich in Ti, Nb, Mn, C, and N within the bands. Texture analysis by Electron Backscatter Diffraction (EBSD) indicated that the grain size/misorientation can change noticeably within the bands. The effect of banding on the through-thickness properties of the steel was examined by carrying out microhardness, toughness, and tensile tests. The results suggest the Mn and C contents change in a sinusoidal pattern through the thickness of the hot rolled plate, and pearlite bands are formed at the peaks of this sinusoidal segregation pattern. Changes in grain size/misorientation, the formation of highly alloyed particles, and pearlite within these bands facilitate crack formation along the boundaries of these bands.

Keywords: pearlite band, alloying segregation, hot rolling, Ti, Nb, N, C

Procedia PDF Downloads 136
1132 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations

Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle

Abstract:

This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by the recent evolution of naval missions, threats, and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness is enabled by the combination of a composite structure, the exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: it has been used in the past by different navies, for example, by the US Navy with the USS Pegasus. Nevertheless, the recent evolution of science and technology on the one hand, and the emergence of new missions, threats, and operation theatres on the other hand, put forward its concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges, and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, swarm drones, or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research comprised a complete study of the ship together with several operational performance computations in order to justify the relevance of using ships like the Sea Striker in naval surface operations. For the selected scenarios, the conception process enabled the performance, namely a "Measure of Efficiency" in the NATO framework, to be evaluated for two different models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact, and fitted with a complete set of sensors, the Sea Striker is a new player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance to be reached at a reduced cost.

Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal

Procedia PDF Downloads 117
1131 Unified Gas-Kinetic Scheme for Gas-Particle Flow in Shock-Induced Fluidization of Particles Bed

Authors: Zhao Wang, Hong Yan

Abstract:

In this paper, a unified gas-kinetic scheme (UGKS) for gas-particle flow is constructed. UGKS is a direct modeling method for both continuum and rarefied flow computations. The dynamics of the particles and the gas are described as rarefied and continuum flow, respectively. Therefore, we use the Bhatnagar-Gross-Krook (BGK) equation for the particle distribution function. For the gas phase, the gas kinetic scheme for the Navier-Stokes equations is solved. The momentum transfer between gas and particles is achieved by an acceleration term added to the BGK equation. The new scheme is tested on a 2 cm thick dense bed composed of glass particles 1.5 mm in diameter, and reasonable agreement is achieved.

Keywords: gas-particle flow, unified gas-kinetic scheme, momentum transfer, shock-induced fluidization

Procedia PDF Downloads 259
1130 Numerical Simulation of Plasma Actuator Using OpenFOAM

Authors: H. Yazdani, K. Ghorbanian

Abstract:

This paper deals with the modeling and simulation of the plasma actuator with OpenFOAM. The plasma actuator is one of the newest devices in flow control techniques, which can delay separation by inducing external momentum into the boundary layer of the flow. The effects of the plasma actuator on the external flow are incorporated into the Navier-Stokes computations as a body force vector which is obtained as the product of the net charge density and the electric field. In order to compute this body force vector, the model solves two equations: one for the electric field due to the applied AC voltage at the electrodes and the other for the charge density representing the ionized air. The simulation results are compared with experimental and typical values, which confirms the validity of the modeling.

Keywords: active flow control, flow-field, OpenFOAM, plasma actuator

Procedia PDF Downloads 305
1129 The Mediating Role of Store Personality in the Relationship Between Self-Congruity and Manifestations of Loyalty

Authors: María de los Ángeles Crespo López, Carmen García García

Abstract:

The highly competitive nature of today's globalised marketplace requires that brands and stores develop effective commercial strategies to ensure their economic survival. Maintaining the loyalty of existing customers constitutes one key strategy that yields the best results. Although the relationship between consumers' self-congruity and their manifestations of loyalty towards a store has been investigated, the role of store personality in this relationship remains unclear. In this study, multiple parallel mediation analysis was used to examine the effect of Store Personality on the relationship between Self-Congruity of consumers and their Manifestations of Loyalty. For this purpose, 457 Spanish consumers of the Fnac store completed three self-report questionnaires assessing Store Personality, Self-Congruity, and Store Loyalty. The data were analyzed using the SPSS macro PROCESS. The results revealed that three dimensions of Store Personality, namely Exciting, Close and Competent Store, positively and significantly mediated the relationship between Self-Congruity and Manifestations of Loyalty. The indirect effect of Competent Store was the greatest. This means that a consumer with higher levels of Self-Congruity with the store will exhibit more Manifestations of Loyalty when the store is perceived as Exciting, Close or Competent. These findings suggest that more attention should be paid to the perceived personality of stores for the development of effective marketing strategies to maintain or increase consumers' manifestations of loyalty towards stores.
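
A compact sketch of what a multiple parallel mediation analysis estimates; the study used the SPSS PROCESS macro, so the Python below only mirrors the logic on simulated data, with three mediators standing in for the Exciting, Close, and Competent dimensions.

```python
# Parallel mediation: indirect effect of X on Y through each mediator = a_j * b_j,
# with percentile bootstrap confidence intervals.
import numpy as np

rng = np.random.default_rng(5)
n = 457
X = rng.normal(size=n)                                    # self-congruity (simulated)
M = np.column_stack([0.5 * X + rng.normal(size=n),        # three parallel mediators
                     0.4 * X + rng.normal(size=n),
                     0.6 * X + rng.normal(size=n)])
Y = 0.2 * X + M @ np.array([0.3, 0.2, 0.4]) + rng.normal(size=n)   # loyalty (simulated)

def ols(y, Xmat):
    Xd = np.column_stack([np.ones(len(y)), Xmat])
    return np.linalg.lstsq(Xd, y, rcond=None)[0][1:]      # slopes, intercept dropped

def indirect_effects(X, M, Y):
    a = np.array([ols(M[:, j], X)[0] for j in range(M.shape[1])])   # X -> Mj paths
    b = ols(Y, np.column_stack([X, M]))[1:]                         # Mj -> Y paths, X controlled
    return a * b

point = indirect_effects(X, M, Y)
boot = np.array([indirect_effects(X[i], M[i], Y[i])
                 for i in rng.integers(0, n, size=(2000, n))])      # bootstrap resamples
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for j, name in enumerate(["Exciting", "Close", "Competent"]):
    print(f"{name}: indirect = {point[j]:.3f}, 95% CI [{lo[j]:.3f}, {hi[j]:.3f}]")
```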

Keywords: multiple parallel mediation, PROCESS, self-congruence, store loyalty, store personality

Procedia PDF Downloads 157
1128 Aperiodic and Asymmetric Fibonacci Quasicrystals: Next Big Future in Quantum Computation

Authors: Jatindranath Gain, Madhumita DasSarkar, Sudakshina Kundu

Abstract:

Quantum information is stored in states with multiple quasiparticles, which have a topological degeneracy. Topological quantum computation is concerned with two-dimensional many-body systems that support such excitations. Anyons are the elementary building blocks of quantum computations. Anyons tunneling in a double-layer system can transition to an exotic non-Abelian state and produce Fibonacci anyons, which are powerful enough for universal topological quantum computation (TQC). Here the exotic behavior of the Fibonacci superlattice is studied by using analytical transfer matrix methods, and hence Fibonacci anyons. Such Fibonacci anyons could be used to build a quantum computer, an emerging and exciting field today in nanophotonics and quantum computation.
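
A short sketch of the analytical transfer-matrix approach mentioned above: build a Fibonacci word of two dielectric layer types, multiply the 2x2 layer matrices, and use |trace| <= 2 of the total matrix as the usual band criterion for the periodically repeated approximant. The layer indices and thicknesses are illustrative assumptions.

```python
# Transfer-matrix scan of a Fibonacci superlattice approximant.
import numpy as np

def fibonacci_word(generations):
    a, b = "A", "AB"
    for _ in range(generations):
        a, b = b, b + a                        # S_{n+1} = S_n S_{n-1}
    return b

def layer_matrix(n, d, wavelength):
    delta = 2 * np.pi * n * d / wavelength     # optical phase across the layer
    return np.array([[np.cos(delta), np.sin(delta) / n],
                     [-n * np.sin(delta), np.cos(delta)]])

layers = {"A": (2.0, 100e-9), "B": (3.0, 60e-9)}   # (refractive index, thickness) - assumed
word = fibonacci_word(8)                           # 89-layer Fibonacci approximant

for wavelength in np.linspace(400e-9, 800e-9, 9):
    M = np.eye(2)
    for s in word:
        M = layer_matrix(*layers[s], wavelength) @ M
    trace = abs(np.trace(M))
    print(f"{wavelength * 1e9:5.0f} nm  |trace| = {trace:9.2f}  {'allowed' if trace <= 2 else 'gap'}")
```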

Keywords: quantum computing, quasicrystals, Multiple Quantum wells (MQWs), transfer matrix method, fibonacci anyons, quantum hall effect, nanophotonics

Procedia PDF Downloads 388
1127 Numerical Study of the Dynamic Behavior of an Air Conditioning with a Multi Confined Swirling Jet

Authors: Mohamed Roudane

Abstract:

The objective of this study is to characterize the dynamic behavior of a multi swirling jet used for air conditioning inside a room. To conduct this study, we designed a facility to ensure proper conditions of confinement, in which we placed five air blowing devices with adjustable vanes, providing multiple swirling turbulent jets. The jets were issued in the same direction and with the same spacing defined between them. This study concerned the numerical simulation of the dynamic mixing of confined swirling multi-jets and examined the influence of important parameters of a swirl diffuser system on the dynamic performance characteristics. The CFD investigations are carried out with a hybrid mesh to discretize the computational domain. In this work, the simulations have been performed using the finite volume method and the FLUENT solver, in which the RNG k-ε turbulence model was used for the turbulence computations.

Keywords: simulation, dynamic behavior, swirl, turbulent jet

Procedia PDF Downloads 398
1126 Applying and Connecting the Microgrid of Artificial Intelligence in the Form of a Spiral Model to Optimize Renewable Energy Sources

Authors: PR

Abstract:

Renewable energy is a sustainable substitute for fossil fuels, which are depleting and contributing to global warming and greenhouse gas emissions. Renewable energy technologies, including solar, wind, and geothermal, have grown significantly in recent years and play a critical role in meeting energy demands. Artificial Intelligence (AI) could further enhance the benefits of renewable energy systems. The combination of renewable technologies and AI could facilitate the development of smart grids that can better manage energy distribution and storage. AI thus has the potential to optimize the efficiency and reliability of renewable energy systems, reduce costs, and improve their overall performance. The conventional methods of using smart micro-grids are to connect these micro-grids in series, in parallel, or in a combination of series and parallel. Each of these methods has its advantages and disadvantages. In this study, the proposal of connecting microgrids in a spiral manner is investigated. One of the important reasons for choosing this type of structure is the two-way reinforcement and exchange of each inner layer with the outer and upstream layer. With this model, we have the ability to increase energy from a small amount to a significant amount based on exponential functions. The geometry used to close the smart microgrids is based on nature. This study provides an overview of the applications of AI algorithms and models as well as their advantages and challenges in renewable energy systems.

Keywords: artificial intelligence, renewable energy sources, spiral model, optimize

Procedia PDF Downloads 8
1125 Efficient Subgoal Discovery for Hierarchical Reinforcement Learning Using Local Computations

Authors: Adrian Millea

Abstract:

In hierarchical reinforcement learning, one of the main issues encountered is the discovery of subgoal states or options (which are policies reaching subgoal states) by partitioning the environment in a meaningful way. This partitioning usually requires an expensive global clustering operation or an eigendecomposition of the Laplacian of the state graph. We propose a local solution to this issue, much more efficient than algorithms using global information, which successfully discovers subgoal states by computing a simple function, which we call heterogeneity, for each state as a function of its neighbors. Moreover, we construct a value function using the difference in heterogeneity from one step to the next as the reward, such that we are able to explore the state space much more efficiently than, say, epsilon-greedy exploration. The same principle can then be applied to higher levels of the hierarchy, where the states are now the subgoals discovered at the level below.
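
An illustrative sketch of the locality idea; the paper's exact heterogeneity function is not reproduced here, so the measure below is one plausible stand-in: a state is heterogeneous if removing it splits its own small neighbourhood apart, which singles out the doorway of a two-room grid world.

```python
# Local subgoal discovery on a two-room grid world (networkx).
import networkx as nx

def two_rooms(width=5, height=5):
    g = nx.grid_2d_graph(2 * width + 1, height)
    for y in range(height):
        if y != height // 2:
            g.remove_node((width, y))          # wall between the rooms, one doorway left
    return g

def heterogeneity(g, s, radius=2):
    """Purely local: number of pieces the radius-r neighbourhood falls into without s."""
    neighbourhood = nx.ego_graph(g, s, radius=radius, center=False)
    return nx.number_connected_components(neighbourhood)

g = two_rooms()
scores = {s: heterogeneity(g, s) for s in g}
best = max(scores.values())
print("subgoal candidates:", [s for s, v in scores.items() if v == best])
# Prints the doorway cell and its two neighbours - exactly the bottleneck region.
```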

Keywords: exploration, hierarchical reinforcement learning, locality, options, value functions

Procedia PDF Downloads 171