Search results for: direct numerical simulation (DNS)

7115 Parameters Identification of Granular Soils around PMT Test by Inverse Analysis

Authors: Younes Abed

Abstract:

The successful application of in-situ testing of soils depends heavily on the development of methods for interpreting the tests. The pressuremeter test simulates the expansion of a cylindrical cavity and, because it has well-defined boundary conditions, it is more amenable to rigorous theoretical analysis (i.e., cavity expansion theory) than most other in-situ tests. In this article, in order to make the identification process more convenient, we propose a relatively simple procedure for the numerical identification of some mechanical parameters of a granular soil, in particular the elastic modulus and the friction angle, from a pressuremeter curve. The procedure, applied here to identify the parameters of the generalized Prager model associated with the Drucker-Prager criterion from a pressuremeter curve, is based on an inverse analysis approach, which consists of minimizing a function representing the difference between the experimental curve and the curve obtained by integrating the model along the loading path of the in-situ test. The numerical process implemented here is based on an established finite element program. We present a validation of the proposed approach against a database of cylindrical cavity expansion tests. This database consists of four types of tests: thick-cylinder tests carried out on Hostun RF sand, pressuremeter tests carried out on Hostun sand, in-situ pressuremeter tests carried out at the Fos site with a marine self-boring pressuremeter, and in-situ pressuremeter tests carried out at the Labenne site with a Ménard pressuremeter.
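
As a rough illustration of the inverse-analysis idea (not the authors' finite element implementation), the sketch below fits two parameters by minimizing the misfit between an "experimental" pressuremeter curve and a simulated one; `simulate_curve` is a hypothetical placeholder standing in for the forward model.

```python
# Illustrative sketch (not the authors' code): least-squares identification of
# (E, phi) from a pressuremeter curve. `simulate_curve` is a hypothetical
# placeholder for the finite element forward model used in the paper.
import numpy as np
from scipy.optimize import minimize

def simulate_curve(E, phi, cavity_strain):
    """Placeholder forward model: pressure vs. cavity strain for a given E (MPa)
    and friction angle phi (deg). A real implementation would call the FE code."""
    return E * cavity_strain / (1.0 + cavity_strain) * (1.0 + np.tan(np.radians(phi)))

def misfit(params, strain_exp, pressure_exp):
    E, phi = params
    pressure_sim = simulate_curve(E, phi, strain_exp)
    return np.sum((pressure_sim - pressure_exp) ** 2)  # discrepancy to minimize

# Synthetic "experimental" data for demonstration only
strain_exp = np.linspace(0.01, 0.2, 20)
pressure_exp = simulate_curve(45.0, 35.0, strain_exp) + np.random.normal(0, 0.05, strain_exp.size)

result = minimize(misfit, x0=[20.0, 25.0], args=(strain_exp, pressure_exp),
                  method="Nelder-Mead")
E_opt, phi_opt = result.x
print(f"Identified E = {E_opt:.1f} MPa, phi = {phi_opt:.1f} deg")
```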

Keywords: granular soils, cavity expansion, pressuremeter test, finite element method, identification procedure

Procedia PDF Downloads 288
7114 The Evolution of Strike and Intelligence Functions in Special Operations Forces

Authors: John Hardy

Abstract:

The expansion of special operations forces (SOF) in the twenty-first century is often discussed in terms of the size and disposition of SOF units. Research regarding the number of SOF personnel, the equipment SOF units procure, and the variety of roles and missions that SOF fulfill in contemporary conflicts paints a fascinating picture of changing expectations for the use of force. A strong indicator of the changing nature of SOF in contemporary conflicts is the fusion of strike and intelligence functions in the SOF of many countries. What were once more distinct roles on the kind of battlefield generally associated with the concept of conventional warfare have become intermingled in the era of persistent conflict that SOF now face. This study presents a historical analysis of the co-evolution of the intelligence and direct action functions carried out by SOF in counterterrorism, counterinsurgency, and training and mentoring missions between 2004 and 2016. The study focuses primarily on innovation in the US military and the diffusion of key concepts first to US allies and then more broadly afterward. The findings show that there were three key phases of evolution throughout the period of study, each coinciding with a process of innovation and doctrinal adaptation. The first phase was characterized by the fusion of intelligence at the tactical and operational levels. The second phase was characterized by the industrial counterterrorism campaigns used by US SOF against irregular enemies in Iraq and Afghanistan. The third phase was characterized by increasing forward collection of actionable intelligence by SOF force elements in the course of direct action raids. The evolution of strike and intelligence functions in SOF operations between 2004 and 2016 was significantly influenced by reciprocity. Intelligence fusion led to more effective targeting, which then increased intelligence collection. Strike and intelligence functions were then enhanced by greater emphasis on intelligence exploitation during operations, which further increased the effectiveness of both strike and intelligence operations.

Keywords: counterinsurgency, counterterrorism, intelligence, irregular warfare, military operations, special operations forces

Procedia PDF Downloads 256
7113 Magnetic Navigation of Nanoparticles inside a 3D Carotid Model

Authors: E. G. Karvelas, C. Liosis, A. Theodorakakos, T. E. Karakasidis

Abstract:

Magnetic navigation of a drug inside the human vessels is a very important concept, since the drug is delivered to the desired area. Consequently, the quantity of the drug required to reach therapeutic levels is reduced while the drug concentration at targeted sites is increased. Magnetic navigation of drug agents can be achieved with the use of magnetic nanoparticles, where anti-tumor agents are loaded on the surface of the nanoparticles. The magnetic field that is required to navigate the particles inside the human arteries is produced by a magnetic resonance imaging (MRI) device. The main factors which influence the efficiency of magnetic nanoparticles for biomedical applications in magnetic driving are the size and the magnetization of the biocompatible nanoparticles. In this study, a computational platform for the simulation of the optimal gradient magnetic fields for the navigation of magnetic nanoparticles inside a carotid artery is presented. For the propulsion model of the particles, seven major forces are considered, i.e., the magnetic force from the MRI's main static magnetic field as well as the magnetic field gradient force from the special propulsion gradient coils. The static field is responsible for the aggregation of the nanoparticles, while the magnetic gradient contributes to the navigation of the agglomerates that are formed. Moreover, the contact forces among the aggregated nanoparticles and with the wall, as well as the Stokes drag force on each particle, are considered, while only spherical particles are used in this study. In addition, the gravitational force and the buoyancy force are included. Finally, the van der Waals force and Brownian motion are taken into account in the simulation. The OpenFOAM platform is used for the calculation of the flow field and the uncoupled equations of particle motion. To determine the optimal gradient magnetic fields, a covariance matrix adaptation evolution strategy (CMA-ES) is used in order to navigate the particles into the desired area. A desired trajectory, along which the particles are to be navigated, is inserted into the computational geometry. Initially, the CMA-ES optimization strategy provides the OpenFOAM program with random values of the gradient magnetic field. At the end of each simulation, the computational platform evaluates the distance between the particles and the desired trajectory. The present model can simulate the motion of particles when they are navigated by the magnetic field produced by the MRI device. Under the influence of fluid flow, the model investigates the effect of different gradient magnetic fields in order to minimize the distance of the particles from the desired trajectory. The platform can navigate the particles along the desired trajectory with an efficiency between 80 and 90%. On the other hand, a small number of particles stick to the walls and remain there for the rest of the simulation.
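
A minimal sketch of the optimization loop described above is given below, assuming the open-source `cma` Python package. The OpenFOAM evaluation is replaced by a toy kinematic cost function, and all parameter values are illustrative.

```python
# Minimal sketch of the optimization loop described above, using the open-source
# `cma` package (pip install cma). The real objective would run an OpenFOAM
# particle simulation; here `trajectory_error` is a toy kinematic stand-in that
# penalizes the distance of a crude particle path from the desired trajectory.
import numpy as np
import cma

desired = np.linspace([0.0, 0.0], [1.0, 0.5], 50)   # toy desired trajectory

def trajectory_error(gradients):
    """gradients: [gx, gy], gradient magnetic field components (toy units)."""
    pos = np.zeros(2)
    vel = np.array([0.02, 0.0])
    path = []
    for _ in range(50):
        vel = vel + 0.01 * np.asarray(gradients)     # gradient force (toy dynamics)
        pos = pos + vel
        path.append(pos.copy())
    return float(np.mean(np.linalg.norm(np.array(path) - desired, axis=1)))

es = cma.CMAEvolutionStrategy([0.0, 0.0], 0.5, {'maxiter': 100})
while not es.stop():
    candidates = es.ask()                            # sample gradient settings
    es.tell(candidates, [trajectory_error(c) for c in candidates])
print("best gradient settings:", es.result.xbest)
```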

Keywords: artery, drug, nanoparticles, navigation

Procedia PDF Downloads 101
7112 Extensions to Chen's Minimizing Equal Mass Parallelogram Solutions

Authors: Abdalla Manur, Daniel Offin, Alessandro Arsie

Abstract:

In this paper, we study the extension of the minimizing equal mass parallelogram solutions derived by Chen in 2001. Chen's solution was shown to be minimizing over one quarter of the period [0, T], where numerical integration had been used in the proof. This paper focuses on extending the minimization property to the time intervals [0, 2T] and [0, 4T].

Keywords: action, Hamiltonian, N-body, symmetry

Procedia PDF Downloads 1679
7111 A Modified Nonlinear Conjugate Gradient Algorithm for Large Scale Unconstrained Optimization Problems

Authors: Tsegay Giday Woldu, Haibin Zhang, Xin Zhang, Yemane Hailu Fissuh

Abstract:

It is well known that the nonlinear conjugate gradient method is one of the most widely used first-order methods for solving large scale unconstrained smooth optimization problems. Because of their low memory requirements, attractive theoretical features, practical computational efficiency and nice convergence properties, nonlinear conjugate gradient methods play a special role in solving large scale unconstrained optimization problems. Large scale optimization problems have important applications in the practical and scientific world. However, nonlinear conjugate gradient methods have restricted information about the curvature of the objective function, and they are likely to be less efficient and robust than some second-order algorithms. To overcome these drawbacks, a new modified nonlinear conjugate gradient method is presented. The noticeable features of our work are that the new search direction possesses the sufficient descent property independently of any line search and that it belongs to a trust region. Under mild assumptions and the standard Wolfe line search technique, the global convergence property of the proposed algorithm is established. Furthermore, to test the practical computational performance of our new algorithm, numerical experiments are provided and implemented on a set of large-dimensional unconstrained problems. The numerical results show that the proposed algorithm is efficient and robust compared with other similar algorithms.
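
For illustration only, the following sketch shows the general structure of a nonlinear conjugate gradient iteration with a standard Wolfe line search; it uses a Polak-Ribière+ direction, not the modified direction proposed in the paper.

```python
# Generic nonlinear conjugate gradient sketch (Polak-Ribiere+ direction with a
# Wolfe line search). This is NOT the authors' modified method; it only
# illustrates the overall structure of first-order CG algorithms for large
# unconstrained problems.
import numpy as np
from scipy.optimize import line_search

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def rosenbrock_grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
    return g

def cg_pr_plus(f, grad, x0, tol=1e-6, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                     # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]  # standard Wolfe line search
        if alpha is None:                      # line search failed: restart
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_star = cg_pr_plus(rosenbrock, rosenbrock_grad, np.full(100, -1.2))
print("final objective:", rosenbrock(x_star))
```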

Keywords: conjugate gradient method, global convergence, large scale optimization, sufficient descent property

Procedia PDF Downloads 194
7110 Displacement Solution for a Static Vertical Rigid Movement of an Interior Circular Disc in a Transversely Isotropic Tri-Material Full-Space

Authors: D. Mehdizadeh, M. Rahimian, M. Eskandari-Ghadi

Abstract:

This article is concerned with the determination of the static interaction of a vertically loaded rigid circular disc embedded at the interface of a horizontal layer sandwiched between two different transversely isotropic half-spaces, together referred to as a tri-material full-space. The axes of symmetry of the different regions are assumed to be normal to the horizontal interfaces and parallel to the direction of movement. With the use of a potential function method, and by applying Hankel integral transforms in the radial direction, the governing partial differential equation for the single scalar potential function is transformed into a fourth-order ordinary differential equation, and the mixed boundary conditions are transformed into a pair of integral equations, called dual integral equations, which can be reduced to a Fredholm integral equation of the second kind that is solved analytically. The displacements and stresses are then given in the form of improper line integrals, which result from the inverse Hankel integral transforms. It is shown that the present solutions are in exact agreement with the existing solutions for a homogeneous full-space of transversely isotropic material. To confirm the accuracy of the numerical evaluation of the integrals involved, the numerical results are compared with the existing solutions for the homogeneous full-space. Finally, several cases with different degrees of material anisotropy are compared to portray the effect of the degree of anisotropy.
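
The reduction to a Fredholm integral equation of the second kind can be illustrated numerically with a generic Nyström (quadrature) solver; the kernel and right-hand side below are toy choices, not the ones arising from the Hankel-transform formulation in the paper.

```python
# Sketch of a Nystrom (quadrature) solution of a Fredholm integral equation of
# the second kind, phi(x) - lam * int_a^b K(x,t) phi(t) dt = f(x).
# The kernel and right-hand side are toy examples for illustration only.
import numpy as np

def solve_fredholm_2nd_kind(kernel, f, lam, a, b, n=200):
    # Composite trapezoidal nodes and weights on [a, b]
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    K = kernel(x[:, None], x[None, :])        # kernel matrix K(x_i, t_j)
    A = np.eye(n) - lam * K * w[None, :]      # (I - lam*K*W) phi = f
    phi = np.linalg.solve(A, f(x))
    return x, phi

# Toy problem: K(x,t) = x*t, f(x) = x, lam = 0.5 (exact solution phi(x) = 1.2*x)
x, phi = solve_fredholm_2nd_kind(lambda x, t: x * t, lambda x: x, 0.5, 0.0, 1.0)
print("max error vs analytic solution:", np.max(np.abs(phi - 1.2 * x)))
```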

Keywords: transversely isotropic, rigid disc, elasticity, dual integral equations, tri-material full-space

Procedia PDF Downloads 432
7109 Conservation Importance of Independent Smallholdings in Safeguarding Biodiversity in Oil Palm Plantations

Authors: Arzyana Sunkar, Yanto Santosa

Abstract:

The expansion of independent smallholdings in Indonesia is feared to increase the negative ecological impacts of oil palm plantations on biodiversity. Hence, research is required to identify the conservation importance of independent smallholder oil palm plantations for biodiversity. This paper discusses the role of independent smallholdings in the conservation of biodiversity in oil palm plantations and compares it with High Conservation Value (HCV) Forest as a conservation standard of the RSPO. The research was conducted from March to April 2016. Data on biodiversity were collected on 16 plantations and 8 private oil palm plantations in the districts of Kampar, Pelalawan, Kuantan Singingi and Siak of Riau Province, Indonesia. In addition, data on community environmental perceptions of both smallholder plantations and HCV Forest were also collected. The species observed were birds and earthworms. Data on birds were collected using the transect method, while earthworms were identified by taking soil samples and counting the number of individuals found of each species. The research used direct interviews with oil palm owners and community members, as well as direct observation to examine the environmental conditions of each plantation. In general, field observation and measurement found that bird species richness was higher in the forested HCV Forest. Nevertheless, compared to non-forested HCV, bird species richness was higher in the independent smallholdings. On the other hand, different results were observed for earthworms, whose density was higher in the independent smallholdings than in the HCV. It can be concluded from this research that managing independent smallholder oil palm plantations and forested HCV forest could enhance biodiversity conservation. The results of this study justify the importance of retaining forested areas to safeguard biodiversity in oil palm plantations.

Keywords: biodiversity conservation, high conservation value forest, independent smallholdings, oil palm plantations

Procedia PDF Downloads 216
7108 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model 1: Description

Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu

Abstract:

Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component. As such, this model assumes constant values for the vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model which has the capability of simulating all crops using unique parameter values for each crop. Simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model for evaluating vegetative resistance from hydraulic and vegetative parameters was incorporated into the WRM model. The improved WRM model will have the ability to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow and will further enhance the hydrologic model's capability for accurate hydrologic studies.
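
A heavily simplified sketch of an EPIC-style daily growth loop is given below; the parameter values and the shape of the LAI curve are illustrative assumptions only and do not reproduce the actual EPIC or WRM algorithms.

```python
# Highly simplified daily crop-growth sketch in the spirit of EPIC-type models:
# heat-unit accumulation drives leaf area index (LAI), intercepted PAR is
# estimated with a Beer's-law term, and biomass grows with a radiation-use
# efficiency. Parameter values and the LAI curve shape are illustrative
# assumptions (no senescence phase), not the actual EPIC or WRM formulation.
import numpy as np

T_BASE = 8.0        # base temperature (deg C), assumed
PHU = 1500.0        # potential heat units to maturity, assumed
LAI_MAX = 4.0       # maximum leaf area index, assumed
RUE = 3.0           # radiation-use efficiency (g biomass per MJ PAR), assumed
K_EXT = 0.65        # canopy light-extinction coefficient, assumed

def simulate_season(tmean, srad):
    """tmean: daily mean temperature (deg C); srad: daily solar radiation (MJ/m2)."""
    hu, biomass = 0.0, 0.0
    lai_series, biomass_series = [], []
    for t, ra in zip(tmean, srad):
        hu += max(t - T_BASE, 0.0)                 # daily heat units
        hui = min(hu / PHU, 1.0)                   # heat-unit index (0..1)
        lai = LAI_MAX * hui * (2.0 - hui)          # simple saturating LAI curve
        par_int = 0.5 * ra * (1.0 - np.exp(-K_EXT * lai))  # intercepted PAR
        biomass += RUE * par_int                   # daily biomass increment
        lai_series.append(lai)
        biomass_series.append(biomass)
    return np.array(lai_series), np.array(biomass_series)

days = np.arange(180)
tmean = 18.0 + 8.0 * np.sin(np.pi * days / 180.0)   # synthetic growing season
srad = 15.0 + 8.0 * np.sin(np.pi * days / 180.0)
lai, biomass = simulate_season(tmean, srad)
print(f"peak LAI = {lai.max():.2f}, final biomass = {biomass[-1]:.0f} g/m2")
```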

Keywords: runoff, roughness coefficient, PAR, WRM model

Procedia PDF Downloads 367
7107 Set-point Performance Evaluation of Robust Back-Stepping Control Design for a Nonlinear Electro-Hydraulic Servo System

Authors: Maria Ahmadnezhad, Seyedgharani Ghoreishi

Abstract:

Electro-hydraulic servo (EHS) systems have been used in industry in a wide number of applications. Their dynamics are highly nonlinear and also exhibit a large extent of model uncertainties and external disturbances. In this thesis, a robust back-stepping control (RBSC) scheme is proposed to overcome the problem of disturbances and system uncertainties effectively and to improve the set-point performance of EHS systems. In order to implement the proposed control scheme, the system uncertainties in EHS systems are considered to be the total leakage coefficient and the effective oil volume. In addition, in order to obtain the virtual controls for stabilizing the system, the update rule for the system uncertainty term is derived from the Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, computer simulation of the proposed control scheme is executed using MATLAB/Simulink software. From the computer simulation, it was found that the RBSC system produces the desired set-point performance and is robust to the disturbances and system uncertainties of EHS systems.
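
To illustrate the virtual-control/Lyapunov structure of back-stepping (without the uncertainty adaptation or the full nonlinear EHS dynamics), a minimal sketch on a double-integrator stand-in is shown below; gains and set-point are illustrative.

```python
# Minimal backstepping sketch for set-point regulation of a double-integrator
# stand-in (x1' = x2, x2' = u). This only illustrates the virtual-control /
# Lyapunov structure of back-stepping; it is NOT the robust adaptive controller
# for the full nonlinear EHS model described above.
import numpy as np

k1, k2 = 2.0, 2.0          # design gains (assumed)
r = 1.0                    # constant set-point
dt, steps = 1e-3, 8000

x1, x2 = 0.0, 0.0
for _ in range(steps):
    z1 = x1 - r            # tracking error
    alpha = -k1 * z1       # virtual control for the x1-subsystem
    z2 = x2 - alpha        # deviation from the virtual control
    alpha_dot = -k1 * x2   # d(alpha)/dt for a constant set-point
    u = -z1 - k2 * z2 + alpha_dot   # makes V = 0.5*(z1^2 + z2^2) decrease
    # Euler integration of the plant
    x1 += dt * x2
    x2 += dt * u

print(f"final state: x1 = {x1:.4f} (set-point {r}), x2 = {x2:.4f}")
```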

Keywords: electro-hydraulic servo system, back-stepping control, robust back-stepping control, Lyapunov redesign

Procedia PDF Downloads 990
7106 Model Based Design of Fly-by-Wire Flight Controls System of a Fighter Aircraft

Authors: Nauman Idrees

Abstract:

Modeling and simulation during the conceptual design phase are the most effective means of system testing, resulting in time and cost savings compared to the testing of hardware prototypes, which are mostly not available during the conceptual design phase. This paper uses the model-based design (MBD) method to design the fly-by-wire flight controls system of a fighter aircraft using Simulink. The process begins with system definition and layout, where modeling requirements and system components are identified, followed by a hierarchical system layout to identify the sequence of operation and the interfaces of the system with the external environment as well as the internal interfaces between the components. In the second step, each component within the system architecture is modeled along with its physical and functional behavior. Finally, all modeled components are combined to form the fly-by-wire flight controls system of the fighter aircraft as per the system architecture developed. The system model developed using this method can be simulated using any simulation software to ensure that the desired requirements are met even without the development of a physical prototype, resulting in time and cost savings.

Keywords: fly-by-wire, flight controls system, model based design, Simulink

Procedia PDF Downloads 111
7105 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s, until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software packages offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e., as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF calculated using different methods can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
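
A toy Monte-Carlo PF estimate is sketched below for a dry infinite-slope factor of safety with lognormal strength parameters; the geometry and the distributions are illustrative assumptions, not the case-study slope discussed above.

```python
# Illustrative Monte-Carlo estimate of probability of failure (PF) for a dry
# infinite-slope factor of safety:
#   FS = c / (gamma * z * sin(b) * cos(b)) + tan(phi) / tan(b)
# The geometry and lognormal parameter distributions are toy assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

beta = np.radians(35.0)          # slope angle
z, gamma = 10.0, 26.0            # failure depth (m), unit weight (kN/m3)

# Lognormal cohesion (kPa) and friction angle (deg); moments chosen for illustration
c = rng.lognormal(mean=np.log(30.0), sigma=0.35, size=n)
phi = np.radians(rng.lognormal(mean=np.log(32.0), sigma=0.10, size=n))

fs = c / (gamma * z * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)
pf = np.mean(fs < 1.0)           # PF = fraction of realizations with FS < 1
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.3%}")
```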

Keywords: probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability

Procedia PDF Downloads 204
7104 A Hybrid Multi-Pole Fe₇₈Si₁₃B₉+FeSi₃ Soft Magnetic Core for Application in the Stators of the Low-Power Permanent Magnet Brushless Direct Current Motors

Authors: P. Zackiewicz, M. Hreczka, R. Kolano, A. Kolano-Burian

Abstract:

New types of materials applied as the stators of the Permanent Magnet Brushless Direct Current (PMBLDC) motors used in heart supporting pumps are presented. The main focus of this work is research on the fabrication of a hybrid nine-pole soft magnetic core consisting of a soft magnetic carrier ring with rectangular notches, made from FeSi₃ strip, and nine soft magnetic poles. This soft magnetic core is made in three stages: (a) preparation of the carrier rings from a soft magnetic material with the lowest possible power losses and suitable stiffness, (b) preparation of trapezoidal soft magnetic poles from Metglas 2605 SA1 type ribbons, and (c) making a durable connection between the poles and the carrier ring, capable of withstanding a tearing force four times greater than that present during normal operation of the motor pump. All magnetic property measurements were made using a Remacomp C-1200 (Magnet Physik, Germany) and a 450 gaussmeter (Lake Shore, USA), and the electrical characteristics were measured using a DF1723009TC laboratory generator (NDN, Poland). The specific measurement techniques used to determine the properties of the hybrid cores are presented. The results obtained allow the fabrication technology to be developed with account taken of the intended application of these cores in the stators of the low-power PMBLDC motors used in implanted pumps supporting heart operation. The proposed measurement methodology is appropriate for assessing the quality of the stators.

Keywords: amorphous materials, heart supporting pump, PMBLDC motor, soft magnetic materials

Procedia PDF Downloads 207
7103 Near Shore Wave Manipulation for Electricity Generation

Authors: K. D. R. Jagath-Kumara, D. D. Dias

Abstract:

Sea waves carry thousands of GW of power globally. Although there are a number of different approaches to harnessing offshore energy, they are likely to be expensive, practically challenging and vulnerable to storms. Therefore, this paper considers using near-shore waves for generating mechanical and electrical power. It introduces two new approaches, wave manipulation and the use of a variable-duct turbine, for intercepting very wide wave fronts and for coping with fluctuations in wave height and sea level, respectively. The first approach effectively allows much more energy to be captured, yet with a much narrower turbine rotor. The second approach allows the use of a rotor with a smaller radius that captures the energy of higher wave fronts at higher sea levels while preventing it from submerging totally. To illustrate the effectiveness of the approach, the paper contains a description and the simulation results of a scale model of a wave manipulator. It then includes the results of testing a physical model of the manipulator and a single-duct, axial-flow turbine in a wave flume in the laboratory. The paper also includes comparisons of theoretical predictions, simulation results and wave flume tests with respect to the incident energy, the loss in wave manipulation, the minimal loss, the brake torque and the angular velocity.

Keywords: near-shore sea waves, renewable energy, wave energy conversion, wave manipulation

Procedia PDF Downloads 474
7102 Socioeconomic Impact of Marine Invertebrates Collection on Chuiba and Maringanha Beaches

Authors: Siran Offman, Hermes Pacule, Teofilo Nhamuhuco

Abstract:

Marine invertebrates are very important for the livelihood of coastal communities, particularly in Pemba City. The study was conducted from June 2011 to March 2012. The aim of this study is to determine the socioeconomic impact of collecting marine invertebrates in the Chuiba and Maringanha communities. Data were collected biweekly during the spring-tide ebb in the intertidal zone and through structured surveys; the data were cross-checked through direct observation in the neighborhoods. In total, 40 collectors were surveyed, and it was found that the collection of marine invertebrates is practiced by women (57.2%) and men (42.5%). Their ages ranged from 9 to 45 years, with the 25-32 age range dominant (30.5%); collection is practiced 5-7 times per week, and the collectors spend about 4-6 hours a day. The collection method is direct harvesting by hand, aided by knives and sharp irons, and the catch is transported in pots, buckets, basins and shawls. In total, 8 marketable species were identified, namely: Octopus vulgaris (8.6 kg), Cypraea tigris (7 units), Cypraea annulus (48 kg), holothurians (40 kg), Cypraea bully, Atrina vexillum (10 kg), Modiolus philippinarum and Lambis lambis. The species with the greatest economic value, and the most commercialized, are sea cucumber (3 USD/kg) and Octopus vulgaris (2.5 USD/kg). Regarding the socio-economic impacts on the collectors' communities, the average income of collectors varies from 0.5 to 5 USD/day, and the money is used to purchase food and agricultural instruments. Other socioeconomic impacts are illiteracy (36% school dropout and 28% who have never studied), 87% of collectors unemployed, a high number of family members, weak economic power, and poor housing made from local materials; the communities rely on community wells for access to water, and most do not have electric power.

Keywords: socio-economic, impacts, collecting marine invertebrates, communities

Procedia PDF Downloads 309
7101 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation

Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes

Abstract:

Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds represent a risk for both wildlife and human life, since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment, they can be harmful even at low concentrations (µg or ng/L), causing bacterial resistance, endocrine disruption and cancer, among other harmful effects. One of the most commonly taken medicines to treat cardiocirculatory diseases is atenolol (ATL), a β-blocker, which is toxic to aquatic life. It is therefore necessary to implement a methodology capable of promoting the degradation of ATL, in order to avoid environmental harm. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on the direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). The hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. Besides that, under some conditions, the peroxydisulfate (S₂O₈²⁻) ion is also generated from the reaction of SO₄•⁻ radicals in pairs. Both radicals, the ion, and the direct discharge of the contaminant can break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. In this context, some efforts can be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) from the anodic (oxidation) compartment of the reactor. The aim of this study is to investigate the effect of implementing a cationic membrane (Nafion®-117) to separate the cathodic and anodic compartments of the AEO reactor. The studied reactor was a filter-press cell operated in batch recirculation mode with a flow rate of 60 L/h. The anode was Nb/BDD2500 and the cathode stainless steel, both two-dimensional with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with 100 mg/L ATL using 4 g/L Na₂SO₄ as supporting electrolyte. In the cathodic compartment, a solution containing 71 g/L Na₂SO₄ was used. The membrane was placed between the two solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis), and mineralization was determined by total organic carbon (TOC) analysis in a Shimadzu TOC-L CPH analyzer. In the cases without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation at the end of the treatment time, respectively. With the membrane, however, the degradation for the same iₐₚₚ was 90, 100 and 100%, with 240, 120 and 40 min needed to reach the maximum degradation, respectively. The mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72% at 240 min, respectively, but with the membrane all tested iₐₚₚ reached 80% mineralization, differing only in the time needed to reach the maximum mineralization, 240, 150 and 120 min, respectively. The membrane increased the ATL oxidation, probably because it prevents the reduction of oxidant ions (S₂O₈²⁻) on the cathode surface.

Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor

Procedia PDF Downloads 124
7100 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation

Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun

Abstract:

The FWM (Finite Window Method) is a new numerical meshfree technique for solving problems defined either in terms of PDEs (partial differential equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that, in such problems, each element of the domain of interest interacts with its neighbors and always tries to adapt so as to remain in equilibrium with them. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its simplicity, it is well known that convection-diffusion problems can be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark used to analyze and derive the conditions or artifacts required to numerically solve problems where convection and diffusion occur simultaneously. We have shown here that the standard FWM can be used to solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or the addition of artificial diffusion) were required to obtain good results, even for high Péclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit Finite Volume scheme (upwind scheme) and a third-order implicit Finite Volume scheme (QUICK scheme). The comparison showed that, for equal space and time grid spacing, the FWM yields much better precision than the Finite Volume schemes used, all having similar computational cost and conditioning numbers.
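
For reference, a first-order upwind discretization of the 1D convection-diffusion equation, of the kind used as a baseline in the comparison, is sketched below (explicit time stepping is used here for brevity, whereas the paper's baselines are implicit; this is not the FWM itself).

```python
# Sketch of a first-order upwind discretization of the 1D convection-diffusion
# equation u_t + c*u_x = D*u_xx, of the kind used as a baseline scheme above.
# Explicit time stepping for brevity; the paper's baseline schemes are implicit.
import numpy as np

c, D = 1.0, 0.01                 # convection speed, diffusivity
L, nx = 1.0, 200
dx = L / (nx - 1)
dt = 0.4 * min(dx / c, dx**2 / (2 * D))   # respect CFL and diffusion limits
x = np.linspace(0.0, L, nx)

u = np.exp(-((x - 0.2) / 0.05) ** 2)      # initial Gaussian pulse
for _ in range(int(0.5 / dt)):
    un = u.copy()
    # upwind convection (c > 0) + central diffusion, interior nodes only
    u[1:-1] = (un[1:-1]
               - c * dt / dx * (un[1:-1] - un[:-2])
               + D * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = 0.0, 0.0                # simple Dirichlet boundaries

print(f"pulse peak after transport: {u.max():.3f} at x = {x[np.argmax(u)]:.2f}")
```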

Keywords: finite window method, convection-diffusion, numerical technique, convergence

Procedia PDF Downloads 326
7099 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T₀ = 0, with k pre-fixed inspection times T₁ < T₂ < ... < Tₖ, where Tₖ is the scheduled termination time of the experiment. At inspection time Tᵢ, Rᵢ of the remaining Sᵢ surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Sᵢ and pᵢ for i = 1, ..., k, with pₖ = 1. In this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content, γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs. The minimum sample size required to achieve the desired β-content, γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
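
The censoring scheme can be simulated directly; the sketch below generates progressive Type-I interval censored Weibull data with binomial removals, with illustrative shape/scale values, inspection times and removal probabilities.

```python
# Sketch of simulating the progressive Type-I interval censoring scheme with
# binomial removals described above, for Weibull lifetimes. All parameter values
# (shape, scale, inspection times, removal probabilities) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_scheme(n, shape, scale, inspections, probs):
    """Return per-interval failure counts and per-inspection removal counts
    (the last inspection removes all remaining survivors, i.e. p_k = 1)."""
    lifetimes = scale * rng.weibull(shape, size=n)
    on_test = np.ones(n, dtype=bool)
    t_prev = 0.0
    failures, removals = [], []
    for i, (t_i, p_i) in enumerate(zip(inspections, probs)):
        failed = on_test & (lifetimes > t_prev) & (lifetimes <= t_i)
        d_i = int(failed.sum())
        on_test &= ~failed
        survivors = np.flatnonzero(on_test)
        s_i = survivors.size
        r_i = s_i if i == len(inspections) - 1 else rng.binomial(s_i, p_i)
        removed = rng.choice(survivors, size=r_i, replace=False)
        on_test[removed] = False
        failures.append(d_i)
        removals.append(int(r_i))
        t_prev = t_i
    return failures, removals

fails, rems = simulate_scheme(n=100, shape=1.5, scale=10.0,
                              inspections=[2.0, 4.0, 6.0, 8.0],
                              probs=[0.2, 0.2, 0.2, 1.0])
print("failures per interval:", fails)
print("removals per inspection:", rems)
```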

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 237
7098 Using Biopolymer Materials to Enhance Sandy Soil Behavior

Authors: Mohamed Ayeldeen, Abdelazim Negm

Abstract:

Nowadays, the strength characteristics of soils are of increasing importance due to increasing building loads. In some projects, the geotechnical properties of soils are improved using man-made materials ranging from cement-based to chemical-based ones. These materials have proven successful in improving the engineering properties of the soil, such as shear strength, compressibility, permeability, bearing capacity, etc. However, the use of these artificial injection formulas often modifies the pH level of the soil and contaminates the soil and groundwater, which is attributed to their toxic and hazardous characteristics. Recently, an environmentally friendly soil treatment method, or Biological Treatment Method (BTM), was used to bond the particles of loose sandy soils. This research paper presents preliminary results on the use of biopolymers for strengthening cohesionless soil. Xanthan gum was identified for further study over a range of concentrations varying from 0.25% to 2.00%. Xanthan gum is a polysaccharide secreted by the bacterium Xanthomonas campestris; it is used as a food additive and is a nontoxic material. A series of direct shear, unconfined compressive strength, and permeability tests were carried out to investigate the behavior of sandy soil treated with xanthan gum at different concentration ratios and different curing times. Laser microscopy imaging was also conducted to study the microstructure of the treated sand. The experimental results demonstrated the suitability of xanthan gum for improving the geotechnical properties of sandy soil. Depending on the biopolymer concentration, it was observed that the biopolymer effectively increased the cohesion intercept and stiffness of the treated sand and reduced the permeability of the sand. The microscopy imaging indicates that the cross-links of the biopolymer through and over the soil particles increase with increasing biopolymer concentration.

Keywords: biopolymer, direct shear, permeability, sand, shear strength, Xanthan gum

Procedia PDF Downloads 268
7097 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a BLEVE Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades now, and yet it is still poorly predicted and avoided. A BLEVE is created when a vessel containing a pressure liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that time, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contribution of the vapor and liquid phases to the overpressure generation in the near field. An experimental campaign was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled throughout the experiments, such as the failure pressure, the fluid mass in the vessel, and the weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, together with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to observe the vessel explosion and the blast formation by shadowgraphy. The results highlight how anisotropic the pressure field is around the cylindrical vessel and show a strong dependency between the vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak. A prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock. A theory on how the second shock observed in the experiments forms is presented, based on an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 72
7096 A Study on the False Alarm Rates of MEWMA and MCUSUM Control Charts When the Parameters Are Estimated

Authors: Umar Farouk Abbas, Danjuma Mustapha, Hamisu Idi

Abstract:

It is now a known fact that quality is an important issue in manufacturing industries. A control chart is an integrated and powerful tool in statistical process control (SPC). In practice, the mean µ and standard deviation σ parameters are estimated from data. In general, the multivariate exponentially weighted moving average (MEWMA) and multivariate cumulative sum (MCUSUM) charts are used for the detection of small shifts in the joint monitoring of several correlated variables; these charts use information from past data, which makes them sensitive to small shifts. The aim of this paper is to compare the performance of the Shewhart x-bar, MEWMA, and MCUSUM control charts in terms of their false alarm rates when parameters are estimated under autocorrelation. A simulation was conducted in the R software to generate the average run length (ARL) values of each of the charts. The results show that the MEWMA chart has lower false alarm rates than the MCUSUM chart at various levels of parameter estimation for the in-control ARL₀ values. It was also noticed that the sample size has an adverse effect on the false alarm rates of the control charts.
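
A minimal sketch of estimating the in-control ARL of a MEWMA chart by simulation is given below (in Python, using known in-control parameters and an illustrative smoothing constant and control limit; the paper's setting with estimated parameters and autocorrelation would replace these with Phase-I estimates and an autocorrelated data generator).

```python
# Sketch of estimating the in-control average run length (ARL0) of a MEWMA chart
# by simulation, using the asymptotic covariance of the MEWMA statistic. The
# smoothing constant lam, control limit h and dimension p are illustrative values.
import numpy as np

rng = np.random.default_rng(1)

def mewma_run_length(p=2, lam=0.1, h=8.66, max_n=50_000):
    sigma = np.eye(p)                           # known in-control covariance
    sigma_z_inv = np.linalg.inv(lam / (2.0 - lam) * sigma)
    z = np.zeros(p)
    for i in range(1, max_n + 1):
        x = rng.multivariate_normal(np.zeros(p), sigma)   # in-control observation
        z = lam * x + (1.0 - lam) * z           # MEWMA smoothing
        t2 = z @ sigma_z_inv @ z                # MEWMA statistic
        if t2 > h:
            return i                            # run length = time of first signal
    return max_n

arl0 = np.mean([mewma_run_length() for _ in range(500)])
print(f"estimated in-control ARL0: {arl0:.0f}")
```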

Keywords: average run length, MCUSUM chart, MEWMA chart, false alarm rate, parameter estimation, simulation

Procedia PDF Downloads 208
7095 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach

Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani

Abstract:

Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial to developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. A solid tumor is investigated as a porous medium for fluid flow simulation. Most studies use the Darcy model for the porous medium. In the Darcy model, fluid friction is neglected and a few simplifying assumptions are made. In this study, the effect of these assumptions is examined by considering the Brinkman model. A multi-scale mathematical method which calculates fluid flow to a solid tumor is used to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model of our previous studies is developed by considering two models of the momentum equation for porous media: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood from the vessels, blood flow through the vessels, and solute diffusion and convective transport in the extracellular matrix. The sprouting angiogenesis model is used for generating the capillary network, and then the fluid flow governing equations are solved to calculate blood flow through the tumor-induced capillary network. Finally, the two porous media models are used for modeling fluid flow in normal and tumor tissues for three different tumor shapes. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity: the Brinkman model predicts a lower interstitial velocity than the Darcy model does.
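
For reference, the two momentum models can be written in their standard forms as follows, where u is the fluid velocity, p the pressure, μ the dynamic viscosity, k the permeability and μ_eff the effective viscosity:

```latex
% Darcy (viscous shear within the fluid neglected):
\nabla p = -\frac{\mu}{k}\,\mathbf{u}
% Brinkman (adds an effective viscous shear term):
\nabla p = -\frac{\mu}{k}\,\mathbf{u} + \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{u}
```

The additional Laplacian term in the Brinkman equation is precisely the fluid-friction contribution that the Darcy model neglects.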

Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery

Procedia PDF Downloads 292
7094 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting

Authors: Andres F. Ramirez, Carlos F. Valencia

Abstract:

The increasing interest in the application of renewable energy strategies and the drive to diminish the use of carbon-related energy sources have encouraged the development of novel strategies for the integration of solar energy into the electricity network. A correct inclusion of the fluctuating energy output of a photovoltaic (PV) energy system into an electric grid requires improvements in the forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but also of the associated underlying stochastic process. We present a methodology for the synthetic generation of bivariate solar irradiance (shortwave flux) and air temperature time series based on copula functions to represent the cross-dependence and temporal structure of the data. We explore the advantages of using this nonlinear time series method over traditional approaches that use a transformation of the data to normal distributions as an intermediate step. The use of copulas gives flexibility to represent the serial variability of the real data in the simulation and allows more control over the desired properties of the data. We use discrete zero-mass density distributions to represent the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate time series. We found that the copula autoregressive methodology used, including the zero-mass characteristics of the solar irradiance time series, generates a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy forecasting and the underlying stochastic process, and to quantify the potential of integrating a photovoltaic (PV) energy generating system into a country's electricity network. Experimental analysis and application to real data substantiate the usage and convenience of the proposed methodology for forecasting solar irradiance time series and solar energy across northern hemisphere, southern hemisphere, and equatorial zones.
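
A minimal copula-based generator is sketched below: AR(1) latent Gaussian processes provide the temporal structure, a Gaussian copula links the two series, and the latent uniforms are mapped through illustrative marginals (a zero-inflated gamma for irradiance, a normal for temperature). These marginals and parameter values are assumptions for illustration and differ from the paper's vector GLM / zero-mass specification fitted to data.

```python
# Minimal copula-based bivariate generator sketch: AR(1) latent Gaussian
# processes give temporal structure, a Gaussian copula links the two series,
# and the latent uniforms are mapped through illustrative marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, phi_ar, rho = 1000, 0.8, 0.5           # length, AR(1) coefficient, cross-correlation

# Correlated innovations for the two latent AR(1) processes (Gaussian copula)
cov = np.array([[1.0, rho], [rho, 1.0]])
eps = rng.multivariate_normal(np.zeros(2), (1 - phi_ar**2) * cov, size=n)

z = np.zeros((n, 2))
for t in range(1, n):
    z[t] = phi_ar * z[t - 1] + eps[t]     # latent Gaussian AR(1) pair, unit variance

u = stats.norm.cdf(z)                      # probability integral transform

# Marginals (illustrative): zero-inflated gamma for irradiance, normal for temperature
p_zero = 0.15                              # probability mass at zero irradiance
q = np.clip((u[:, 0] - p_zero) / (1 - p_zero), 0.0, None)
irradiance = np.where(u[:, 0] < p_zero, 0.0,
                      stats.gamma.ppf(q, a=2.0, scale=150.0))
temperature = stats.norm.ppf(u[:, 1], loc=22.0, scale=4.0)

print(f"share of zero irradiance: {np.mean(irradiance == 0):.2f}, "
      f"corr(irr, temp) = {np.corrcoef(irradiance, temperature)[0, 1]:.2f}")
```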

Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation

Procedia PDF Downloads 312
7093 NanoSat MO Framework: Simulating a Constellation of Satellites with Docker Containers

Authors: César Coelho, Nikolai Wiegand

Abstract:

The advancement of nanosatellite technology has opened new avenues for cost-effective and faster space missions. The NanoSat MO Framework (NMF) from the European Space Agency (ESA) provides a modular and simpler approach to the development of flight software and operations of small satellites. This paper presents a methodology using the NMF together with Docker for simulating constellations of satellites. By leveraging Docker containers, the software environment of individual satellites can be easily replicated within a simulated constellation. This containerized approach allows for rapid deployment, isolation, and management of satellite instances, facilitating comprehensive testing and development in a controlled setting. By integrating the NMF lightweight simulator in the container, a comprehensive simulation environment was achieved. A significant advantage of using Docker containers is their inherent scalability, enabling the simulation of hundreds or even thousands of satellites with minimal overhead. Docker's lightweight nature ensures efficient resource utilization, allowing for deployment on a single host or across a cluster of hosts. This capability is crucial for large-scale simulations, such as in the case of mega-constellations, where multiple traditional virtual machines would be impractical due to their higher resource demands. This ability for easy horizontal scaling based on the number of simulated satellites provides tremendous flexibility to different mission scenarios. Our results demonstrate that leveraging Docker containers with the NanoSat MO Framework provides a highly efficient and scalable solution for simulating satellite constellations, offering not only significant benefits in terms of resource utilization and operational flexibility but also enabling testing and validation of ground software for constellations. The findings underscore the importance of taking advantage of already existing technologies in computer science to create new solutions for future satellite constellations in space.
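
A minimal sketch of launching such a containerized constellation with the Docker SDK for Python is shown below; the image name and the environment variables are hypothetical placeholders for a container bundling the NMF flight software and its lightweight simulator.

```python
# Sketch of spinning up a simulated constellation with the Docker SDK for Python
# (pip install docker). The image name "nmf-sat-sim:latest" and the environment
# variables are hypothetical placeholders for a container that bundles the NMF
# flight software and its lightweight simulator.
import docker

N_SATELLITES = 20
client = docker.from_env()

containers = []
for i in range(N_SATELLITES):
    c = client.containers.run(
        "nmf-sat-sim:latest",             # hypothetical NMF + simulator image
        name=f"nmf-sat-{i:03d}",
        environment={"SATELLITE_ID": str(i)},
        detach=True,
    )
    containers.append(c)

print(f"started {len(containers)} simulated satellites")

# Tear the constellation down once the test campaign is finished
for c in containers:
    c.stop()
    c.remove()
```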

Keywords: containerization, docker containers, NanoSat MO framework, satellite constellation simulation, scalability, small satellites

Procedia PDF Downloads 34
7092 Light, Restorativeness and Performance in the Workplace: A Pilot Study

Authors: D. Scarpanti, M. Brondino, M. Pasini

Abstract:

Background: the present study explores the role of light and restorativeness at work. According to Attention Restoration Theory (ART) and a model of the work environment, the main idea is that certain features of the environment, such as lighting, influence direct attention and, therefore, performance. Restorativeness refers to the level of presence/absence of all the characteristics of the physical environment that help to regenerate direct attention. Specifically, lighting can affect the level of fascination and attention on the one hand, and on the other hand it promotes several biological functions via the pineal gland. Different reviews on this topic show controversial results. In order to shed light on this topic, the hypotheses of this study are that lighting can affect the construct of restorativeness and that, in turn, restorativeness can affect performance. Method: the participants are 30 workers of a mechatronics company in northern Italy. Each subject answered a questionnaire assessing their perceptions of the environment at different levels: some objective features of the environment, such as lighting, temperature and air quality; some subjective perceptions of this environment; and, finally, their perceived performance. The main attention is on the features of light and its components (visual comfort, general preferences and pleasantness) and on the dimensions of the construct of restorativeness (fascination, coherence and being away). The construct of performance per se is conceptualized at three levels, individual, team membership and organizational membership, and in three components, proficiency, adaptability, and proactivity, for a total of 9 subcomponents. Findings: path analysis showed that some characteristics of lighting affected the dimension of fascination and that, as expected, the dimension of fascination affected work performance. Conclusions: the present study is a first pilot step of a wider research project. These first results can be summarized with the statement that lighting and restorativeness contribute to explaining work performance variability: in detail, perceptions of visual comfort, satisfaction and pleasantness, and fascination, respectively. The results related to fascination are particularly interesting because fascination is conceptualized as the opposite of the construct of direct attention. The main idea is that, in order to regenerate attentional capacity, it is necessary to provide a rest from direct attention (fascination). The sample size did not permit testing simultaneously the role of the perceived characteristics of light to see how they differently contribute to predicting fascination with the work environment. However, the results highlighted the important role that light could have in predicting restorativeness dimensions, and probably with a larger sample we could find larger effects also on work performance. Furthermore, longitudinal data will contribute to a better analysis of the causal model over time. Applicative implications: the present pilot study highlights the relevant role of lighting and perceived restorativeness in the work environment and the importance of focusing attention on light features and restorative characteristics in the design of work environments.

Keywords: lighting, performance, restorativeness, workplace

Procedia PDF Downloads 149
7091 3D Non-Linear Analyses by Using Finite Element Method about the Prediction of the Cracking in Post-Tensioned Dapped-End Beams

Authors: Jatziri Y. Moreno-Martínez, Arturo Galván, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado

Abstract:

In recent years, for the elevated viaducts in Mexico City, a construction system based on precast/pre-stressed concrete elements has been used, in which the bridge girders are divided into two parts by imposing a hinged support in sections where the bending moments originated by the gravity loads in a continuous beam are minimal. Precast concrete girders with dapped ends are a representative example of a behavior with complex stress configurations that makes them more vulnerable to cracking due to flexure-shear interaction. The design procedures for the ends of dapped girders are well established and are based primarily on experimental tests performed for different reinforcement configurations. The critical failure modes that can govern the design have been identified, and for each of them, methods for computing the reinforcing steel needed to achieve adequate safety against failure have been proposed. Nevertheless, the design recommendations do not include procedures for controlling diagonal cracking at the re-entrant corner under service loading. These cracks could cause water penetration and degradation because of the corrosion of the steel reinforcement. The lack of visual access to the area makes it difficult to detect this damage and take timely corrective actions. Three-dimensional non-linear numerical models based on the Finite Element Method were developed to study the cracking at the re-entrant corner of dapped-end beams, using the software package ANSYS v. 11.0. The cracking was numerically simulated using the smeared crack approach. The concrete structure was modeled using three-dimensional SOLID65 solid elements, capable of cracking in tension and crushing in compression. The Drucker-Prager yield surface was used to include the plastic deformations. The longitudinal post-tension was modeled using LINK8 elements with multilinear isotropic hardening behavior using von Mises plasticity. The reinforcement was introduced with a smeared approach. The numerical models were calibrated using experimental tests carried out at the Instituto de Ingeniería, Universidad Nacional Autónoma de México. In these numerical models the characteristics of the specimens were considered: a typical solution based on vertical stirrups (hangers) and on vertical and horizontal hoops, with post-tensioned steel that contributed 74% of the flexural resistance. The post-tension is provided by four steel wires of 5/8'' (16 mm) diameter. Each wire was tensioned to 147 kN and induced an average compressive stress of 4.90 MPa on the concrete section of the dapped end. The loading protocol consisted of applying symmetrical loading up to the service load (180 kN). Given the good correlation between the experimental and numerical models, additional numerical models were proposed considering different percentages of post-tension in order to find out how much it influences the appearance of cracking at the re-entrant corner of the dapped-end beams. It was concluded that increasing the percentage of post-tension decreases the displacements, and the cracking at the re-entrant corner takes longer to appear. The authors acknowledge the Universidad de Guanajuato, Campus Celaya-Salvatierra, and the financial support of PRODEP-SEP (UGTO-PTC-460) of the Mexican government. The first author acknowledges the Instituto de Ingeniería, Universidad Nacional Autónoma de México.

Keywords: concrete dapped-end beams, cracking control, finite element analysis, post-tension

Procedia PDF Downloads 210
7090 Optical Simulation of HfO₂ Film - Black Silicon Structures for Solar Cells Applications

Authors: Gagik Ayvazyan, Levon Hakhoyan, Surik Khudaverdyan, Laura Lakhoyan

Abstract:

Black Si (b-Si) is a nano-structured Si surface formed by a self-organized, maskless process, with needle-like features discernible by their black color. The combination of low reflectivity and the semiconductive properties of Si makes b-Si a prime candidate for application in solar cells as an antireflection surface. However, surface recombination losses significantly reduce the efficiency of b-Si solar cells. Surface passivation using suitable dielectric films can minimize these losses. Several works have demonstrated that excellent passivation of b-Si nanostructures can be achieved using Al₂O₃ films. However, the negative fixed charge present in Al₂O₃ films should provide good field-effect passivation only for p- and p+-type Si surfaces. HfO₂ thin films have not been practically tested for the passivation of b-Si. HfO₂ could provide an alternative for n- and n+-type Si surface passivation, since it has been shown to exhibit a positive fixed charge. Using optical simulation by the Finite-Difference Time-Domain (FDTD) method, the possibility of b-Si passivation by HfO₂ films has been analyzed. The FDTD modeling revealed that b-Si layers with HfO₂ films effectively suppress reflection in the wavelength range 400–1000 nm and across a wide range of incidence angles. The light-trapping performance primarily depends on the geometry of the needles and the film thickness. As the periodicity of the needles decreases and their height increases, the reflectance decreases significantly and the absorption increases significantly. An increase in film thickness results in an even greater decrease in the calculated reflection coefficient of the model structures and, consequently, in an improvement of the antireflection characteristics in the visible range. The excellent surface passivation and low reflectance results prove the potential of using the combination of the b-Si surface and the HfO₂ film for solar cell applications.
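
As a first-order, planar-film intuition for the observed thickness dependence (the real b-Si surface is nanostructured, so this is only indicative), the single-layer antireflection conditions at normal incidence are:

```latex
% Single dielectric layer (index n_1, thickness d) on a substrate n_s in ambient n_0:
% reflectance minimum at quarter-wave optical thickness
n_{1} d = \frac{\lambda}{4}, \qquad
R_{\min} = \left( \frac{n_{1}^{2} - n_{0} n_{s}}{n_{1}^{2} + n_{0} n_{s}} \right)^{2},
% which vanishes when the film index satisfies
n_{1} = \sqrt{n_{0}\, n_{s}} .
```

Here n₀, n₁ and nₛ are the indices of the ambient, the film and the substrate; the refractive index of HfO₂ (roughly 1.9–2.1 in the visible) lies close to √(n₀ n_Si), which is consistent with its antireflection benefit on Si.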

Keywords: antireflection, black silicon, HfO₂, passivation, simulation, solar cell

Procedia PDF Downloads 137
7089 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation

Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim

Abstract:

In this article, a portfolio optimization problem is addressed in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, the correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature by simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR is used directly in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum-value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, demonstrating the value of a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
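
For reference, in the standard formula the market SCR is aggregated from the sub-module charges with a prescribed correlation matrix:

```latex
\mathrm{SCR}_{\mathrm{mkt}}
  = \sqrt{\sum_{i}\sum_{j} \mathrm{Corr}_{i,j}\,\mathrm{Mkt}_{i}\,\mathrm{Mkt}_{j}}
```

where Mkt_i denotes the capital charge of sub-module i (interest rate, equity, property, spread/credit, currency, concentration) and Corr is the prescribed correlation matrix; since the sub-module charges are themselves outputs of stress tests, this aggregation is the source of the non-convexity and non-differentiability discussed above.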

Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement

Procedia PDF Downloads 113
7088 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational Fluid Dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consisted of a simplified centrifugal blood pump model that contains the fluid-flow features commonly found in such devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon and a Reynolds Stress Model, RSM) as well as LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to decompose an unstructured mesh of 76 million elements and were compared in terms of efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code SATURNE, typically on 32768 or more processors in parallel. Visualisations were produced with PARAVIEW. All six flow cases could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results show that an RSM is an appropriate choice for modelling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Complex flow features could be visualised and the flow situation inside the pump could be characterized.
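To put the quoted Reynolds-number range in perspective, a rotational Reynolds number Re = ρωD²/μ can be evaluated for typical operating points. The fluid properties, impeller diameter, and pump speeds in the sketch below are assumptions consistent with a blood-analog fluid at the scale of the benchmark pump, not values quoted in the abstract.

```python
import numpy as np

# Assumed fluid and geometry parameters (blood-analog fluid, benchmark-pump scale);
# illustrative values only.
rho = 1035.0        # density, kg/m^3
mu = 3.5e-3         # dynamic viscosity, Pa.s
d_impeller = 0.052  # impeller diameter, m

for rpm in (2500, 3500):                        # assumed pump speeds
    omega = 2.0 * np.pi * rpm / 60.0            # angular velocity, rad/s
    re = rho * omega * d_impeller ** 2 / mu     # rotational Reynolds number
    print(f"{rpm} rpm -> Re ~ {re:,.0f}")
```

With these assumed values, the two pump speeds bracket roughly the 210,000 to 293,000 range reported above, underlining why high-Reynolds-number turbulence modelling is central to the benchmark.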

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 376
7087 On the Cluster of the Families of Hybrid Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

Over the years, kernel density estimation has been extensively studied within the context of nonparametric density estimation. The fundamental components of kernel density estimation are the kernel function and the bandwidth. While the mathematical exploration of the kernel component has been relatively limited, its selection and development remain crucial. The Mean Integrated Squared Error (MISE), serving as a measure of discrepancy, provides a robust framework for assessing the effectiveness of any kernel function. A kernel function with a lower MISE is generally considered to perform better than one with a higher MISE. Hence, the primary aim of this article is to create kernels that exhibit significantly reduced MISE compared to existing classical kernels. To this end, the article introduces a cluster of hybrid polynomial kernel families. The construction of the proposed kernel functions is carried out heuristically by combining two kernels from the classical polynomial kernel family using probability axioms. We also analyse error propagation within these kernels. To assess their performance, simulation experiments and real-life datasets are employed. The results demonstrate that the proposed hybrid kernels outperform their classical counterparts.
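The exact construction used by the author is not reproduced here, but the general idea of blending two classical polynomial kernels while preserving the probability axioms (non-negativity and unit integral) can be sketched as follows; the choice of Epanechnikov and biweight components, the mixing weight, and the bandwidth are illustrative assumptions.

```python
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def biweight(u):
    return np.where(np.abs(u) <= 1, 15.0 / 16.0 * (1 - u**2) ** 2, 0.0)

def hybrid_kernel(u, alpha=0.5):
    """Convex combination of two classical polynomial kernels; since both
    components are non-negative and integrate to one, so does the mixture."""
    return alpha * epanechnikov(u) + (1 - alpha) * biweight(u)

def kde(x_grid, data, h, alpha=0.5):
    """Kernel density estimate with the hybrid kernel and bandwidth h."""
    u = (x_grid[:, None] - data[None, :]) / h
    return hybrid_kernel(u, alpha).sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(0)
data = rng.normal(size=200)                      # toy sample
grid = np.linspace(-4.0, 4.0, 201)
fhat = kde(grid, data, h=0.4, alpha=0.5)         # illustrative bandwidth and weight
mass = float((fhat * (grid[1] - grid[0])).sum()) # should be close to 1
print("estimated density integrates to ~", round(mass, 3))
```

Because a mixture of two densities is again a density, any convex combination of classical polynomial kernels is itself a valid kernel; the contribution of the paper lies in choosing combinations that reduce the MISE relative to the classical kernels.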

Keywords: classical polynomial kernels, cluster of families, global error, hybrid kernels, kernel density estimation, Monte Carlo simulation

Procedia PDF Downloads 82
7086 FPGA Based Vector Control of PM Motor Using Sliding Mode Observer

Authors: Hanan Mikhael Dawood, Afaneen Anwer Abood Al-Khazraji

Abstract:

The paper presents an investigation of a field-oriented control strategy for a Permanent Magnet Synchronous Motor (PMSM) based on hardware-in-the-loop (HIL) simulation over a wide speed range. Sensorless rotor position estimation using a sliding mode observer is illustrated, considering the effects of magnetic saturation between the d and q axes. The cross-saturation between the d and q axes has been calculated by finite-element analysis. The inductance measurements therefore account for saturation and cross-saturation and are used to obtain suitable id-characteristics in the base-speed and flux-weakening regions. Real-time matrix multiplication is implemented on a Field Programmable Gate Array (FPGA) using floating-point arithmetic; the Quartus-II environment is used to develop the FPGA designs, which are then downloaded to the development kit. A dSPACE DS1103 board is used for Pulse Width Modulation (PWM) switching and for the controller. The hardware-in-the-loop results correspond well to those obtained from the Matlab simulation, and various dynamic conditions have been investigated.
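A minimal sketch of a conventional sliding mode observer for sensorless rotor-position estimation is given below, written in Python purely to convey the structure of the algorithm; the motor parameters, observer gain, filter coefficient, and example measurements are illustrative assumptions, and the saturation-dependent inductances discussed in the abstract are not modelled.

```python
import numpy as np

# Assumed PMSM parameters and observer settings (illustrative, not from the paper)
R_s, L_s = 0.5, 1.2e-3   # stator resistance [ohm], stator inductance [H]
Ts = 1e-4                # sampling period [s]
k_smo = 50.0             # sliding-mode switching gain
lpf_a = 0.1              # first-order low-pass coefficient for the EMF estimate

i_hat = np.zeros(2)      # estimated alpha/beta currents
e_hat = np.zeros(2)      # filtered back-EMF estimate

def smo_step(i_meas, v_ab):
    """One discrete step of a conventional sliding mode observer: the switching
    term that keeps the current error on the sliding surface is low-pass
    filtered to recover the back-EMF, whose angle gives the rotor position."""
    global i_hat, e_hat
    z = k_smo * np.sign(i_hat - i_meas)                  # switching term
    i_hat = i_hat + Ts / L_s * (v_ab - R_s * i_hat - z)  # current observer update
    e_hat = e_hat + lpf_a * (z - e_hat)                  # back-EMF via low-pass
    return np.arctan2(-e_hat[0], e_hat[1])               # rotor angle estimate

# Example call with placeholder alpha/beta current and voltage measurements
theta_est = smo_step(np.array([0.1, -0.05]), np.array([12.0, 3.0]))
```

In the setup described above, this computation would run on the FPGA in floating point; in practice, the phase lag introduced by the low-pass filter is compensated and the switching gain is tuned to the machine.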

Keywords: magnetic saturation, rotor position estimation, sliding mode observer, hardware in the loop (HIL)

Procedia PDF Downloads 520