Search results for: uncertainty simulation
1605 A Robust Model Predictive Control for a Photovoltaic Pumping System Subject to Actuator Saturation Nonlinearity and Parameter Uncertainties: A Linear Matrix Inequality Approach
Authors: Sofiane Bououden, Ilyes Boulkaibet
Abstract:
In this paper, a robust model predictive controller (RMPC) for an uncertain nonlinear system under actuator saturation is designed to control a DC-DC buck converter in a PV pumping application, where the system is subject to actuator saturation and parameter uncertainties. The considered nonlinear system contains a linear constant part perturbed by an additive state-dependent nonlinear term. Based on the saturating actuator property, an appropriate linear feedback control law is constructed and used to minimize an infinite-horizon cost function within the framework of linear matrix inequalities. The proposed approach successfully provides a solution to the optimization problem that can stabilize the nonlinear plant. Furthermore, sufficient conditions for the existence of the proposed controller guarantee the robust stability of the system in the presence of polytopic uncertainties. In addition, the simulation results demonstrate the efficiency of the proposed control scheme.
Keywords: PV pumping system, DC-DC buck converter, robust model predictive controller, nonlinear system, actuator saturation, linear matrix inequality
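No implementation accompanies the abstract; purely as an illustration of the LMI framework it mentions (and not the authors' RMPC design), the following Python sketch checks quadratic stability of a polytopic system with cvxpy. The vertex matrices, tolerance, and solver defaults are assumptions.

```python
# Minimal sketch (not the authors' RMPC code): LMI feasibility test for
# quadratic stability of a polytopic system x' = A_i x, i = 1..N.
# Assumes cvxpy and an SDP-capable solver are installed.
import numpy as np
import cvxpy as cp

# Hypothetical polytope vertices of the uncertain converter model
A1 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A2 = np.array([[0.0, 1.0], [-3.0, -1.5]])
vertices = [A1, A2]

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n)]               # P positive definite
for A in vertices:                                 # Lyapunov inequality at each vertex
    constraints.append(A.T @ P + P @ A << -eps * np.eye(n))

prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
```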
Procedia PDF Downloads 181
1604 Numerical Simulation of Unsteady Natural Convective Nanofluid Flow within a Trapezoidal Enclosure Using Meshfree Method
Authors: S. Nandal, R. Bhargava
Abstract:
The paper contains a numerical study of the unsteady magneto-hydrodynamic natural convection flow of nanofluids within a symmetrical wavy-walled trapezoidal enclosure. The length and height of the enclosure are both taken equal to L. A two-phase nanofluid model is employed. The governing equations of nanofluid flow, along with the boundary conditions, are non-dimensionalized and solved using a meshfree technique, the element-free Galerkin method (EFGM), which does not require a predefined mesh for discretization of the domain. The bottom wavy wall of the enclosure is defined using a cosine function. The effects of various parameters, namely time t, amplitude of the bottom wavy wall a, Brownian motion parameter Nb, and thermophoresis parameter Nt, are examined on the rates of heat and mass transfer to visualize the cooling and heating effects. Such problems have important applications in heat exchangers and solar collectors, as wavy-walled enclosures enhance heat transfer in comparison to flat-walled enclosures.
Keywords: heat transfer, meshfree methods, nanofluid, trapezoidal enclosure
Procedia PDF Downloads 158
1603 Modelling a Distribution Network with a Hybrid Solar-Hydro Power Plant in Rural Cameroon
Authors: Contimi Kenfack Mouafo, Sebastian Klick
Abstract:
In the rural and remote areas of Cameroon, access to electricity is very limited since most of the population is not connected to the main utility grid. Throughout the country, efforts are underway not only to expand the utility grid to these regions but also to provide reliable off-grid access to electricity. The Cameroonian company Solahydrowatt is currently working on the design and planning of one of the first hybrid solar-hydropower plants of Cameroon in Fotetsa, in the western region of the country, to provide the population with reliable access to electricity. This paper models and proposes a design for the low-voltage network with a hybrid solar-hydropower plant in Fotetsa. The modelling takes into consideration the voltage compliance of the distribution network, the maximum load of operating equipment, and, most importantly, the ability of the network to operate as an off-grid system. The resulting modelled distribution network not only complies with the Cameroonian voltage deviation standard but is also capable of being operated as a stand-alone network independent of the main utility grid.
Keywords: Cameroon, rural electrification, hybrid solar-hydro, off-grid electricity supply, network simulation
Procedia PDF Downloads 124
1602 Estimation of Missing Values in Aggregate Level Spatial Data
Authors: Amitha Puranik, V. S. Binu, Seena Biju
Abstract:
Missing data is a common problem in spatial analysis, especially at the aggregate level. Missing values can occur in the covariates, in the response variable, or in both at a given location. Many missing data techniques are available to estimate missing values, but not all of these methods can be applied to spatial data since the data are autocorrelated. Hence there is a need to develop a method that estimates the missing values in both the response variable and the covariates in spatial data by taking account of the spatial autocorrelation. The present study aims to develop a model to estimate the missing data points at the aggregate level in spatial data by accounting for (a) spatial autocorrelation of the response variable, (b) spatial autocorrelation of the covariates, and (c) correlation between the covariates and the response variable. Estimating the missing values of spatial data requires a model that explicitly accounts for the spatial autocorrelation. The proposed model not only accounts for spatial autocorrelation but also utilizes the correlation that exists between covariates, within covariates, and between the response variable and the covariates. The precise estimation of the missing data points in spatial data will result in an increased precision of the estimated effects of independent variables on the response variable in spatial regression analysis.
Keywords: spatial regression, missing data estimation, spatial autocorrelation, simulation analysis
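As a loose illustration of imputation that respects spatial structure (a deliberately simplified stand-in, not the model proposed in the abstract), the sketch below fills missing response values with an iterated, distance-weighted average of neighbouring locations on synthetic data.

```python
# Simplified illustration only (toy data, not the proposed estimator):
# fill missing y-values with an inverse-distance-weighted spatial average,
# iterated until the imputed values stabilise.
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))          # aggregate-level locations
y = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=50)
missing = rng.choice(50, size=10, replace=False)
y_obs = y.copy()
y_obs[missing] = np.nan

# Inverse-distance spatial weights, row-standardised
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
W = 1.0 / (d + np.eye(50))                         # avoid division by zero on the diagonal
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

y_hat = np.where(np.isnan(y_obs), np.nanmean(y_obs), y_obs)
for _ in range(50):                                # iterate spatial smoothing on missing cells
    y_hat[missing] = (W @ y_hat)[missing]

print("RMSE on imputed cells:", np.sqrt(np.mean((y_hat[missing] - y[missing]) ** 2)))
```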
Procedia PDF Downloads 382
1601 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov's scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information that is required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format in pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O, for Lustre, in a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory's Mira (GPFS), the National Center for Supercomputing Applications' Blue Waters (Lustre), the San Diego Supercomputer Center's Comet (Lustre), and UIC's Extreme (Lustre). The tests showed that one file per node is suited for GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for calculation of the solution in every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
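The startup-read strategy described above can be pictured with a short mpi4py sketch of a collective parallel read in the spirit of the Lustre case; the file name, record size, and layout are assumptions, and this is not the DSEM code itself.

```python
# Sketch of a parallel startup-file read with MPI I/O (mpi4py), in the spirit
# of the Lustre strategy described above. File name and layout are hypothetical.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nranks = comm.Get_size()

ints_per_rank = 1024                     # assumed fixed-size integer record per rank
buf = np.empty(ints_per_rank, dtype=np.int32)

fh = MPI.File.Open(comm, "pre_files.bin", MPI.MODE_RDONLY)
offset = rank * ints_per_rank * buf.itemsize
fh.Read_at_all(offset, buf)              # collective read: every rank gets its own slice
fh.Close()

print(f"rank {rank}/{nranks} read {buf.size} integers")
```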
Procedia PDF Downloads 133
1600 Development of a Drive Cycle Based Control Strategy for the KIIRA-EV SMACK Hybrid
Authors: Richard Madanda, Paul Isaac Musasizi, Sandy Stevens Tickodri-Togboa, Doreen Orishaba, Victor Tumwine
Abstract:
New vehicle concepts targeting specific geographical markets are designed to satisfy a unique set of road and load requirements. The KIIRA-EV SMACK (KES) hybrid vehicle is designed in Uganda for the East African market. The engine and generator added to the KES electric power train serve both as the range extender and as the power assist. In this paper, the design considerations taken to achieve proper management of the on-board power from the batteries and the engine-generator, based on the specific drive cycle, are presented. To harness the fuel-efficiency benefits of the power train, a specific control philosophy operating the engine and generator in their most efficient speed-torque and speed-power regions is presented. Using a suitable model developed in MATLAB with Simulink and Stateflow, preliminary results show that the steady-state response of the vehicle for a particular hypothetical drive cycle, mimicking the expected drive conditions in city and highway traffic, is sufficient.
Keywords: control strategy, drive cycle, hybrid vehicle, simulation
Procedia PDF Downloads 380
1599 Self-Tuning-Filter and Fuzzy Logic Control for Shunt Active Power Filter
Authors: Kaddari Faiza, Mazari Benyounes, Mihoub Youcef, Safa Ahmed
Abstract:
Active filtering of electric power has now become a mature technology for reactive power and harmonic compensation caused by the proliferation of power electronics devices used for industrial, commercial, and residential purposes. The aim of this study is to enhance the power quality by improving the performance of the shunt active power filter in harmonic mitigation, so as to obtain sinusoidal source currents with very weak ripples. A power circuit configuration and control scheme for the shunt active power filter are described, with an improved method for harmonics compensation using a self-tuning filter for harmonics identification and fuzzy logic control to generate the reference current. Simulation results (using MATLAB/SIMULINK) illustrate the compensation characteristics of the proposed control strategy. Analysis of these results proves the feasibility and effectiveness of this method to improve the power quality and also shows the performance of the fuzzy logic control, which provides flexibility, high precision, and fast response. The total harmonic distortion (THD %) for the simulations was found to be within the limits imposed by the IEEE 519-1992 harmonic standard.
Keywords: Active Power Filter (APF), Self-Tuning-Filter (STF), fuzzy logic control, hysteresis-band control
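Because the evaluation above relies on the total harmonic distortion of the source current, a small Python sketch of the standard FFT-based THD computation is shown below; the waveform is synthetic and its harmonic content is assumed purely for illustration.

```python
# Sketch: total harmonic distortion (THD) of a periodic current waveform,
# computed from its FFT. The waveform here is synthetic for illustration.
import numpy as np

f0, fs, cycles = 50.0, 10_000.0, 10          # fundamental, sampling rate, window length
t = np.arange(0, cycles / f0, 1 / fs)
# Fundamental plus 5th and 7th harmonics (typical of a rectifier-type load)
i = np.sin(2*np.pi*f0*t) + 0.15*np.sin(2*np.pi*5*f0*t) + 0.08*np.sin(2*np.pi*7*f0*t)

spectrum = np.abs(np.fft.rfft(i)) / len(i)
freqs = np.fft.rfftfreq(len(i), 1 / fs)
fund = spectrum[np.argmin(np.abs(freqs - f0))]   # amplitude of the fundamental bin
harmonics = [spectrum[np.argmin(np.abs(freqs - h * f0))] for h in range(2, 40)]

thd = np.sqrt(np.sum(np.array(harmonics) ** 2)) / fund
print(f"THD = {100*thd:.2f} %")                  # compare against the IEEE 519 limit
```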
Procedia PDF Downloads 739
1598 The Experiment and Simulation Analysis of the Effect of CO₂ and Steam Addition on Syngas Composition of Natural Gas Non-Catalyst Partial Oxidation
Authors: Zhenghua Dai, Jianliang Xu, Fuchen Wang
Abstract:
Non-catalyst partial oxidation technology has been widely used to produce syngas by reforming of hydrocarbons, including gases (natural gas, shale gas, refinery gas, coalbed gas, coke oven gas, pyrolysis gas, etc.) and liquids (residual oil, asphalt, deoiled asphalt, biomass oil, etc.). For natural gas non-catalyst partial oxidation, the H₂/CO (v/v) of the syngas is about 1.8, which agrees well with the requirements of FT synthesis. But for other processes, such as carbonylation and glycol synthesis, the H₂/CO (v/v) should be close to 1 and 2, respectively. Therefore, the syngas composition from non-catalyst partial oxidation should be adjusted to satisfy the requirements of different chemical syntheses. That means a multi-reforming method by CO₂ and H₂O addition. A natural gas non-catalytic partial oxidation hot model was established. The effects of the O₂/CH₄ ratio, steam, and CO₂ on the syngas composition were studied. The experimental results indicate that the addition of CO₂ and steam into the reformer can be applied to change the syngas H₂/CO ratio. The reactor network model (RN model) was established according to the flow partition of the industrial reformer and GRI-Mech 3.0. The RN model results agree well with the industrial data. The effects of steam and CO₂ on the syngas compositions were studied with the RN model.
Keywords: non-catalyst partial oxidation, natural gas, H₂/CO, CO₂ and H₂O addition, multi-reforming method
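A rough feel for how steam addition shifts the H₂/CO ratio can be obtained from an adiabatic equilibrium calculation with the GRI-Mech 3.0 mechanism. The sketch below (assuming a Cantera installation; the mechanism file name, preheat temperature, and pressure are assumptions) is only a simplified stand-in for the reactor network model described in the abstract.

```python
# Rough sketch (not the RN model from the paper): adiabatic equilibrium of
# CH4 partial oxidation with GRI-Mech 3.0, showing how steam addition shifts
# the H2/CO ratio. Assumes Cantera is installed; file name may differ by version.
import cantera as ct

def h2_co_ratio(steam_per_ch4):
    gas = ct.Solution("gri30.yaml")
    comp = f"CH4:1.0, O2:0.6, H2O:{steam_per_ch4}"
    gas.TPX = 600.0, 30 * ct.one_atm, comp       # assumed preheat and gasifier pressure
    gas.equilibrate("HP")                        # constant-enthalpy, constant-pressure
    return gas["H2"].X[0] / gas["CO"].X[0]

for s in (0.0, 0.3, 0.6):
    print(f"H2O/CH4 = {s:.1f}  ->  H2/CO = {h2_co_ratio(s):.2f}")
```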
Procedia PDF Downloads 212
1597 Sulfide Removal from Liquid Using Biofilm on Packed Bed of Salak Fruit Seeds
Authors: Retno Ambarwati Sigit Lestari, Wahyudi Budi Sediawan, Sarto Sarto
Abstract:
This study focused on the removal of sulfide from liquid solution using a biofilm on a packed bed of salak fruit seeds. The biofilter operation of 444 hours consisted of 6 phases of operation. Each phase lasted for approximately 72 to 82 hours and was run at various inlet concentrations and flow rates. The highest removal efficiency is 92.01%, at the end of phase 7, at an inlet concentration of 60 ppm and a flow rate of 30 mL min-1. A mathematical model of sulfide removal was proposed to describe the operation of the biofilter. The model can be applied to describe the removal of sulfide from liquid using a packed-bed biofilter. The simulation yields the values of the process parameters: the maximum specific growth rate is 4.15E-8 s-1, the saturation constant is 9.1E-8 g cm-3, the liquid mass transfer coefficient is 0.5 cm s-1, Henry's constant is 0.007, and the ratio of microorganism mass grown to mass of sulfide consumed is 30. The value of the maximum specific growth rate in the early process is 0.00000004 s-1.
Keywords: biofilm, packed bed, removal, sulfide, salak fruit seeds
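Using the reported orders of magnitude for the kinetic parameters, a minimal Monod growth/consumption sketch is given below; it is not the full packed-bed biofilm model (no mass transfer or axial transport is included), and the initial concentrations are assumptions.

```python
# Sketch: Monod-type growth and sulfide consumption using the parameter values
# reported above; this is not the full biofilm/packed-bed model.
import numpy as np
from scipy.integrate import solve_ivp

mu_max = 4.15e-8      # maximum specific growth rate, 1/s (reported value)
Ks = 9.1e-8           # saturation constant, g/cm^3 (reported value)
Yxs = 30.0            # biomass grown per mass of sulfide consumed (reported value)

def rhs(t, y):
    S, X = y                              # sulfide and biomass concentrations, g/cm^3
    mu = mu_max * S / (Ks + S)            # Monod specific growth rate
    return [-mu * X / Yxs, mu * X]        # substrate consumed, biomass grown

y0 = [6e-5, 1e-6]                         # ~60 ppm sulfide and a small inoculum (assumed)
sol = solve_ivp(rhs, (0.0, 444 * 3600.0), y0, method="LSODA", rtol=1e-8)
print(f"final sulfide: {sol.y[0, -1]:.3e} g/cm^3, final biomass: {sol.y[1, -1]:.3e} g/cm^3")
```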
Procedia PDF Downloads 194
1596 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: one is the continuous-time jump-diffusion used to model high-frequency data, and the other is the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt the realized range-based threshold estimation for high-frequency financial data rather than the realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimate. The asymptotic theories are mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology. Specifically, it is demonstrated how our proposed approaches can be practically used on some financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
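For orientation, the basic realized range volatility estimator referred to above can be sketched in a few lines; the threshold (jump-robust) correction and the GQARCH-Itô-Jumps quasi-likelihood estimation of the paper are not reproduced, and the simulated path parameters are assumptions.

```python
# Sketch: basic realized range volatility from intraday highs/lows
# (Parkinson scaling 1/(4 ln 2)); the paper's threshold/jump-robust version
# is not reproduced here. The simulated path is a plain Brownian motion.
import numpy as np

rng = np.random.default_rng(1)
n_intervals, steps_per_interval = 78, 60          # e.g. 5-minute intervals of fine steps
sigma_true = 0.2 / np.sqrt(252)                   # assumed daily volatility
dt = 1.0 / (n_intervals * steps_per_interval)

log_price = np.cumsum(sigma_true * np.sqrt(dt) * rng.normal(size=n_intervals * steps_per_interval))
log_price = log_price.reshape(n_intervals, steps_per_interval)

ranges = log_price.max(axis=1) - log_price.min(axis=1)
realized_range_var = np.sum(ranges ** 2) / (4.0 * np.log(2.0))
print(f"realized-range daily variance: {realized_range_var:.3e}  (true: {sigma_true**2:.3e})")
```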
Procedia PDF Downloads 158
1595 Modeling of Power Network by ATP-Draw for Lightning Stroke Studies
Authors: John Morales, Armando Guzman
Abstract:
Protection relay algorithms play a crucial role in Electric Power System stability, where lightning strokes produce the major percentage of faults and outages of Transmission Lines (TLs) and Distribution Feeders (DFs). In this context, it is imperative to develop novel protection relay algorithms. However, in order to reach this aim, Electric Power System (EPS) networks have to be simulated as realistically as possible, especially the lightning phenomena and the EPS elements that affect their behavior, such as direct and indirect lightning, insulator strings, overhead lines, soil ionization, and others. However, researchers have proposed new protection relay algorithms considering common faults, which are not produced by lightning strokes, omitting these phenomena, which are imperative for transmission line protection relay behavior. Based on the above, this paper presents the possibilities of using the Alternative Transients Program ATP-Draw for the modeling and simulation of some models for lightning stroke studies, especially for protection relays, developed through the Transient Analysis of Control Systems (TACS) and the MODELS language of ATP-Draw.
Keywords: back-flashover, faults, flashover, lightning stroke, modeling of lightning, outages, protection relays
Procedia PDF Downloads 316
1594 Numerical Investigation of Flow Characteristics inside the External Gear Pump Using Urea Liquid Medium
Authors: Kumaresh Selvakumar, Man Young Kim
Abstract:
In a selective catalytic reduction (SCR) unit, the injection system is provided with a dedicated dosing pump to govern the urea injection. The urea-based operating liquid from the AdBlue tank links up directly with the dosing pump unit, which furnishes the appropriate high pressure, allowing the flow characteristics inside the liquid pump to be examined. This work aims at demonstrating the importance of the external gear pump in providing the pertinent high pressure and the respective mass flow rate for each rotation. Numerical simulations are conducted using the immersed solid method for a better understanding of the unsteady flow characteristics within the pump. Parametric analyses have been carried out for the gear speed and mass flow rate to find the behavior of the pressure fluctuations. In the simulation results, the outlet pressure reaches its maximum magnitude with the increase in rotational speed, and the fluctuations grow higher.
Keywords: AdBlue tank, external gear pump, immersed solid method, selective catalytic reduction
Procedia PDF Downloads 280
1593 Optimal Scheduling of Load and Operational Strategy of a Load Aggregator to Maximize Profit with PEVs
Authors: Md. Shafiullah, Ali T. Al-Awami
Abstract:
This project proposes optimal scheduling of the imported power of a load aggregator, with the utilization of EVs, to maximize its profit. With the increase of renewable energy resources, the electricity price in the competitive market becomes more uncertain; on the other hand, with the penetration of renewable distributed generators in the distribution network, the predicted load of a load aggregator also becomes uncertain in real time. Though there are uncertainties in both load and price, the use of EV storage capacity can make the operation of the load aggregator (LA) flexible. The LA submits its offer to the day-ahead market based on predicted loads and optimized use of its EVs to maximize its profit, and in real-time operation it uses its energy storage capacity in such a way that it can maximize its profit. In this project, the load aggregator's profit maximization algorithm is formulated, and the optimization problem is solved with the help of CVX. As in real-time operation the forecasted loads differ from the actual load, the mismatches are settled in the real-time balancing market. Simulation results compare the profit of a load aggregator with a hypothetical group of 1000 EVs and without EVs.
Keywords: CVX, electricity market, load aggregator, load and price uncertainties, profit maximization, real time balancing operation
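The structure of such a day-ahead scheduling problem can be illustrated with a toy convex program (the paper itself uses CVX in MATLAB); in the sketch below the prices, predicted load, and EV fleet limits are made-up numbers, and profit maximization is reduced to minimizing the energy purchase cost for a fixed-revenue load.

```python
# Toy sketch of the day-ahead scheduling idea (the paper uses CVX/MATLAB):
# buy grid power and charge/discharge aggregated EV storage to serve a
# predicted load at minimum cost. Prices, load and EV limits are made up.
import numpy as np
import cvxpy as cp

T = 24
price = 40 + 20 * np.sin(np.linspace(0, 2 * np.pi, T))    # $/MWh, assumed day-ahead prices
load = 5 + 2 * np.cos(np.linspace(0, 2 * np.pi, T))       # MWh per hour, predicted load
cap, p_max, e0 = 10.0, 3.0, 5.0                            # EV fleet energy/power limits (assumed)

g = cp.Variable(T, nonneg=True)      # power bought from the grid
ch = cp.Variable(T, nonneg=True)     # EV charging
dis = cp.Variable(T, nonneg=True)    # EV discharging
soc = cp.Variable(T + 1)             # state of charge of the aggregated fleet

cons = [soc[0] == e0, soc <= cap, soc >= 0, ch <= p_max, dis <= p_max]
for t in range(T):
    cons += [soc[t + 1] == soc[t] + ch[t] - dis[t],
             g[t] + dis[t] == load[t] + ch[t]]             # power balance each hour

cost = price @ g
cp.Problem(cp.Minimize(cost), cons).solve()
print(f"day-ahead energy cost: {cost.value:.1f} $")
```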
Procedia PDF Downloads 416
1592 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting
Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi
Abstract:
An ad hoc network consists of wireless mobile equipment which connects to each other without any infrastructure, using connection equipment. The best way to form a hierarchical structure is clustering. Various methods of clustering can form more stable clusters according to nodes' mobility. In this research we propose an algorithm which allocates a weight to nodes based on factors such as link stability and power reduction rate. According to the weight allocated in the previous phase, the cellular learning automaton picks out, in the second phase, the nodes which are candidates for being cluster heads. In the third phase, the learning automaton selects the cluster head nodes and member nodes and forms the clusters. Thus, this automaton learns from the setting and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm we have used OMNeT++ 4.2.2. Simulation results indicate that newly formed clusters have a longer lifetime than with previous algorithms and strongly decrease network overhead by reducing the update rate.
Keywords: mobile ad hoc networks, clustering, learning automaton, cellular automaton, battery power
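The additive node-weighting step can be pictured with a small sketch; the two learning-automaton phases of the proposed algorithm are not reproduced, and the weighting coefficients, metrics, and radio range are assumptions.

```python
# Sketch of the additive node-weighting step only (the cellular
# learning-automaton phases are not reproduced). Node scores combine link
# stability and battery-drain rate; the best-scoring node in each
# neighbourhood becomes a cluster-head candidate. All numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
n = 20
pos = rng.uniform(0, 100, size=(n, 2))                 # node positions, metres
link_stability = rng.uniform(0.5, 1.0, size=n)         # higher is better (assumed metric)
power_drain = rng.uniform(0.01, 0.1, size=n)           # battery drain rate, lower is better

w_s, w_p = 0.7, 0.3                                    # assumed additive weights
score = w_s * link_stability - w_p * power_drain / power_drain.max()

radio_range = 30.0
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
neighbours = dist <= radio_range                       # includes the node itself

heads = {int(np.argmax(np.where(neighbours[i], score, -np.inf))) for i in range(n)}
print("candidate cluster heads:", sorted(heads))
```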
Procedia PDF Downloads 411
1591 Symbol Synchronization and Resource Reuse Schemes for Layered Video Multicast Service in Long Term Evolution Networks
Authors: Chung-Nan Lee, Sheng-Wei Chu, You-Chiun Wang
Abstract:
LTE (Long Term Evolution) employs the eMBMS (evolved Multimedia Broadcast/Multicast Service) protocol to deliver video streams to a multicast group of users. However, it requires all multicast members to receive a video stream at the same transmission rate, which degrades the overall service quality when some users encounter bad channel conditions. To overcome this problem, this paper provides two efficient resource allocation schemes for such LTE networks. The symbol synchronization (S2) scheme assumes that the macro and pico eNodeBs use the same frequency channel to deliver the video stream to all users. It then adopts a multicast transmission index to guarantee fairness among users. On the other hand, the resource reuse (R2) scheme allows eNodeBs to transmit data on different frequency channels. Then, by introducing the concept of frequency reuse, it can further improve the overall service quality. Extensive simulation results show that the S2 and R2 schemes can improve fairness by around 50% and video quality by around 14%, respectively, as compared with the common maximum throughput method.
Keywords: LTE networks, multicast, resource allocation, layered video
Procedia PDF Downloads 389
1590 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion
Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong
Abstract:
The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step for the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that are characterized by linear features in appearance. However, the characteristic linear features of the fissures are often subtle due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. Thus, it is desirable to enhance the linear features present in chest CT images so that the distinctiveness in the delineation of the lobes is improved. We propose a recursive diffusion process that prefers coherent features based on the analysis of the structure tensor in an anisotropic manner. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure of the features, leading to a degradation of the characteristic power in the feature space. Thus, it is required to take into consideration the local structure of the features in scale and direction when computing the structure tensor. We apply an anisotropic diffusion, in consideration of the scale and direction of the features, in the computation of the structure tensor, which subsequently provides the geometrical structure of the features through its eigenanalysis, which in turn determines the shape of the anisotropic diffusion kernel. The recursive application of the anisotropic diffusion, with the kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space where the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. The recursive interaction between the anisotropic diffusion based on the geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space where geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. It is shown that our algorithm yields precise detection of the fissures while overcoming the subtlety in defining the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of the false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor
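A single, non-recursive pass of the structure-tensor analysis underlying the method can be sketched as follows; the recursive anisotropic-diffusion loop and the geometry-driven kernels of the proposed algorithm are not reproduced, and the smoothing scales and test image are assumptions.

```python
# Sketch: one pass of structure-tensor analysis (Gaussian-regularised), giving
# a coherence map that highlights curvilinear features. The recursive
# anisotropic-diffusion loop of the proposed method is not reproduced here.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
img = rng.normal(size=(128, 128))
img[60:64, :] += 3.0                          # synthetic bright, fissure-like line
img = gaussian_filter(img, 1.0)

Ix = gaussian_filter(img, 1.0, order=(0, 1))  # smoothed derivative along x (columns)
Iy = gaussian_filter(img, 1.0, order=(1, 0))  # smoothed derivative along y (rows)

rho = 3.0                                     # integration (regularisation) scale
Jxx = gaussian_filter(Ix * Ix, rho)
Jxy = gaussian_filter(Ix * Iy, rho)
Jyy = gaussian_filter(Iy * Iy, rho)

# Eigenvalues of the 2x2 structure tensor at every pixel
tmp = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
lam1 = 0.5 * (Jxx + Jyy + tmp)
lam2 = 0.5 * (Jxx + Jyy - tmp)
coherence = ((lam1 - lam2) / (lam1 + lam2 + 1e-12)) ** 2   # near 1 on line-like structure
print("mean coherence on the synthetic line:", coherence[60:64, :].mean())
```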
Procedia PDF Downloads 232
1589 A Bivariate Inverse Generalized Exponential Distribution and Its Applications in Dependent Competing Risks Model
Authors: Fatemah A. Alqallaf, Debasis Kundu
Abstract:
The aim of this paper is to introduce a bivariate inverse generalized exponential distribution which has a singular component. The proposed bivariate distribution can be used when the marginals have heavy-tailed distributions and non-monotone hazard functions. Due to the presence of the singular component, it can be used quite effectively when there are ties in the data. Since it has four parameters, it is a very flexible bivariate distribution, and it can be used quite effectively for analyzing various bivariate data sets. Several dependency properties and dependency measures have been obtained. The maximum likelihood estimators cannot be obtained in closed form, and estimation involves solving a four-dimensional optimization problem. To avoid that, we propose to use an EM algorithm that involves solving only one non-linear equation at each E-step. Hence, the implementation of the proposed EM algorithm is very straightforward in practice. Extensive simulation experiments and the analysis of one data set have been performed. We have observed that the proposed bivariate inverse generalized exponential distribution can be used for modeling dependent competing risks data. One data set has been analyzed to show the effectiveness of the proposed model.
Keywords: Block and Basu bivariate distributions, competing risks, EM algorithm, Marshall-Olkin bivariate exponential distribution, maximum likelihood estimators
Procedia PDF Downloads 143
1588 Vaccine Development for Newcastle Disease Virus in Poultry
Authors: Muhammad Asif Rasheed
Abstract:
Newcastle disease virus (NDV), an avian orthoavulavirus, is the causative agent of Newcastle disease (ND) and can even cause epidemics when the disease is not controlled. Previously, several vaccines based on attenuated and inactivated viruses have been reported, which are rendered less effective over time due to changes in the viral genome. Therefore, we aimed to develop an effective multi-epitope vaccine against the haemagglutinin-neuraminidase (HN) protein of 26 NDV strains from Pakistan through modern immunoinformatic approaches. As a result, a vaccine chimaera was constructed by combining T-cell and B-cell epitopes with appropriate linkers and an adjuvant. The designed vaccine was highly immunogenic, non-allergenic, and antigenic; therefore, the potential 3D structure of the multi-epitope vaccine was constructed, refined, and validated. A molecular docking study of the multi-epitope vaccine candidate with the chicken Toll-like receptor-4 indicated successful binding. An in silico immunological simulation was used to evaluate the candidate vaccine's ability to elicit an effective immune response. According to the computational studies, the proposed multi-epitope vaccine is physically stable and may induce immune responses, which suggests it is a strong candidate against the 26 Newcastle disease virus strains from Pakistan. A wet-lab study is under way to confirm the results.
Keywords: epitopes, Newcastle disease virus, paramyxovirus, vaccine
Procedia PDF Downloads 120
1587 Experimental and Numerical Analysis of the Effects of Ball-End Milling Process upon Residual Stresses and Cutting Forces
Authors: Belkacem Chebil Sonia, Bensalem Wacef
Abstract:
The majority of ball-end milling models include only the influence of cutting parameters (cutting speed, feed rate, depth of cut), and this influence is studied mostly with respect to cutting forces. Therefore, this study proposes an accurate ball-end milling process model which also includes the influence of tool-workpiece inclination. In addition, a characterization of the residual stresses resulting from thermo-mechanical loading in the workpiece is presented, and the influence of tool-workpiece inclination and cutting parameters on the residual stress distribution is studied. In order to predetermine the cutting forces and residual stresses during a milling operation, a thermo-mechanical three-dimensional numerical model of ball-end milling was developed. Furthermore, an experimental campaign of ball-end milling tests was carried out on a 5-axis machining center to determine the cutting forces and characterize the residual stresses. The simulation results are compared with the experiments to validate the finite element model and subsequently identify the optimum inclination angle and cutting parameters.
Keywords: ball end milling, cutting forces, cutting parameters, residual stress, tool-workpiece inclination
Procedia PDF Downloads 308
1586 Design and Implementation of Image Super-Resolution for Myocardial Image
Authors: M. V. Chidananda Murthy, M. Z. Kurian, H. S. Guruprasad
Abstract:
Super-resolution is the technique of intelligently upscaling images, avoiding artifacts or blurring, and deals with the recovery of a high-resolution image from one or more low-resolution images. Single-image super-resolution is a process of obtaining a high-resolution image from a set of low-resolution observations by signal processing. While super-resolution has been demonstrated to improve image quality in scaled-down images in the image domain, its effect on Fourier-based techniques remains unknown. Super-resolution substantially improved the spatial resolution of patient LGE images by sharpening the edges of the heart and the scar. This paper aims at investigating the effects of single-image super-resolution on Fourier-based and image-based methods of scale-up. In this paper, a training phase first uses pairs of low-resolution and high-resolution images to obtain a dictionary. In the test phase, patches are extracted, and the difference between the high-resolution image and the image interpolated from the low-resolution image is generated. Next, a simulated image is obtained by applying a convolution method to the dictionary image and the extracted patches. Finally, the super-resolution image is obtained by combining the fused image with the difference between the high-resolution and interpolated images. Super-resolution reduces image errors and improves image quality.
Keywords: image dictionary creation, image super-resolution, LGE images, patch extraction
Procedia PDF Downloads 375
1585 Effect of Tilt Angle of Herringbone Microstructures on Enhancement of Heat and Mass Transfer
Authors: Nathan Estrada, Fangjun Shu, Yanxing Wang
Abstract:
The heat and mass transfer characteristics of a simple shear flow over a surface covered with staggered herringbone structures are numerically investigated using the lattice Boltzmann method. The focus is on the effect of the ridge angle of the structures on the enhancement of heat and mass transfer. In the simulation, the temperature and mass concentration are modeled as a passive scalar released from the moving top wall and absorbed at the structured bottom wall. The Reynolds number is fixed at 100. Two Prandtl or Schmidt numbers, 1 and 10, are considered. The results show that advective scalar transport plays a more important role at larger Schmidt numbers. The fluid travels downward into the grooves with higher scalar concentration at the backward groove tips and travels upward with lower scalar concentration at the forward groove tips. Different tilt angles result in different flow advection in the wall-normal direction and thus different heat and mass transport efficiencies. The maximum enhancement is achieved at an angle between 15° and 30°. The mechanism of heat and mass transfer is analyzed in detail.
Keywords: fluid mechanics, heat and mass transfer, microfluidics, staggered herringbone mixer
Procedia PDF Downloads 112
1584 A Modular Solution for Large-Scale Critical Industrial Scheduling Problems with Coupling of Other Optimization Problems
Authors: Ajit Rai, Hamza Deroui, Blandine Vacher, Khwansiri Ninpan, Arthur Aumont, Francesco Vitillo, Robert Plana
Abstract:
Large-scale critical industrial scheduling problems are based on Resource-Constrained Project Scheduling Problems (RCPSP), which necessitate integration with other optimization problems (e.g., vehicle routing, supply chain, or unique industrial ones), thus requiring practical solutions (i.e., modular, computationally efficient, with feasible solutions). To the best of our knowledge, the current industrial state of the art does not address this holistic problem. We propose an original modular solution that answers the issues exhibited by the delivery of complex projects. With three interlinked entities (project, task, resources), each having its own constraints, it uses a greedy heuristic with a dynamic cost function for each task and a situational assessment at each time step. It handles large-scale data and can be easily integrated with other optimization problems, already existing industrial tools, and unique constraints as required by the use case. The solution has been tested and validated by domain experts on three use cases: outage management in Nuclear Power Plants (NPPs), planning of future NPP maintenance operations, and an application in the defense industry on supply chain and factory relocation. In the first use case, the solution, in addition to the resources' availability and the tasks' logical relationships, also integrates several project-specific constraints for outage management, such as handling of resource incompatibility, updating of task priorities, pausing tasks in specific circumstances, and adjusting dynamic units of resources. With more than 20,000 tasks and multiple constraints, the solution provides a feasible schedule within 10-15 minutes on a standard computer. This time-effective simulation corresponds with the nature of the problem and the requirement of several scenarios (30-40 simulations) before finalizing the schedules. The second use case is a factory relocation project where production lines must be moved to a new site while ensuring the continuity of their production. This generates the challenge of merging job shop scheduling and the RCPSP with location constraints. Our solution allows the automation of the production tasks while considering the rate expectation. The simulation algorithm manages the use and movement of resources and products to respect a given relocation scenario. The last use case establishes a future maintenance operation in an NPP. The project contains complex and hard constraints, such as the Finish-Start precedence relationship (i.e., successor tasks have to start immediately after their predecessors while respecting all constraints), shareable coactivity for managing workspaces, and requirements of a specific state of "cyclic" resources (they can have multiple possible states with only one active at a time) to perform tasks (which can require unique combinations of several cyclic resources). Our solution satisfies the requirement of minimizing the state changes of cyclic resources coupled with makespan minimization. It offers a solution for 80 cyclic resources with 50 incompatibilities between levels in less than a minute. In conclusion, we propose a fast and feasible modular approach to various industrial scheduling problems that was validated by domain experts and is compatible with existing industrial tools.
This approach can be further enhanced by the use of machine learning techniques on historically repeated tasks to gain further insights for delay-risk mitigation measures.
Keywords: deterministic scheduling, optimization coupling, modular scheduling, RCPSP
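The greedy, precedence- and resource-feasible core of such a scheduler can be sketched on a toy RCPSP instance; the dynamic cost function, situational assessment, and industrial constraints described above are not reproduced, and the task data and priorities are made up.

```python
# Skeleton of a greedy serial scheduler for a tiny RCPSP instance (precedence +
# a single renewable resource). The industrial solution's dynamic cost function
# and per-step situational assessment are not reproduced; priorities here are
# just static numbers, and the instance is made up.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: int
    demand: int                      # units of the single resource needed
    preds: list = field(default_factory=list)
    priority: float = 0.0            # stand-in for the dynamic cost function

CAPACITY = 4
tasks = {
    "A": Task("A", 3, 2, [], 1.0),
    "B": Task("B", 2, 3, ["A"], 2.0),
    "C": Task("C", 4, 2, ["A"], 1.5),
    "D": Task("D", 1, 1, ["B", "C"], 3.0),
}

start, finish, t = {}, {}, 0
usage = {}                            # resource usage per time step
while len(finish) < len(tasks):
    ready = [k for k, v in tasks.items()
             if k not in start and all(p in finish and finish[p] <= t for p in v.preds)]
    ready.sort(key=lambda k: -tasks[k].priority)      # greedy: highest priority first
    for k in ready:
        tk = tasks[k]
        if all(usage.get(t + dt, 0) + tk.demand <= CAPACITY for dt in range(tk.duration)):
            start[k], finish[k] = t, t + tk.duration
            for dt in range(tk.duration):
                usage[t + dt] = usage.get(t + dt, 0) + tk.demand
    t += 1

print("schedule:", {k: (start[k], finish[k]) for k in tasks})
```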
Procedia PDF Downloads 199
1583 Diffusion Treatment of Niobium and Molybdenum on Pure Titanium and Titanium Alloy Ti-64Al and Their Properties
Authors: Kaouka Alaeddine, K. Benarous
Abstract:
This study aims to obtain surfaces of pure titanium and the titanium alloy Ti-64Al with high performance through a diffusion process. Two alloying agents, niobium (Nb) and molybdenum (Mo), were used in this treatment, spread on elemental titanium and on the Ti-64Al alloy. Nb and Mo are used in powder form to increase the contact surface and to improve the distribution. Both Mo and Nb were distributed on samples of Ti and Ti-64Al at 1100 °C and 1200 °C for 3 h, and different experiments were performed to address different objectives. This work was carried out to improve some properties and the microstructure of the Ti and Ti-64Al surfaces, using optical microscopy and SEM, and to study some mechanical properties. The effects of temperature and powder content on the microstructure of the Ti and Ti-64Al alloy, the phases formed, and the hardness values of Ti and Ti-64Al were determined. Experimental results indicate that, with increasing powder content and/or temperature, the α + β phases change to an equiaxed β lamellar structure. In particular, the experiments at 1200 °C produced by diffusion both the equiaxed lamellar β phase and the α + β phase, thus meeting the objectives established in the work. In addition, simulation results obtained with the DICTRA software are used for comparison with the experimental results.
Keywords: diffusion, powder metallurgy, titanium alloy, molybdenum, niobium
Procedia PDF Downloads 148
1582 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model
Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis
Abstract:
In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
Keywords: cause of failure, linear degradation path, reliability function, expectation-maximization algorithm, intensity, masked data
Procedia PDF Downloads 334
1581 On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract:
In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality of service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on the reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss
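The loss-tolerance experiment can be pictured with a small sketch using scikit-learn; the synthetic features below merely stand in for the 3D-acceleration windows, and the zero-fill treatment of lost values is an assumption rather than the authors' pipeline.

```python
# Sketch: measure how SVM classification accuracy degrades as random data loss
# increases. Synthetic features stand in for the 3D-acceleration windows; this
# is not the authors' dataset or feature pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)

rng = np.random.default_rng(0)
for loss in (0.0, 0.2, 0.4, 0.6):
    X_lossy = X_te.copy()
    mask = rng.random(X_lossy.shape) < loss       # randomly lost feature values
    X_lossy[mask] = 0.0                           # simple zero-fill for lost data
    acc = clf.score(X_lossy, y_te)
    print(f"loss fraction {loss:.1f}: accuracy {acc:.3f}")
```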
Procedia PDF Downloads 475
1580 Assessment of Exploitation Vulnerability of Quantum Communication Systems with Phase Encryption
Authors: Vladimir V. Nikulin, Bekmurza H. Aitchanov, Olimzhon A. Baimuratov
Abstract:
Quantum communication technology takes advantage of the intrinsic properties of laser carriers, such as very high data rates and low power requirements, to offer unprecedented data security. Quantum processes at the physical layer of encryption are used for signal encryption with very competitive performance characteristics. The range of applications for QC systems spans from fiber-based to free-space links and from secure banking operations to mobile airborne and space-borne networking, where they are subjected to channel distortions. Under practical conditions, the channel can alter the optical wavefront characteristics, including its phase. In addition, phase noise of the communication source and photo-detection noise alter the signal and bring additional ambiguity into the measurement process. If quantized values of photons are used to encrypt the signal, exploitation of quantum communication links becomes extremely difficult. In this paper, we present the results of analysis and simulation studies of the effects of noise on phase estimation for quantum systems with different numbers of encryption bases and operating at different power levels.
Keywords: encryption, phase distortion, quantum communication, quantum noise
Procedia PDF Downloads 553
1579 Improving the Frequency Response of a Circular Dual-Mode Resonator with a Reconfigurable Bandwidth
Authors: Muhammad Haitham Albahnassi, Adnan Malki, Shokri Almekdad
Abstract:
In this paper, a method for reconfiguring bandwidth in a circular dual-mode resonator is presented. The method concerns the optimized geometry of a structure that may be used to host the tuning elements, which are typically RF (Radio Frequency) switches. The tuning elements themselves, and their performance during tuning, are not the focus of this paper. The designed resonator is able to reconfigure its fractional bandwidth by adjusting the inter-coupling level between the degenerate modes, while at the same time improving its response by adjusting the external-coupling level and keeping the center frequency fixed. The inter-coupling level has been adjusted by changing the dimensions of the perturbation element, while the external-coupling level has been adjusted by changing one of the feeder dimensions. The design was arrived at via optimization. The agreement between simulation and measurement results of the designed and implemented filters shows good improvements in return loss values and the stability of the center frequency.
Keywords: dual-mode resonators, perturbation theory, reconfigurable filters, software defined radio, cognitive radio
Procedia PDF Downloads 167
1578 Mathematical Modelling of Human Cardiovascular-Respiratory System Response to Exercise in Rwanda
Authors: Jean Marie Ntaganda, Froduald Minani, Wellars Banzi, Lydie Mpinganzima, Japhet Niyobuhungiro, Jean Bosco Gahutu, Vincent Dusabejambo, Immaculate Kambutse
Abstract:
In this paper, we present a nonlinear dynamic model for the interactive mechanism of the cardiovascular and respiratory systems. The model is designed and analyzed for humans during physical exercise. In order to verify the adequacy of the designed model, data collected in Rwanda are used for validation. We have simulated the impact of heart rate and alveolar ventilation, as controls of the cardiovascular and respiratory systems respectively, on the steady-state response of the main cardiovascular hemodynamic quantities, i.e., systemic arterial and venous blood pressures, arterial oxygen partial pressure, and arterial carbon dioxide partial pressure, for the stabilized values of the controls. We used data collected in Rwanda for both males and females during physical activities. We obtained a good agreement with physiological data in the literature. The model may represent an important tool to improve the understanding of exercise physiology.
Keywords: exercise, cardiovascular/respiratory, hemodynamic quantities, numerical simulation, physical activity, sportsmen in Rwanda, system
Procedia PDF Downloads 244
1577 Transport of Analytes under Mixed Electroosmotic and Pressure Driven Flow of Power Law Fluid
Authors: Naren Bag, S. Bhattacharyya, Partha P. Gopmandal
Abstract:
In this study, we analyze the transport of analytes under a two-dimensional steady incompressible flow of power-law fluids through a rectangular nanochannel. A mathematical model based on the Cauchy momentum, Nernst-Planck, and Poisson equations is considered to study the combined effect of mixed electroosmotic (EO) and pressure-driven (PD) flow. The coupled governing equations are solved numerically by the finite volume method. We study extensively the effects of key parameters, e.g., flow behavior index, concentration of the electrolyte, surface potential, imposed pressure gradient, and imposed electric field strength, on the net average flow across the channel. In addition, to study the effect of mixed EOF and PD flow on the analyte distribution across the channel, we consider a nonlinear model based on the general convective-diffusion-electromigration equation. We also present the retention factor for various values of the electrolyte concentration and flow behavior index.
Keywords: electric double layer, finite volume method, flow behavior index, mixed electroosmotic/pressure driven flow, non-Newtonian power-law fluids, numerical simulation
Procedia PDF Downloads 311
1576 Study of Mechanical Properties of Aluminium Alloys on Normal Friction Stir Welding and Underwater Friction Stir Welding for Structural Applications
Authors: Lingaraju Dumpala, Laxmi Mohan Kumar Chintada, Devadas Deepu, Pravin Kumar Yadav
Abstract:
Friction stir welding is a novel and cutting-edge joining technique; it is widely used in the fields of transportation, aerospace, defense, etc. To achieve sound welded joints and good properties in friction stir welded components, it is essential to carry out this advanced process following a prescribed systematic procedure. Underwater Friction Stir Welding (UFSW) is currently the field of interest for this research work. The study of the UFSW process aims to understand the problems encountered in the past and the means by which the mechanical properties of the welded joints can be improved, contributing to an acceptable and efficient joint. A meticulous review is given on how to modify the experimental setup from NFSW to UFSW, from which the influence of tool materials, feeds, spindle angle, load, and rotational speeds on the mechanical properties can be discerned. The obtained outcomes are validated using the DEFORM-3D simulation software.
Keywords: Underwater Friction Stir Welding (UFSW), Al alloys, mechanical properties, Normal Friction Stir Welding (NFSW)
Procedia PDF Downloads 288