Search results for: diffusion approximation
570 Accurate And Efficient Global Approximation using Adaptive Polynomial RSM for Complex Mechanical and Vehicular Performance Models
Authors: Y. Z. Wu, Z. Dong, S. K. You
Abstract:
Global approximation of a complex mathematical function or computer model over a large variable domain using a metamodel is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multiphysics systems. To overcome the limitations of existing response surface (RS), surrogate, or metamodeling methods for complex models over a large variable domain, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process, divides the entire domain of design variables into multiple cells, identifies rougher cells with large modeling error, and further divides these cells along the roughest dimension. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
Keywords: Global approximation, polynomial response surface, domain decomposition, domain combination, multiphysics modeling, hybrid powertrain optimization.
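For readers who want a feel for the basic building block of the method, the following is a minimal Python sketch of a quadratic response surface fitted to Latin hypercube samples of an expensive model. It is an illustration only: the cell subdivision, error-driven refinement, and cell sorting described in the abstract are not reproduced, and the toy `expensive` function, sample counts, and the domain [0, 1]^d are assumptions.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Simple Latin hypercube sample on the unit cube [0, 1]^d."""
    edges = np.linspace(0.0, 1.0, n_samples + 1)
    pts = rng.uniform(edges[:-1], edges[1:], size=(n_dims, n_samples)).T
    for j in range(n_dims):
        pts[:, j] = rng.permutation(pts[:, j])   # decouple the dimensions
    return pts

def quadratic_features(X):
    """Full quadratic basis: 1, x_i, and x_i * x_j for i <= j."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_quadratic_rs(model, n_samples, n_dims, seed=0):
    """Fit a quadratic response surface to an expensive model over [0, 1]^d."""
    rng = np.random.default_rng(seed)
    X = latin_hypercube(n_samples, n_dims, rng)
    y = np.array([model(x) for x in X])
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return lambda Xq: quadratic_features(np.atleast_2d(Xq)) @ beta

# toy "expensive" model and a quick accuracy check on fresh points
expensive = lambda x: np.sin(3.0 * x[0]) + x[1] ** 2
rs = fit_quadratic_rs(expensive, n_samples=40, n_dims=2)
X_test = np.random.default_rng(1).random((5, 2))
print(np.c_[rs(X_test), [expensive(x) for x in X_test]])
```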
569 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and we then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
Keywords: Multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence.
568 Inferences on Compound Rayleigh Parameters with Progressively Type-II Censored Samples
Authors: Abdullah Y. Al-Hossain
Abstract:
This paper considers inference under progressive type-II censoring with a compound Rayleigh failure time distribution. The maximum likelihood (ML) and Bayes methods are used for estimating the unknown parameters as well as some lifetime parameters, namely the reliability and hazard functions. We obtain Bayes estimators using conjugate priors for the shape and scale parameters. When the two parameters are unknown, closed-form expressions for the Bayes estimators cannot be obtained, so we use Lindley's approximation to compute the Bayes estimates. Another Bayes estimator is obtained based on a continuous-discrete joint prior for the unknown parameters. An example with real data is discussed to illustrate the proposed method. Finally, we compare these estimators with the maximum likelihood estimators using a Monte Carlo simulation study.
Keywords: Progressive type II censoring, compound Rayleigh failure time distribution, maximum likelihood estimation, Bayes estimation, Lindley's approximation method, Monte Carlo simulation.
567 Covering-Based Rough Sets Based on the Refinement of Covering-Element
Authors: Jianguo Tang, Kun She, William Zhu
Abstract:
Covering-based rough sets are an extension of rough sets based on a covering instead of a partition of the universe; they are therefore more powerful than rough sets in describing some practical problems. However, by extending rough sets, covering-based rough sets can increase the roughness of each model in recognizing objects. How to obtain better approximations from covering-based rough set models is thus an important issue. In this paper, two concepts, determinate elements and indeterminate elements in a universe, are proposed and given precise definitions. This research makes a reasonable refinement of the covering-element from a new viewpoint, and the refinement may generate better approximations for covering-based rough set models. To validate the theory, it is applied to eight major covering-based rough set models adapted from the literature. In all of these models the lower approximation increases effectively; correspondingly, the upper approximation decreases in all models, with the exception of two models in some special situations. Therefore, the roughness in recognizing objects is reduced. This research provides a new approach to the study and application of covering-based rough sets.
Keywords: Determinate element, indeterminate element, refinement of covering-element, refinement of covering, covering-based rough sets.
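As background to the lower and upper approximations discussed above, the sketch below implements the most common covering-based definitions: the lower approximation as the union of covering blocks contained in the target set, the upper approximation as the union of blocks intersecting it. These are generic operators, not the eight specific models or the refinement procedure of the paper, and the toy universe and covering are made up for illustration.

```python
from itertools import chain

def lower_approximation(cover, X):
    """Union of covering blocks entirely contained in the target set X."""
    return set(chain.from_iterable(K for K in cover if K <= X))

def upper_approximation(cover, X):
    """Union of covering blocks that intersect the target set X."""
    return set(chain.from_iterable(K for K in cover if K & X))

# toy universe and covering (the blocks overlap, so it is not a partition)
cover = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({4, 5}), frozenset({5, 6})]
X = {2, 3, 4}
print("lower:", lower_approximation(cover, X))   # {2, 3, 4}
print("upper:", upper_approximation(cover, X))   # {1, 2, 3, 4, 5}
```

For these two operators, refining the covering by splitting a block can only enlarge the lower approximation and shrink the upper one, which is the kind of effect the paper quantifies across its eight models.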
566 The Strategy of Creating a Virtual Interactive Platform for the Low-Carbon Open Innovations Relay
Authors: Mykola S. Shestavin
Abstract:
A strategy is presented for the creation of a Virtual Interactive Platform (or Networking Platform) that combines four web-based expert systems on the transfer and diffusion of low-carbon technologies. It uses the concepts of "Open Innovation" and the "Triple Helix" together with the theories of "Green Growth" and the "Carbon Footprint". The interpreters of the expert systems operate on the basis of "Predator-Prey" models of the technology transfer and diffusion process, taking into account features caused by the need to mitigate the effects of climate change.
Keywords: Climate Change, Expert Systems, Low-Carbon Technology, Open Innovation, Virtual Interactive Platform.
565 Percolation Transition with Hidden Variables in Complex Networks
Authors: Zhanli Zhang, Wei Chen, Xin Jiang, Lili Ma, Shaoting Tang, Zhiming Zheng
Abstract:
A new class of percolation model in complex networks is studied, in which nodes are characterized by hidden variables reflecting their properties, and the occupation probability of each link is determined by the hidden variables of its end nodes. Using mean-field theory, analytical expressions for the percolation transition are deduced; the transition is determined by the distribution of the hidden variables over the nodes and by the occupation probability between pairs of them. Moreover, the analytical expressions are checked by means of numerical simulations on a particular model. The general model can also be applied to describe and control practical diffusion processes, such as disease diffusion models and scientist cooperation networks.
Keywords: Complex networks, percolation transition, hidden variable, occupation probability.
564 On the Early Development of Dispersion in Flow through a Tube with Wall Reactions
Abstract:
This is a numerical simulation study of the convection-diffusion transport of a chemical species in steady flow through a small-diameter tube, which is lined with a very thin layer of retentive and absorptive material. The species may be subject to a first-order kinetic reversible phase exchange with the wall material and to irreversible absorption into the tube wall. Owing to the velocity shear across the tube section, the chemical species may spread out axially along the tube at a rate much larger than that given by molecular diffusion; this process is known as dispersion. While the long-time dispersion behavior, well described by the Taylor model, has been extensively studied in the literature, the early development of the dispersion process is by contrast much less investigated. By early development, we mean a span of time, after the release of the chemical into the flow, that is shorter than or comparable to the diffusion time scale across the tube section. To understand the early development of the dispersion, the governing equations along with the reactive boundary conditions are solved numerically using the Flux Corrected Transport Algorithm (FCTA). The computation enables us to investigate the combined effects of the reversible and irreversible wall reactions on the early development of the dispersion coefficient. One result shows that the dispersion coefficient may approach its steady-state limit in a short time under the following conditions: (i) a high value of the Damkohler number (say Da ≥ 10); (ii) a small but non-zero value of the absorption rate (say Γ* ≤ 0.5).
Keywords: Dispersion coefficient, early development of dispersion, FCTA, wall reactions.
563 BEM Formulations Based on Kirchhoff's Hypothesis to Perform Linear Bending Analysis of Plates Reinforced by Beams
Authors: Gabriela R. Fernandes, Renato F. Denadai, Guido J. Denipotti
Abstract:
In this work, two formulations of the boundary element method (BEM) to perform linear bending analysis of plates reinforced by beams are discussed. Both formulations are based on Kirchhoff's hypothesis and are obtained from the reciprocity theorem applied to zoned plates, where each sub-region defines a beam or a slab. In the first model the problem values are defined along the interfaces and the external boundary. Then, in order to reduce the number of degrees of freedom, kinematic hypotheses are assumed along the beam cross-section, leading to a second formulation where the collocation points are defined along the beam skeleton instead of being placed on the interfaces. In these formulations no approximation of the generalized forces along the interface is required; moreover, compatibility and equilibrium conditions along the interface are automatically imposed by the integral equation. Thus, these formulations require less approximation and the total number of degrees of freedom is reduced. The numerical examples discuss the differences between the two BEM formulations and compare their results with those of a well-known finite element code.
Keywords: Boundary elements, building floor structures, plate bending.
562 A New Application of Stochastic Transformation
Authors: Nilar Win Kyaw
Abstract:
In cryptography, confusion and diffusion are very important for achieving confidentiality and privacy of messages in block ciphers and stream ciphers. There are two types of network that provide the confusion and diffusion properties of a message in block ciphers: the Substitution-Permutation network (S-P network) and the Feistel network. NLFS (Non-Linear Feedback Stream cipher) is a fast and secure stream cipher for software applications; NLFS has two modes, a basic (synchronous) mode and a self-synchronous mode. Real random numbers are non-deterministic. The R-box (random box) is based on dynamic properties and performs a stochastic transformation of data that can be used to effectively protect information from destructive impacts. In this paper, a new implementation of the stochastic transformation is proposed.
Keywords: S-P network, Feistel network, R-box, stochastic transformation.
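To make the confusion/diffusion terminology concrete, here is a hedged toy sketch of a single substitution-permutation (S-P) round in Python. The 4-bit S-box and the bit permutation are illustrative values, and the sketch is unrelated to the NLFS cipher or the R-box construction proposed in the paper.

```python
# Toy one-round substitution-permutation network on a 16-bit block.
# The 4-bit S-box and the bit permutation are illustrative values only.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]                  # confusion
PERM = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]    # diffusion

def substitute(block16):
    """Apply the 4-bit S-box to each nibble of a 16-bit block."""
    out = 0
    for i in range(4):
        nibble = (block16 >> (4 * i)) & 0xF
        out |= SBOX[nibble] << (4 * i)
    return out

def permute(block16):
    """Move bit i of the input to position PERM[i] of the output."""
    out = 0
    for i in range(16):
        if (block16 >> i) & 1:
            out |= 1 << PERM[i]
    return out

def sp_round(block16, round_key16):
    """One SPN round: key mixing, substitution, permutation."""
    return permute(substitute(block16 ^ round_key16))

print(hex(sp_round(0x1234, 0xABCD)))
```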
561 Investigation of Water Transport Dynamics in Polymer Electrolyte Membrane Fuel Cells Based on Gas Diffusion Media Layers
Authors: Saad S. Alrwashdeh, Henning Markötter, Handri Ammari, Jan Haußmann, Tobias Arlt, Joachim Scholta, Ingo Manke
Abstract:
In this investigation, synchrotron X-ray imaging is used to study water transport inside polymer electrolyte membrane fuel cells. Two measurement techniques, in-situ radiography and quasi-in-situ tomography, are combined in order to reveal the relationship between the structures of the microporous layers (MPLs) and the gas diffusion layers (GDLs), the operating temperature, and the water flow. The developed cell is equipped with a thick GDL and a high back-pressure MPL. It is found that these modifications strongly influence the overall water transport in the whole adjacent GDM.
Keywords: Polymer electrolyte membrane fuel cell, microporous layer, water transport, radiography, tomography.
560 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR). The main issue is to compute schedules with the minimum number of timeslots, that is, minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, this is the first result for the data collection problem with the bounded-sized message model in both interference models.
Keywords: Data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks, WSN.
559 The Self-Energy of an Electron Bound in a Coulomb Field
Authors: J. Zamastil, V. Patkos
Abstract:
Recent progress in the calculation of the one-loop self-energy of an electron bound in a Coulomb field is summarized. The relativistic multipole expansion is introduced. This expansion is based on a single assumption: except for the part of the time component of the electron four-momentum corresponding to the electron rest mass, the exchange of four-momentum between the virtual electron and photon can be treated perturbatively. For non-S states and the normalized difference n^3 E_n − E_1 of the S-states, this by itself yields very accurate results after taking the method to the third order. For the ground state, the perturbative treatment of the electron virtual states with very high three-momentum is to be avoided; for these states one can always rearrange the pertinent expression in such a way that the free-particle approximation is allowed. The combination of the relativistic multipole expansion and the free-particle approximation yields very accurate results after taking the method to the ninth order. These results are in very good agreement with previous results obtained by the partial wave expansion and definitely exclude the possibility that the uncertainty in the determination of the proton radius comes from the uncertainty in the calculation of the one-loop self-energy.
Keywords: Hydrogen-like atoms, self-energy.
558 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion are reviewed, and the review leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors, or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a model selected from a rigorous literature study. The program covers both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm; 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model has accuracies of about ±20% for one-dimensional and ±10% for two-dimensional chloride diffusion compared with the experimental output. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrode the reinforcement due to chloride diffusion; the factors considered are w/c, bar diameter, and cover depth. The analysis, performed with the Minitab statistical software, shows that cover depth has a more significant effect on the time to crack the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to pre-assess the durability of RC structures vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability in terms of chloride diffusion.
Keywords: Accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion.
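As a concrete, hedged illustration of the kind of chloride-ingress calculation that underlies such time-to-corrosion models, the sketch below uses the classical error-function solution of Fick's second law to estimate the time for the chloride concentration at the cover depth to reach a critical threshold. The surface concentration, critical level, and diffusion coefficient are illustrative numbers, and this is not the specific crack-prediction model validated in the paper (which addresses cracking after corrosion initiation).

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def chloride_at_depth(x, t, c_s, d_eff):
    """Fick's second law solution: C(x, t) = C_s * (1 - erf(x / (2*sqrt(D*t))))."""
    return c_s * (1.0 - erf(x / (2.0 * np.sqrt(d_eff * t))))

def time_to_initiation(cover, c_s, c_crit, d_eff, t_max=3.15e9):
    """Time (s) for the chloride level at the cover depth to reach c_crit."""
    f = lambda t: chloride_at_depth(cover, t, c_s, d_eff) - c_crit
    return brentq(f, 1.0, t_max)    # bracket between 1 s and ~100 years

# illustrative numbers: 50 mm cover, D = 1e-12 m^2/s, surface 0.5%, critical 0.05% by mass
t_init = time_to_initiation(cover=0.05, c_s=0.5, c_crit=0.05, d_eff=1e-12)
print(f"time to corrosion initiation ≈ {t_init / 3.15e7:.1f} years")
```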
557 Cooperative Sensing for Wireless Sensor Networks
Authors: Julien Romieux, Fabio Verdicchio
Abstract:
Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times: this requires an impractical individual node maintenance scheme. Therefore we investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries almost at the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads communication workload evenly. Results convey a counterintuitive conclusion: distributing workload fairly amongst nodes may not decrease the network power consumption and yet extend the WSN operational life. This is achieved as our cooperative approach decreases the workload of the most burdened cluster in the network.
Keywords: Cooperative signal processing, power management, signal representation, signal approximation, wireless sensor networks.
556 Microbial Leaching Process to Recover Valuable Metals from Spent Petroleum Catalyst Using Iron Oxidizing Bacteria
Authors: Debabrata Pradhan, Dong J. Kim, Jong G. Ahn, Seoung W. Lee
Abstract:
Spent petroleum catalyst from the Korean petrochemical industry contains trace amounts of metals such as Ni, V, and Mo; therefore an attempt was made to recover those trace metals using a bioleaching process. Different leaching parameters, such as Fe(II) concentration, pulp density, pH, temperature, and particle size of the spent catalyst, were studied to evaluate their effects on the leaching efficiency. All three metal ions, Ni, V, and Mo, followed dual kinetics, i.e., an initial faster rate followed by a slower one. The leaching efficiency of Ni and V was higher than that of Mo. The leaching process followed a diffusion-controlled model, and the product layer was observed to be impervious due to the formation of ammonium jarosite, (NH4)Fe3(SO4)2(OH)6. In addition, the lower leaching efficiency of Mo was attributed to a hydrophobic coating of elemental sulfur over the Mo matrix in the spent catalyst.
Keywords: Bioleaching, diffusion control, shrinking core, spent petroleum catalyst.
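The diffusion-controlled behaviour mentioned above is usually checked against the product-layer shrinking-core expression 1 − 3(1 − X)^(2/3) + 2(1 − X) = k t. The sketch below, with synthetic conversion data, shows how the apparent rate constant can be obtained from a linear fit; it is an illustration, not the paper's dataset or analysis.

```python
import numpy as np

def diffusion_control_g(X):
    """Shrinking-core, product-layer-diffusion-controlled kinetics:
    g(X) = 1 - 3*(1 - X)**(2/3) + 2*(1 - X) = k * t."""
    return 1.0 - 3.0 * (1.0 - X) ** (2.0 / 3.0) + 2.0 * (1.0 - X)

# synthetic conversion-vs-time data for illustration (t in hours, X is fractional leaching)
t = np.array([2.0, 4.0, 8.0, 12.0, 24.0, 48.0])
X = np.array([0.15, 0.22, 0.33, 0.40, 0.55, 0.70])

g = diffusion_control_g(X)
k, *_ = np.linalg.lstsq(t.reshape(-1, 1), g, rcond=None)   # line forced through the origin
r2 = 1.0 - np.sum((g - k[0] * t) ** 2) / np.sum((g - g.mean()) ** 2)
print(f"apparent rate constant k = {k[0]:.4e} 1/h, R^2 = {r2:.3f}")
# a straight line of g(X) against t supports a diffusion-controlled mechanism
```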
555 River Flow Prediction Using Nonlinear Prediction Method
Authors: N. H. Adenan, M. S. M. Noorani
Abstract:
River flow prediction is essential to ensure proper management of water resources and the optimal distribution of water to consumers. This study presents an analysis and prediction using a nonlinear prediction method with monthly river flow data for Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional series (the observed 287 months of data) in a multidimensional phase space to reveal the dynamics of the system, and the reconstructed phase space is then used to predict the next 72 months. A comparison of prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) was employed to compare the nonlinear prediction method with ARIMA and SVM. The comparisons show that the prediction results of the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.
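A hedged sketch of the core of such a nonlinear prediction scheme is given below: delay-coordinate phase space reconstruction followed by a local linear (nearest-neighbour) one-step forecast. The embedding dimension, delay, neighbour count, and the synthetic seasonal series are illustrative assumptions, not the Tanjung Tualang data or the exact settings used in the study.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate embedding: row t is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def local_linear_predict(x, dim=3, tau=1, k=12):
    """One-step-ahead forecast from a local linear fit on the k nearest neighbours
    of the current state in the reconstructed phase space."""
    x = np.asarray(x, dtype=float)
    emb = delay_embed(x, dim, tau)
    query, history = emb[-1], emb[:-1]
    targets = x[(dim - 1) * tau + 1:]          # value that follows each history vector
    idx = np.argsort(np.linalg.norm(history - query, axis=1))[:k]
    A = np.column_stack([np.ones(k), history[idx]])   # affine local model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return coef[0] + query @ coef[1:]

# synthetic monthly series with a seasonal cycle, for illustration only
rng = np.random.default_rng(0)
months = np.arange(287)
flow = 50 + 20 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 3, months.size)
print("next-month forecast:", local_linear_predict(flow, dim=3, tau=1, k=12))
```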
554 Magnetic End Leakage Flux in a Spoke Type Rotor Permanent Magnet Synchronous Generator
Authors: Petter Eklund, Jonathan Sjölund, Sandra Eriksson, Mats Leijon
Abstract:
The spoke type rotor can be used to obtain magnetic flux concentration in permanent magnet machines. This allows the air gap magnetic flux density to exceed the remanent flux density of the permanent magnets, but gives problems with leakage fluxes in the magnetic circuit. The end leakage flux of one spoke type permanent magnet rotor design is studied through measurements and finite element simulations. The measurements are performed in the end regions of a 12 kW prototype generator for a vertical axis wind turbine. The simulations use three-dimensional finite elements to calculate the magnetic field distribution in the end regions of the machine. Two-dimensional finite element simulations are also performed, and the impact of the two-dimensional approximation is studied. It is found that the magnetic leakage flux in the end regions of the machine is equal to about 20% of the flux in the permanent magnets. The overestimation of the performance by the two-dimensional approximation is quantified, and a curve-fitted expression for its behavior is suggested.
Keywords: End effects, end leakage flux, permanent magnet machine, spoke type rotor.
553 Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method
Authors: Vijay Kumar Kukreja, Ravneet Kaur
Abstract:
In this paper, a one-dimensional advection-diffusion model is analyzed using a finite difference method based on the Crank-Nicolson scheme. A practical filter cake washing problem from chemical engineering is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson scheme is used for the spatial derivatives and a forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first-order accurate in time, and second-order accurate in space. For a test problem, numerical results are compared with the analytical ones for different values of the parameter.
Keywords: Consistency, Crank-Nicolson scheme, Gerschgorin circle, Lax-Richtmyer theorem, Peclet number, stability.
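A minimal sketch of a Crank-Nicolson solver for a dimensionless one-dimensional advection-diffusion equation is shown below, assuming the form u_t + u_x = (1/Pe) u_xx on [0, 1] with an inlet Dirichlet condition and a zero-gradient outlet. The equation form, boundary and initial conditions, and the Peclet number are illustrative; the paper's filter-cake washing formulation and its convergence analysis are not reproduced.

```python
import numpy as np

def crank_nicolson_adv_diff(nx=101, nt=400, T=1.0, Pe=10.0):
    """Crank-Nicolson solution of u_t + u_x = (1/Pe) u_xx on [0, 1],
    with u(0, t) = 1 (inlet), zero gradient at x = 1, and u(x, 0) = 0."""
    dx, dt = 1.0 / (nx - 1), T / nt
    x = np.linspace(0.0, 1.0, nx)
    u = np.zeros(nx)
    u[0] = 1.0

    # discrete operator L u = -u_x + (1/Pe) u_xx (central differences, interior nodes)
    a = 1.0 / (Pe * dx**2) + 1.0 / (2.0 * dx)   # coefficient of u[i-1]
    b = -2.0 / (Pe * dx**2)                     # coefficient of u[i]
    c = 1.0 / (Pe * dx**2) - 1.0 / (2.0 * dx)   # coefficient of u[i+1]
    L = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = a, b, c

    A = np.eye(nx) - 0.5 * dt * L    # implicit side
    B = np.eye(nx) + 0.5 * dt * L    # explicit side
    A[0, :], B[0, :] = 0.0, 0.0      # Dirichlet condition at x = 0
    A[0, 0], B[0, 0] = 1.0, 1.0
    A[-1, :], B[-1, :] = 0.0, 0.0    # zero-gradient condition at x = 1
    A[-1, -1], A[-1, -2] = 1.0, -1.0

    for _ in range(nt):
        u = np.linalg.solve(A, B @ u)
    return x, u

x, u = crank_nicolson_adv_diff()
print(u[::20])   # concentration profile at t = T
```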
552 A Survey on Usage and Diffusion of Project Risk Management Techniques and Software Tools in the Construction Industry
Authors: Muhammad Jamaluddin Thaheem, Alberto De Marco
Abstract:
The area of Project Risk Management (PRM) has been extensively researched, and the utilization of various tools and techniques for managing risk in several industries has been sufficiently reported. Formal and systematic PRM practices have been made available for the construction industry. Based on such a body of knowledge, this paper tries to draw the global picture of PRM practices and approaches with the help of a survey that looks into the usage of PRM techniques and the diffusion of software tools, their level of maturity, and their usefulness in the construction sector. Results show that, despite existing techniques and tools, their usage is limited: software tools are used only by a minority of respondents, and their cost is one of the largest hurdles to adoption. Finally, the paper provides some important guidelines for future research regarding quantitative risk analysis techniques, together with suggestions for PRM software tool development and improvement.
Keywords: Construction industry, project risk management, software tools, survey study.
551 Removal of Malachite Green from Aqueous Solution Using Hydrilla verticillata - Optimization, Equilibrium and Kinetic Studies
Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan
Abstract:
In this study, the sorption of Malachite Green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of Malachite Green were analyzed using response surface methodology (RSM). The proposed quadratic model for the central composite design (CCD) fitted the experimental data so well that, according to the ANOVA results, it could be used to navigate the design space. The optimum sorption conditions were determined as temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial sorption pH. The external and intra-particle diffusion models were also applied to the sorption data, and it was found that both external diffusion and intra-particle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design
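The isotherm fitting step can be illustrated with a short, hedged Python sketch that fits the Langmuir and Freundlich models to equilibrium data by non-linear least squares. The data points below are synthetic and only stand in for the Hydrilla verticillata measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce ** (1.0 / n)

# synthetic equilibrium data (Ce in mg/L, qe in mg/g), for illustration only
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([18.0, 35.0, 52.0, 68.0, 80.0, 88.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[90.0, 0.05])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])
print(f"Langmuir:   qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```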
550 Numerical Simulation of Bio-Chemical Diffusion in Bone Scaffolds
Authors: Masoud Madadelahi, Amir Shamloo, Seyedeh Sara Salehi
Abstract:
Previously, materials like solid metals and their alloys have been used as implants in the human body. In order to improve the fixation of these artificial hard tissues, porous structures have been introduced: tissues in the vicinity of the porous structure can attach more easily to the inserted implant. In particular, porous bone scaffolds are useful since they can deliver important biomolecules like growth factors and proteins. This study focuses on the properties of degradable porous hard tissues using a three-dimensional numerical Finite Element Method (FEM). The most important properties studied are the diffusive flux and the concentration of different species such as glucose, oxygen, and lactate. The process of cell migration into the scaffold is treated as a diffusion process, and the related parameters are studied for different values of the production/consumption rates.
Keywords: Bone scaffolds, diffusivity, numerical simulation, tissue engineering.
549 Formation of Protective Silicide-Aluminide Coating on Gamma-TiAl Advanced Material
Authors: S. Nouri
Abstract:
In this study, a Si-aluminide coating was prepared on gamma-TiAl [Ti-45Al-2Nb-2Mn-1B (at. %)] via a liquid-phase slurry procedure. The high temperature oxidation resistance of this diffusion coating was evaluated at 1100 °C for 400 hours. The results of the isothermal oxidation showed that the formation of the Si-aluminide coating can remarkably improve the high temperature oxidation resistance of the bare gamma-TiAl alloy. Examination of the oxide scale microstructure showed that the formation of a protective Al2O3+SiO2 mixed oxide scale, along with a continuous, compact and uniform layer of Ti5Si3 beneath the surface oxide scale, can act as an oxygen diffusion barrier during high temperature oxidation. Other possible mechanisms related to the formation of the Si-aluminide coating and the oxide scales are also discussed.
Keywords: Gamma-TiAl alloy, Si-aluminide coating, slurry procedure, high temperature oxidation.
548 A Monte Carlo Method for Data Stream Analysis
Authors: Kittisak Kerdprasop, Nittaya Kerdprasop, Pairote Sattayatham
Abstract:
Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges; they can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments; we apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
Keywords: Data stream, Monte Carlo, sampling, density estimation.
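As a hedged illustration of the general idea of sampling a stream and approximating its characteristics by Monte Carlo, the sketch below keeps a fixed-size uniform reservoir sample and uses it to estimate a summary statistic. This is a generic technique, not the EMR sampling method proposed in the paper.

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a stream of unknown length."""
    rng = random.Random(seed)
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)          # item kept with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

def stream():
    """A toy unbounded-looking stream of numeric readings."""
    rng = random.Random(42)
    for _ in range(1_000_000):
        yield rng.gauss(10.0, 2.0)

sample = reservoir_sample(stream(), k=1000)
estimate = sum(sample) / len(sample)        # Monte Carlo estimate of the stream mean
print(f"estimated mean from 1000-point reservoir: {estimate:.3f}")
```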
547 Monthly River Flow Prediction Using a Nonlinear Prediction Method
Authors: N. H. Adenan, M. S. M. Noorani
Abstract:
River flow prediction is an essential tool to ensure proper management of water resources and the optimal distribution of water to consumers. This study presents an analysis and prediction using a nonlinear prediction method with monthly river flow data for Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional series (the observed 287 months of data) in a multidimensional phase space to reveal the dynamics of the system, and the reconstructed phase space is then used to predict the next 72 months. A comparison of prediction performance based on the correlation coefficient (CC) and root mean square error (RMSE) was employed to compare the nonlinear prediction method, ARIMA, and SVM. The comparisons show that the prediction results using the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system to optimize the allocation of water resources.
Keywords: River flow, nonlinear prediction method, phase space, local linear approximation.
546 Evaluation of Sensor Pattern Noise Estimators for Source Camera Identification
Authors: Benjamin Anderson-Sackaney, Amr Abdel-Dayem
Abstract:
This paper presents a comprehensive survey of recent source camera identification (SCI) systems. The performance of various sensor pattern noise (SPN) estimators is then experimentally assessed under common photo response non-uniformity (PRNU) frameworks. The experiments used 1350 natural and 900 flat-field images captured by 18 individual cameras, and 12 different experiments, grouped into three sets, were conducted. The results were analyzed using receiver operating characteristic (ROC) curves. The experimental results demonstrate that combining the basic SPN estimator with a wavelet-based filtering scheme provides promising results, whereas the phase SPN estimator fits better with both patch-based (BM3D) and anisotropic diffusion (AD) filtering schemes.
Keywords: Sensor pattern noise, source camera identification, photo response non-uniformity, anisotropic diffusion, peak to correlation energy ratio.
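The basic SPN/PRNU pipeline can be sketched as follows: the noise residual of each image is taken as the image minus a denoised version, the residuals from one camera are averaged into a fingerprint, and a query residual is matched by normalized correlation. In this hedged sketch a Gaussian filter stands in for the wavelet, BM3D, and anisotropic diffusion denoisers compared in the paper, and the sensor pattern is simulated.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    """Basic SPN estimate: residual = image - denoised(image).
    A Gaussian filter stands in for the wavelet / BM3D / AD denoisers."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average the residuals of many flat-field images from one camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def normalized_correlation(a, b):
    """Similarity between a query residual and a camera fingerprint."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# toy demonstration with a synthetic sensor pattern shared by one camera's images
rng = np.random.default_rng(0)
pattern = rng.normal(0, 2, (64, 64))
cam_images = [rng.normal(128, 10, (64, 64)) + pattern for _ in range(20)]
fingerprint = camera_fingerprint(cam_images)
same_cam = rng.normal(128, 10, (64, 64)) + pattern
other_cam = rng.normal(128, 10, (64, 64))
print(normalized_correlation(noise_residual(same_cam), fingerprint))   # expected higher
print(normalized_correlation(noise_residual(other_cam), fingerprint))  # expected near zero
```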
545 Rejuvenate: Face and Body Retouching Using Image Inpainting
Authors: H. AbdelRahman, S. Rostom, Y. Lotfy, S. Salah Eldeen, R. Yassein, N. Awny
Abstract:
People are growing more concerned with their appearance in today's society, but they are afraid of what they will look like after plastic surgery. People's mental health also suffers when accidents, burns, or genetic issues damage certain body parts, which makes them feel uncomfortable and unappreciated. The proposed method provides an innovative deep learning-based technique for image inpainting that analyzes different picture structures and fixes damaged images. This study proposes a model based on the Stable Diffusion inpainting method for in-painting medical images. One significant advancement made possible by deep neural networks is image inpainting, the process of reconstructing damaged and missing portions of an image. The patient can see the outcome more easily since the system uses the user's input image to identify the problem region, modifies the image, and outputs a repaired image.
Keywords: Generative Adversarial Network, GAN, Large Mask Inpainting, LAMA, Stable Diffusion Inpainting.
544 The Origin, Diffusion and a Comparison of Ordinary Differential Equations Numerical Solutions Used by the SIR Model in Order to Predict SARS-CoV-2 in Nordic Countries
Authors: Gleda Kutrolli, Maksi Kutrolli, Etjon Meco
Abstract:
The SARS-CoV-2 virus is currently one of the most infectious pathogens for humans. It started in China at the end of 2019 and has now spread all over the world. The origin and diffusion of the SARS-CoV-2 epidemic are analysed based on a discussion of viral phylogeny theory. With the aim of understanding the spread of infection in the affected countries, it is crucial to model the spread of the virus and simulate its activity. In this paper, the prediction of the coronavirus outbreak is done using the SIR model without vital dynamics, applying different numerical techniques for solving the ordinary differential equations (ODEs). We find that the ABM and MRT methods perform better than the other techniques and that the activity of the virus will decrease in April but never cease (for some time the activity will remain low), with the next cycle starting in the middle of July 2020 for Norway and Denmark, October 2020 for Sweden, and September 2020 for Finland.
Keywords: Forecasting, ordinary differential equations, SARS-CoV-2 epidemic, SIR model.
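For readers unfamiliar with the underlying model, here is a hedged sketch of the SIR model without vital dynamics integrated with two of the simpler ODE schemes (forward Euler and classical fourth-order Runge-Kutta) so that their outputs can be compared. The transmission and recovery rates and the initial condition are illustrative, not the fitted Nordic values, and the ABM and MRT schemes favoured in the paper are not implemented here.

```python
import numpy as np

def sir_rhs(y, beta, gamma):
    """SIR without vital dynamics: dS = -beta*S*I, dI = beta*S*I - gamma*I, dR = gamma*I."""
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def euler_step(y, dt, beta, gamma):
    return y + dt * sir_rhs(y, beta, gamma)

def rk4_step(y, dt, beta, gamma):
    k1 = sir_rhs(y, beta, gamma)
    k2 = sir_rhs(y + 0.5 * dt * k1, beta, gamma)
    k3 = sir_rhs(y + 0.5 * dt * k2, beta, gamma)
    k4 = sir_rhs(y + dt * k3, beta, gamma)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def simulate(step, days=180, dt=1.0, beta=0.35, gamma=0.1):
    y = np.array([0.999, 0.001, 0.0])       # illustrative initial fractions (S, I, R)
    traj = [y]
    for _ in range(int(days / dt)):
        y = step(y, dt, beta, gamma)
        traj.append(y)
    return np.array(traj)

peak_euler = simulate(euler_step)[:, 1].max()
peak_rk4 = simulate(rk4_step)[:, 1].max()
print(f"peak infected fraction: Euler {peak_euler:.4f}, RK4 {peak_rk4:.4f}")
```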
543 Blind Image Deconvolution by Neural Recursive Function Approximation
Authors: Jiann-Ming Wu, Hsiao-Chang Chen, Chun-Chang Wu, Pei-Hsun Hsu
Abstract:
This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then resolved by an annealed Hopfield neural network. Numerical simulations show the proposed method to be effective for faithful estimation of an unknown blurring matrix and restoration of an original source image.
Keywords: Blind image deconvolution, linear shift-invariant (LSI), linear image degradation model, radial basis functions (RBF), recursive function, annealed Hopfield neural networks.
542 On the Efficiency and Robustness of Commingled Wiener and Lévy Driven Processes for the Vasicek Model
Authors: Rasaki O. Olanrewaju
Abstract:
The Wiener and Lévy driven processes are known self-standing Gaussian-Markov processes for fitting the non-linear dynamical Vasicek model. In this paper, a coincidental Gaussian density stationarity condition and the autocorrelation function of the two driven processes are established. This leads to the conflation of the Wiener and Lévy processes so as to investigate the efficiency of the estimates incorporated into the one-dimensional Vasicek model, which is estimated via the Maximum Likelihood (ML) technique. The conditional laws of the drift, diffusion, and stationarity processes are ascertained for the individual Wiener and Lévy processes, as well as for the commingled processes, for a fixed-effect and autoregressive-like Vasicek model applied to a financial series: the Naira-CFA Franc exchange rate. In addition, the model performance error of the commingled driven process is small compared to the self-standing driven processes of Wiener and Lévy.
Keywords: Wiener process, Lévy process, Vasicek model, drift, diffusion, Gaussian density stationarity.
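A hedged sketch of the Wiener-driven case is given below: the Vasicek model dX = a(b − X)dt + σ dW has an exact AR(1) discretization, so its Gaussian maximum-likelihood estimates can be recovered from an ordinary regression of X_{t+1} on X_t. The simulated series is illustrative and not the Naira-CFA Franc exchange rate, and the Lévy-driven and commingled cases studied in the paper are not covered.

```python
import numpy as np

def simulate_vasicek(a, b, sigma, x0, n, dt, seed=0):
    """Exact simulation of dX = a*(b - X) dt + sigma dW on a grid with step dt."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-a * dt)
    sd = sigma * np.sqrt((1.0 - phi**2) / (2.0 * a))
    x = np.empty(n)
    x[0] = x0
    for t in range(1, n):
        x[t] = b + (x[t - 1] - b) * phi + sd * rng.standard_normal()
    return x

def fit_vasicek_ml(x, dt):
    """Gaussian ML for the Wiener-driven Vasicek model via its exact AR(1) form
    X_{t+1} = c + phi*X_t + eps, equivalent to OLS on lagged values."""
    y, z = x[1:], x[:-1]
    phi, c = np.polyfit(z, y, 1)
    resid = y - (c + phi * z)
    a = -np.log(phi) / dt
    b = c / (1.0 - phi)
    sigma = np.sqrt(resid.var() * 2.0 * a / (1.0 - phi**2))
    return a, b, sigma

x = simulate_vasicek(a=0.8, b=5.0, sigma=0.4, x0=4.0, n=5000, dt=1.0 / 52)
print("estimated (a, b, sigma):", fit_vasicek_ml(x, dt=1.0 / 52))
```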
541 Automated Optic Disc Detection in Retinal Images of Patients with Diabetic Retinopathy and Risk of Macular Edema
Authors: Arturo Aquino, Manuel Emilio Gegundez, Diego Marin
Abstract:
In this paper, a new automated methodology to detect the optic disc (OD) in retinal images from patients affected by Diabetic Retinopathy (DR) and at risk of Macular Edema (ME) is presented. The detection procedure comprises two independent methodologies. On the one hand, a location methodology obtains a pixel that belongs to the OD using image contrast analysis and structure filtering techniques; on the other hand, a boundary segmentation methodology estimates a circular approximation of the OD boundary by applying mathematical morphology, edge detection techniques, and the Circular Hough Transform. The methodologies were tested on a set of 1200 images composed of 229 retinographies from patients affected by DR with risk of ME, 431 with DR and no risk of ME, and 540 images of healthy retinas. The location methodology obtained a 98.83% success rate, whereas the OD boundary segmentation methodology obtained a good circular OD boundary approximation in 94.58% of cases. The average computational time measured over the total set was 1.67 seconds for OD location and 5.78 seconds for OD boundary segmentation.
Keywords: Diabetic retinopathy, macular edema, optic disc, automated detection, automated segmentation.
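As an illustration of the boundary-segmentation step, the sketch below applies morphological closing, median blurring, and OpenCV's Circular Hough Transform to a region around a previously located OD pixel. The preprocessing choices, parameter values, seed pixel, and file name are assumptions, not the exact pipeline or settings of the paper.

```python
import cv2
import numpy as np

def circular_od_boundary(gray, seed_xy, roi_half=80):
    """Estimate a circular optic disc boundary near a previously located OD pixel
    using morphology, smoothing, and the Circular Hough Transform."""
    x0, y0 = seed_xy
    h, w = gray.shape
    x1, x2 = max(0, x0 - roi_half), min(w, x0 + roi_half)
    y1, y2 = max(0, y0 - roi_half), min(h, y0 + roi_half)
    roi = gray[y1:y2, x1:x2]

    # morphological closing to suppress vessels, then blur before edge-based Hough
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
    roi = cv2.morphologyEx(roi, cv2.MORPH_CLOSE, kernel)
    roi = cv2.medianBlur(roi, 5)

    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=roi_half,
                               param1=80, param2=30, minRadius=20, maxRadius=roi_half)
    if circles is None:
        return None
    cx, cy, r = circles[0][0]              # strongest circle found in the ROI
    return (int(cx) + x1, int(cy) + y1, int(r))

# usage sketch: the file name and the seed pixel are placeholders
# img = cv2.imread("retina.png", cv2.IMREAD_GRAYSCALE)
# print(circular_od_boundary(img, seed_xy=(512, 400)))
```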