Search results for: convergence and smoothness
283 Thermal Analysis for Darcy Forchheimer Effect with Hybrid Ferro Fluid Flow
Authors: Behzad Ali Khan, M. Zubair Akbar Qureshi
Abstract:
The article analyzes two-dimensional Darcy-Forchheimer flow of a hybrid ferrofluid. The flow of the hybrid ferrofluid is driven through an unsteady porous channel, with classical water treated as the base liquid. The flow in the permeable region is characterized by the Darcy-Forchheimer relation, and heat transfer phenomena are studied during the flow. The governing set of partial differential equations is transformed into a system of ordinary differential equations through appropriate variables. The numerical shooting method is executed to solve the simplified set of equations, and a numerical analysis (ND-Solve) is used to verify the convergence of the applied technique. The influence of flow model quantities such as Pr (Prandtl number), r (porous medium parameter), F (Darcy porous medium parameter), Re (Reynolds number), and Pe (Peclet number) on the velocity and temperature fields is scrutinized and studied through sketches. Physical factors such as f''(η) (skin friction coefficient) and θ'(η) (rate of heat transfer) are first derived and then presented through tables.
Keywords: Darcy-Forchheimer, hybrid ferrofluid, porous medium, porous channel
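Illustrative sketch (not from the paper): the shooting idea the abstract invokes, applied to a generic two-point boundary value problem rather than the ferrofluid equations; the ODE, tolerances, and root bracket are assumptions.

```python
# Shooting method on a stand-in BVP: y'' = -y, y(0) = 0, y(1) = 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, y):
    # y[0] = y, y[1] = y'
    return [y[1], -y[0]]

def boundary_miss(slope):
    # Integrate the IVP with a guessed initial slope and return the
    # mismatch at the far boundary, y(1) - 1.
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope], rtol=1e-9, atol=1e-9)
    return sol.y[0, -1] - 1.0

# Root-find on the initial slope so the far boundary condition is met.
slope = brentq(boundary_miss, 0.0, 5.0)
print("shooting slope:", slope)   # exact answer: 1/sin(1) ~ 1.1884
```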
Procedia PDF Downloads 174
282 Shotcrete Performance Optimisation and Audit Using 3D Laser Scanning
Authors: Carlos Gonzalez, Neil Slatcher, Marcus Properzi, Kan Seah
Abstract:
In many underground mining operations, shotcrete is used for permanent rock support, and shotcrete thickness is a critical measure of the success of this process. 3D Laser Mapping, in conjunction with Jetcrete, has developed a 3D laser scanning system specifically for measuring the thickness of shotcrete. The system is mounted on the shotcrete spraying machine and measures the rock faces before and after spraying; the calculated difference between the two 3D surface models is taken as the thickness of the sprayed concrete. Typical work patterns for the shotcrete process require a rapid and automatic system. The scanning takes place immediately before and after the application of the shotcrete, so no convergence takes place in the interval between scans. Automatic alignment of scans without targets was implemented, which allows for the possibility of movement of the spraying machine between scans. Case studies are presented in which accuracy tests are undertaken and automatic audit reports are calculated. The use of 3D imaging data for the calculation of shotcrete thickness is an important tool for geotechnical engineers and contract managers, and it could become the new state-of-the-art methodology for the mining industry.
Keywords: 3D imaging, shotcrete, surface model, tunnel stability
Procedia PDF Downloads 290
281 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier
Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi
Abstract:
The Downside Risk (DSR) model for portfolio optimisation makes it possible to overcome the drawbacks of the classical mean-variance model concerning the asymmetry of returns and the risk perception of investors. The optimisation of this model deals with a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a new recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the portfolio frontier is not very smooth. To overcome this, Athayde (2003) proposed a kernel mean estimation of the returns, so as to create a smoother portfolio frontier; this technique provides an effect similar to the case of continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns, and we give a new version of the former algorithm of Athayde (2001, 2003). We finally analyse the properties of this improved portfolio frontier and apply the new method to real examples.
Keywords: downside risk, kernel method, median, nonparametric estimation, semivariance
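Illustrative sketch (not the paper's algorithm): the flavour of a kernel-weighted local median that could replace a kernel mean of returns; the Gaussian kernel, bandwidth, and synthetic heavy-tailed data are assumptions.

```python
# Nadaraya-Watson-style local *median* of returns (illustrative).
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2)

def weighted_median(values, weights):
    # Sort values, accumulate weights, pick the first value whose
    # cumulative weight reaches half the total.
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def local_median(x0, x, returns, bandwidth=0.5):
    # Kernel weights decay with distance from the evaluation point x0.
    w = gaussian_kernel((x - x0) / bandwidth)
    return weighted_median(returns, w)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                      # e.g. an index of observations
r = 0.05 * x + rng.standard_t(df=3, size=200)   # heavy-tailed returns
print(local_median(0.5, x, r))
```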
Procedia PDF Downloads 492
280 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. De Sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary, since heating a large quantity of them ends in either the formation of deposits inside the motor or damage to the lubricant. Because determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents, this study uses an artificial neural network (ANN) to predict the iodine value as an alternative to these problems. In the development of the networks, 13 fatty-acid esters were used as inputs, and backpropagation-type convergence algorithms were optimized in order to obtain an architecture for the prediction of the iodine value. This study demonstrates the ability of neural networks to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make it up. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation with the Levenberg-Marquardt algorithm.
Keywords: artificial neural networks, biodiesel, iodine value, prediction
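A minimal sketch of the setup the abstract describes: a small feed-forward network mapping 13 ester fractions to an iodine value. The synthetic data, layer size, and scikit-learn's Adam-style solver (rather than the paper's Levenberg-Marquardt) are all assumptions.

```python
# Toy ANN regression: 13 fatty-acid ester fractions -> iodine value.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.dirichlet(np.ones(13), size=300)       # 13 ester mass fractions per sample
iodine_contrib = rng.uniform(0, 180, size=13)  # per-ester contribution (made up)
y = X @ iodine_contrib                         # iodine value roughly additive in esters

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out data:", net.score(X_te, y_te))
```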
Procedia PDF Downloads 606
279 An Application of Sinc Function to Approximate Quadrature Integrals in Generalized Linear Mixed Models
Authors: Altaf H. Khan, Frank Stenger, Mohammed A. Hussein, Reaz A. Chaudhuri, Sameera Asif
Abstract:
This paper discusses a novel approach to approximating the quadrature integrals that arise in the estimation of likelihood parameters for generalized linear mixed models (GLMM). Bayesian methodology likewise requires the computation of multidimensional integrals with respect to posterior distributions, computations that are not only tedious and cumbersome but in some situations impossible, because of singularities, irregular domains, etc. An attempt has been made in this work to apply Sinc-function-based quadrature rules to approximate such intractable integrals, as Sinc-based methods have several advantages: the order of convergence is exponential, they work very well in the neighborhood of singularities, they are in general quite stable, and they provide highly accurate, double-precision estimates. To our knowledge, this is the first time the Sinc-function-based approach has been utilized in the statistical domain; its viability and future scope for estimating the parameters of GLMMs, as well as in some other statistical areas, are discussed.
Keywords: generalized linear mixed model, likelihood parameters, quadrature, Sinc function
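A minimal sketch of the basic sinc quadrature rule on the real line: for an analytic, decaying integrand, the trapezoidal sum h·Σf(kh) converges exponentially in 1/h. The test integrand and step sizes are illustrative.

```python
# Sinc (trapezoidal-on-the-real-line) quadrature with exponential convergence.
import numpy as np

def sinc_quadrature(f, h, N):
    k = np.arange(-N, N + 1)
    return h * np.sum(f(k * h))

f = lambda x: np.exp(-x**2)        # integral over R equals sqrt(pi)
exact = np.sqrt(np.pi)
for h in [1.0, 0.5, 0.25]:
    approx = sinc_quadrature(f, h, N=int(10 / h))
    print(f"h={h:<5} error={abs(approx - exact):.3e}")
```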
Procedia PDF Downloads 395
278 A Study on Mesh Size Dependency on Bed Expansion Zone in a Three-Phase Fluidized Bed Reactor
Authors: Liliana Patricia Olivo Arias
Abstract:
The present study focuses on the hydrodynamics of a three-phase fluidized bed reactor and the influence of important aspects such as the volume fractions (hold-ups), the velocity magnitudes of the gas, liquid, and solid phases (hydrogen, gasoil, and gamma alumina), and the interactions of the phases, through drag models coupled with the k-epsilon turbulence model. For this purpose, an Euler-Euler model was employed, which considers the system to be constituted of three phases (gaseous, liquid, and solid), characterized by their physical and thermal properties, with the transport processes developing in the transient regime. The proposed model of the three-phase fluidized bed reactor was solved numerically using the ANSYS Fluent software with different mesh refinements in the bed expansion zone, in order to observe their influence on the hydrodynamic parameters and convergence criteria. With this model and the numerical simulations obtained from its resolution, it was possible to predict the volume fractions (hold-ups) and the velocity magnitudes for an unsteady system from the established initial and boundary conditions.
Keywords: three-phase fluidized bed system, CFD simulation, mesh dependency study, hydrodynamic study
Procedia PDF Downloads 166
277 Large Eddy Simulation of Particle Clouds Using Open-Source CFD
Authors: Ruo-Qian Wang
Abstract:
Open-source CFD has become increasingly popular and promising, and recent progress in multiphase flow enables new CFD applications, providing an economic and flexible research tool for complex flow problems. We introduce a numerical study that uses four-way-coupled Euler-Lagrangian Large-Eddy Simulations to resolve particle cloud dynamics with OpenFOAM and CFDEM: the fractioned Navier-Stokes equations are solved numerically for the fluid-phase motion, the solid-phase motion is addressed by Lagrangian tracking of every single particle, and total momentum is conserved by fluid-solid inter-phase coupling. A grid convergence test was performed, which shows that the current mesh resolution is appropriate. We then validated the code by comparing numerical results with experiments in terms of particle cloud settlement and growth; good agreement was obtained, showing the reliability of the present numerical schemes. The time and height at phase separation were defined and analyzed for a variety of initial release conditions, and empirical formulas were drawn to fit the results.
Keywords: four-way coupling, dredging, land reclamation, multiphase flows, oil spill
Procedia PDF Downloads 429
276 Polynomial Chaos Expansion Combined with Exponential Spline for Singularly Perturbed Boundary Value Problems with Random Parameter
Authors: W. K. Zahra, M. A. El-Beltagy, R. R. Elkhadrawy
Abstract:
Many practical problems in science and technology have developed over the past decades, for instance, mathematical boundary layer theory and the approximation of solutions of problems described by differential equations. When such problems involve large or small parameters, they become increasingly complex and therefore require the use of asymptotic methods. In this work, we consider singularly perturbed boundary value problems that contain very small parameters; moreover, we treat these perturbation parameters as random variables. We propose a numerical method to solve this kind of problem, based on an exponential spline, Shishkin mesh discretization, and polynomial chaos expansion. The polynomial chaos expansion is used to handle the randomness in the perturbation parameter, and Monte Carlo simulations (MCS) are used to validate the solution and the accuracy of the proposed method. Numerical results are provided to show the applicability and efficiency of the proposed method, which maintains remarkably high accuracy and is ε-uniformly convergent of almost second order.
Keywords: singular perturbation problem, polynomial chaos expansion, Shishkin mesh, two small parameters, exponential spline
Procedia PDF Downloads 160
275 Worst-Case Load Shedding in Electric Power Networks
Authors: Fu Lin
Abstract:
We consider the worst-case load-shedding problem in electric power networks where a number of transmission lines are to be taken out of service. The objective is to identify a prespecified number of line outages that lead to the maximum interruption of power generation and load at the transmission level, subject to the active power-flow model, the load and generation capacity of the buses, and the phase-angle limit across the transmission lines. For this nonlinear model with binary constraints, we show that all decision variables are separable except for the nonlinear power-flow equations. We develop an iterative decomposition algorithm, which converts the worst-case load shedding problem into a sequence of small subproblems. We show that the subproblems are either convex problems that can be solved efficiently or nonconvex problems that have closed-form solutions. Consequently, our approach is scalable for large networks. Furthermore, we prove the convergence of our algorithm to a critical point, and the objective value is guaranteed to decrease throughout the iterations. Numerical experiments with IEEE test cases demonstrate the effectiveness of the developed approach.
Keywords: load shedding, power system, proximal alternating linearization method, vulnerability analysis
Procedia PDF Downloads 140
274 Eros and Postmodern Nihilism in Don DeLillo’s Zero K (2016): A Psychoanalytical Reading
Authors: Nouioua Wafa
Abstract:
It is broadly accepted that the existence of postmodern individuals is distinguished by a predominant presence of skepticism, anxiety, and loneliness. This social unrest is the consequence of a drastic shift in how reality and meaning are conceived, which has been replaced by something referred to in media theory and criticism as hyperreality. The purpose of this paper is to investigate the hyperreality of the postmodern nihilistic American community that Don DeLillo depicts in Zero K (2016), through the use of Jean Baudrillard's notions of simulacra and simulation. It is a troubled, technological, late-capitalist society obsessed with immortality and fear of demise, and it is therefore an appropriate reading to which to apply Sigmund Freud's theory of the life drive (Eros), which refers to the life instinct fundamental to all humans and the urge to support productivity and construction. The results obtained from a qualitative analysis of Zero K indicate the presence of a clash between the characters' life drive and fear of mortality. In an effort to escape loneliness and death, the character Ross Lockhart undergoes, after a moment of hesitation, cryonic freezing at the Convergence to preserve his life as well as that of his wife Artis, yet his son Jeffrey is firmly convinced of the uselessness of combating inevitable death.
Keywords: Don DeLillo, Eros, postmodern nihilism, Zero K
Procedia PDF Downloads 82
272 Iterative Solver for Solving Large-Scale Frictional Contact Problems
Authors: Thierno Diop, Michel Fortin, Jean Deteix
Abstract:
Since the precise formulation of the elastic part is irrelevant to the description of the algorithm, we consider a generic case; in practice, however, we have to deal with a nonlinear material (for instance, a Mooney-Rivlin model). We are interested in solving a finite element approximation of the problem, leading to large-scale nonlinear discrete problems and, after linearization, to large linear systems, and ultimately to calculations requiring iterative methods. This also implies that the penalty method, and therefore the augmented Lagrangian method, are to be banned because of their negative effect on the condition number of the underlying discrete systems and thus on the convergence of iterative methods. This is a break with the mainstream of methods for contact, in which the augmented Lagrangian is the principal tool. We first present the problem and its discretization; this leads us to describe a general solution algorithm relying on a preconditioner for saddle-point problems, which we describe in some detail as it is not entirely standard. We propose an iterative approach for solving three-dimensional frictional contact problems between elastic bodies, including contact with a rigid body, contact between two or more bodies, and also self-contact.
Keywords: frictional contact, three-dimensional, large-scale, iterative method
Procedia PDF Downloads 211
271 Using Personalized Spiking Neural Networks, Distinct Techniques for Self-Governing
Authors: Brwa Abdulrahman Abubaker
Abstract:
Recently, there has been a lot of interest in the difficult task of applying reinforcement learning to autonomous mobile robots. Traditional reinforcement learning (TRL) techniques have many drawbacks, such as lengthy computation times, intricate control frameworks, a great deal of trial-and-error searching, and sluggish convergence. In this paper, a modified spiking neural network (SNN) is used to offer a distinct method for autonomous mobile robot learning and control in unexpected surroundings. As a learning algorithm, the suggested model combines dopamine modulation with spike-timing-dependent plasticity (STDP). In order to create more computationally efficient, biologically inspired control systems that are adaptable to changing settings, this work uses the effective and physiologically credible Izhikevich neuron model. The study is primarily focused on creating an algorithm for target tracking in the presence of obstacles. Results show that the SNN trained with three obstacles yielded an impressive 96% success rate for our proposal, with collisions happening in about 4% of the 214 simulated seconds.
Keywords: spiking neural network, spike-timing-dependent plasticity, dopamine modulation, reinforcement learning
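A minimal sketch of the Izhikevich neuron the paper builds on (regular-spiking parameters, constant input current); the STDP/dopamine learning rule itself is not shown, and the step size and input are assumptions.

```python
# Izhikevich neuron: v' = 0.04 v^2 + 5v + 140 - u + I,  u' = a(bv - u),
# with reset v <- c, u <- u + d whenever v reaches the 30 mV spike peak.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
dt, T, I = 0.5, 1000.0, 10.0         # ms time step, duration, input current

v, u = -65.0, b * -65.0
spikes = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * (a * (b * v - u))
    if v >= 30.0:                    # spike: record the time and reset
        spikes.append(step * dt)
        v, u = c, u + d
print(f"{len(spikes)} spikes in {T:.0f} ms")
```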
Procedia PDF Downloads 21
270 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
This work is designed to facilitate the recognition of congruent prosthetic movements through context-to-motion translations guided by images, verbal prompts, and the user's nonverbal communication, such as facial expressions, gestures, paralinguistics, scene context, and object recognition; the approach can also be applied to other tasks, such as walking, treating prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements, which can improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, making them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems; it likewise uses sensory feedback to learn about the dynamics of the environment and then generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback, meaning that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary first to collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users, since hand gestures and body language influence communication and social interaction, offering users a way to maximize their quality of life, social interaction, and gesture communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
Procedia PDF Downloads 101
269 Use the Null Space to Create Starting Point for Stochastic Programming
Authors: Ghussoun Al-Jeiroudi
Abstract:
Stochastic programming is a powerful technique used to solve real-life problems whose data is subject to significant uncertainty; such uncertainty is well studied and modeled by stochastic programming. Every day, problems become bigger and bigger, and the need for tools that can deal with large-scale problems increases. The interior point method is a perfect tool for solving such problems, and it is widely employed to solve the programs that arise from stochastic programming. It is an iterative technique, so it requires a starting point, and a well-designed starting point plays an important role in improving the convergence speed. In this paper, we propose a starting point for interior point methods for multistage stochastic programming. Usually, the optimal solution of stage k+1 is used as the starting point for stage k. This point has the advantage of being close to the solution of the current program; however, it has the disadvantage of not lying in the feasible region of the current program. We therefore suggest taking this point and modifying it by adding to it a vector in the null space of the matrix of the unchanged constraints, because the solution changes only in the null space of this matrix.
Keywords: interior point methods, stochastic programming, null space, starting points
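A hedged sketch of the construction described above: shift the previous stage's solution inside the null space of the unchanged constraints, so those constraints stay satisfied exactly while the point moves toward feasibility for the changed ones. The matrices, dimensions, and least-squares choice of the null-space step are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 8))   # unchanged constraints  A x = b
C = rng.standard_normal((2, 8))   # changed constraints    C x = d
x_prev = rng.standard_normal(8)   # optimal point of the previous stage
b = A @ x_prev                    # x_prev already satisfies A x = b ...
d = rng.standard_normal(2)        # ... but not the new right-hand side

Z = null_space(A)                 # columns span {v : A v = 0}
# Choose the null-space step by least squares against the changed constraints.
z, *_ = np.linalg.lstsq(C @ Z, d - C @ x_prev, rcond=None)
x0 = x_prev + Z @ z               # candidate starting point

print(np.allclose(A @ x0, b))     # True: unchanged constraints preserved
print(np.linalg.norm(C @ x0 - d)) # residual of the changed constraints
```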
Procedia PDF Downloads 418
268 Numerical Study on Pretensioned Bridge Girder Using Thermal Strain Technique
Authors: Prashant Motwani, Arghadeep Laskar
Abstract:
The transfer of prestress force from prestressing strands to the surrounding concrete depends on the bond between the two materials, and it is essential to understand the actual bond stress distribution along the transfer length to determine the transfer zone in pre-tensioned concrete. A 3D nonlinear finite element model has been developed to simulate the transfer of prestress force from steel to concrete in pre-tensioned bridge girders through a thermal strain technique, using the commercially available package ABAQUS. A full-scale bridge girder has been analyzed with the thermal strain approach, where the damage plasticity constitutive model has been used to model the concrete. Parameters such as concrete strain, effective prestress, upward camber, and longitudinal stress have been compared with analytical results; the discrepancy between numerical and analytical values was within 20%. The paper also presents a convergence study on the mesh density and aspect ratio of the elements used in the finite element study.
Keywords: aspect ratio, bridge girder, centre of gravity of strand, mesh density, finite element model, pretensioned bridge girder
Procedia PDF Downloads 243
267 The Convergence of IoT and Machine Learning: A Survey of Real-time Stress Detection System
Authors: Shreyas Gambhirrao, Aditya Vichare, Aniket Tembhurne, Shahuraj Bhosale
Abstract:
In today's rapidly evolving environment, stress has emerged as a significant health concern across different age groups. Stress that is not controlled, whether it comes from job responsibilities, health issues, or the never-ending news cycle, can have a negative effect on our well-being, and the problem is further aggravated by our ongoing connection to technology. In this high-tech age, identifying and controlling stress is vital. To address this health issue, the study focuses on three key metrics for stress detection: body temperature, heart rate, and galvanic skin response (GSR). These parameters, together with a Support Vector Machine classifier, allow the system to categorize stress into three groups: 1) stressed, 2) not stressed, and 3) moderate stress. In the proposed system, a NodeMCU combined with the corresponding sensors collects data in real time and rapidly categorizes individuals based on their stress levels. Real-time stress detection is made possible by this creative combination of hardware and software.
Keywords: real-time stress detection, NodeMCU, sensors, heart rate, body temperature, galvanic skin response (GSR), support vector machine
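A hedged sketch of the three-class SVM stage: classify (heart rate, body temperature, GSR) readings into not-stressed / moderate / stressed. The class centres, units, and synthetic training data are assumptions, not the paper's dataset.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
centers = {0: (70, 36.6, 2.0),   # not stressed: (bpm, deg C, microsiemens)
           1: (85, 36.9, 5.0),   # moderate stress
           2: (105, 37.2, 9.0)}  # stressed
X = np.vstack([rng.normal(c, (6.0, 0.15, 0.8), size=(100, 3))
               for c in centers.values()])
y = np.repeat([0, 1, 2], 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[98, 37.1, 8.5]]))  # a reading the NodeMCU might stream
```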
Procedia PDF Downloads 72
266 Improved Multi-Objective Particle Swarm Optimization Applied to Design Problem
Authors: Kapse Swapnil, K. Shankar
Abstract:
Aiming to optimize the weight and deflection of a cantilever beam subjected to maximum-stress and maximum-deflection constraints, Multi-Objective Particle Swarm Optimization (MOPSO) with a utopia-point-based local search is implemented. The utopia point is used to steer the search towards the Pareto optimal set. The elite candidates obtained during the iterations are stored in an archive according to non-dominated sorting, and the archive is truncated based on least crowding distance. Local search is also performed on the elite candidates, and the most diverse particle is selected as the global best. The method is implemented on standard test functions, where it is observed that the improved algorithm gives better convergence and diversity than NSGA-II in fewer iterations. Implementation on a practical structural problem shows that within 5 to 6 iterations the improved algorithm converges with better diversity, as evident from the improvement of the cantilever beam design by an average of 0.78% in weight and 9.28% in deflection compared to NSGA-II.
Keywords: utopia point, multi-objective particle swarm optimization, local search, cantilever beam
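A minimal sketch of the crowding-distance measure used for the archive truncation (the standard NSGA-II formulation: boundary solutions get infinite distance, interior ones the normalized side-lengths of their neighbours' cuboid); the sample objective values are illustrative.

```python
import numpy as np

def crowding_distance(objectives):
    # objectives: (n_solutions, n_objectives) array of objective values.
    n, m = objectives.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(objectives[:, j])
        f = objectives[order, j]
        span = f[-1] - f[0] or 1.0
        dist[order[0]] = dist[order[-1]] = np.inf   # keep the extremes
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

# Two objectives (e.g. weight, deflection) for five archive members;
# truncate the archive by dropping the smallest crowding distances.
F = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 5.5], [4.0, 4.5], [5.0, 1.0]])
print(crowding_distance(F))
```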
Procedia PDF Downloads 520
265 High Performance Field Programmable Gate Array-Based Stochastic Low-Density Parity-Check Decoder Design for IEEE 802.3an Standard
Authors: Ghania Zerari, Abderrezak Guessoum, Rachid Beguenane
Abstract:
This paper introduces a high-performance architecture for a fully parallel stochastic Low-Density Parity-Check (LDPC) decoder on a field programmable gate array (FPGA). The new approach is designed to decrease the decoding latency and to reduce FPGA logic utilisation. To accomplish the targeted logic utilisation reduction, the routing of the proposed sub-variable node (VN) internal memory is designed to utilize one slice of distributed RAM. Furthermore, VN initialization using the channel input probability is performed to enhance decoder convergence, without extra resources and without integrating the output saturated counters. The Xilinx FPGA implementation of the IEEE 802.3an standard LDPC code shows that the proposed decoding approach attains high performance along with a reduction of FPGA logic utilisation.
Keywords: low-density parity-check (LDPC) decoder, stochastic decoding, field programmable gate array (FPGA), IEEE 802.3an standard
Procedia PDF Downloads 297
264 A Unified Approach for Naval Telecommunication Architectures
Authors: Y. Lacroix, J.-F. Malbranque
Abstract:
We present a chronological evolution of naval telecommunication networks. We distinguish several periods: without multiplexers, with multiplexers, with switch systems, with federative systems, with medium switching, and with medium switching combined with wireless networks. This highlights the introduction of new layers and technologies into the architecture. These architectures are presented using layered transmission models, in a unified way, which enables us to integrate pre-existing models. A ship of a naval fleet has internal communications (i.e., the application networks of the edge) and external communications (i.e., the use of means of transmission between edges). We propose architectures, deduced from the layer model, that are the point of convergence between the networks on board and the HF and UHF radio and satellite resources. This modelling allows us to consider end-to-end naval communications in a more global way, that is, from the user on board to the user on shore, including transmission and networks on the shore side. The new architectures must take care of the quality of service for end-to-end communications, as remote control is developing considerably and will continue to do so in the future. Naval telecommunications will become more and more complex and will use more and more advanced technologies, so it will be necessary to establish clear global communication schemes to guarantee the consistency of the architectures. Our latest model has been implemented in a military naval situation and serves as the basic architecture for the RIFAN2 network.
Keywords: equilibrium beach profile, eastern tombolo of Giens, potential function, erosion
Procedia PDF Downloads 291
263 Improved Multi-Channel Separation Algorithm for Satellite-Based Automatic Identification System Signals Based on Artificial Bee Colony and Adaptive Moment Estimation
Authors: Peng Li, Luan Wang, Haifeng Fei, Renhong Xie, Yibin Rui, Shanhong Guo
Abstract:
The applications of the satellite-based automatic identification system (S-AIS) pave the road for wide-range maritime traffic monitoring and management. However, the satellite's field of view covers multiple AIS self-organizing networks, which leads to collisions between AIS signals from different cells. The contribution of this work is an improved multi-channel blind source separation algorithm, based on the Artificial Bee Colony (ABC) method and advanced stochastic optimization, to separate the mixed AIS signals. The proposed approach adopts a modified ABC algorithm to obtain an optimized initial separating matrix, which can expedite the correction of the initialization bias, and utilizes Adaptive Moment Estimation (Adam) to update the separating matrix by dynamically adjusting the learning rate for each parameter. Simulation results show that the algorithm speeds up convergence and leads to better separation accuracy.
Keywords: satellite-based automatic identification system, blind source separation, artificial bee colony, adaptive moment estimation
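A hedged sketch of the Adam update of a separating matrix W. A generic natural-gradient ICA rule (tanh nonlinearity, suited to impulsive sources) stands in for the paper's separation criterion, and a random W0 stands in for the ABC-optimized initialization; the mixing setup is synthetic.

```python
import numpy as np

def ica_direction(W, X):
    # Natural-gradient Infomax direction (I - tanh(Y) Y^T / n) W, with Y = W X.
    Y = W @ X
    n = X.shape[1]
    return (np.eye(W.shape[0]) - np.tanh(Y) @ Y.T / n) @ W

rng = np.random.default_rng(3)
S = np.vstack([rng.laplace(size=5000),            # two impulsive sources
               0.5 * rng.laplace(size=5000)])
A = rng.standard_normal((2, 2))                   # unknown mixing matrix
X = A @ S                                         # observed mixtures

W = 0.1 * rng.standard_normal((2, 2))             # in the paper, ABC supplies W0
m, v = np.zeros_like(W), np.zeros_like(W)
beta1, beta2, lr, eps = 0.9, 0.999, 0.01, 1e-8
for t in range(1, 1001):                          # Adam ascent along the ICA direction
    g = ica_direction(W, X)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    W += lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)
print(np.round(W @ A, 2))   # approaches a scaled permutation matrix
```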
Procedia PDF Downloads 186
262 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer, governed by phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, all the more so when the effects of a participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the model. Recently, solutions of complicated mathematical problems by statistical methods based on the randomization of naturally occurring phenomena have gained significant importance: photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium; the method increases the accuracy of estimation on the one hand and the computational cost on the other. Participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra, and modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of the MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences: Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated, and the histories of some randomly sampled photon bundles were recorded to train a back-propagation Artificial Neural Network (ANN) model; the flux calculated using the standard quasi-PMC was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model; the approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) than the other cases. There is great scope for machine learning models to help further reduce the computational cost once trained successfully; multiple ways of selecting the input data, as well as various architectures, will be tried so that the environment concerned can be fully addressed by the ANN model, and better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks
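An illustrative sketch of the variance contrast the abstract reports: plain Monte Carlo versus a Halton low-discrepancy sequence on a smooth toy integrand (not the radiative-transfer kernel itself); the integrand, dimension, and sample count are assumptions.

```python
import math
import numpy as np
from scipy.stats import qmc

f = lambda u: np.exp(-np.sum(u**2, axis=1))   # integrand on the unit cube
n, dim = 4096, 3

rng = np.random.default_rng(0)
mc = f(rng.random((n, dim))).mean()           # plain Monte Carlo estimate

halton = qmc.Halton(d=dim, seed=0)            # low-discrepancy point set
qmc_est = f(halton.random(n)).mean()

exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** 3
print(f"MC error  : {abs(mc - exact):.2e}")
print(f"QMC error : {abs(qmc_est - exact):.2e}")   # typically much smaller
```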
Procedia PDF Downloads 223
261 Finite Element Analysis of Thermally-Induced Bistable Plate Using Four Plate Elements
Authors: Jixiao Tao, Xiaoqiao He
Abstract:
The present study deals with the finite element (FE) analysis of a thermally-induced bistable plate using various plate elements. The quadrilateral plate elements include the 4-node conforming plate element based on the classical laminated plate theory (CLPT), the 4-node and 9-node Mindlin plate elements based on the first-order shear deformation laminated plate theory (FSDT), and a displacement-based 4-node quadrilateral element (RDKQ-NL20). Using von Karman's large-deflection theory and the total Lagrangian (TL) approach, the nonlinear FE governing equations for a plate under thermal load are derived. A convergence analysis for the four elements is first conducted; the elements are then used to predict the stable shapes of the thermally-induced bistable plate. Numerical tests show that the plate elements based on FSDT, namely the 4-node and 9-node Mindlin elements, and the RDKQ-NL20 plate element can predict two stable cylindrical shapes, while the 4-node conforming plate element predicts a saddle shape. Comparing the simulation results with ABAQUS, the RDKQ-NL20 element shows the best accuracy among all the elements.
Keywords: bistable, finite element method, geometrical nonlinearity, quadrilateral plate elements
Procedia PDF Downloads 220
260 Regionalism or Ladder-Up: A Theoretical Perspective of Association of Southeast Asian Nations’ Reactions to Belt and Road Initiative
Authors: Yunqi Wang
Abstract:
As a vital region to the Chinese Belt and Road Initiative (BRI), members of the Association of Southeast Asian Nations (ASEAN) have responded to the grand strategy differently: some have expressed fervent support, while others have played the 'hedging' card between great powers. This paper explores the underlying rationale behind such complexity by proposing two theoretical explanations: a Regionalism Hypothesis, where countries respond with hedging, balancing, and bandwagoning behaviours in line with national interests and the norm-based 'ASEAN Way'; and a Ladder-Up Hypothesis, where countries consider the initiative an incentive to remove the bottlenecks of climbing up the economic ladder in Rostow's stages-of-growth model. By analysing reactions from Myanmar, Laos, Indonesia, and Singapore, two patterns are observed. On an empirical note, the more developed economies are more inclined to the Regionalism explanation. On a theoretical note, there has been a gradual convergence between the two explanations, given the impact of economic globalisation on ASEAN. This paper contributes to the current theoretical vacancy in the study of ASEAN and the BRI by capturing the particular norms shared by this regional entity.
Keywords: ASEAN, Belt and Road Initiative, hedging, Rostow's stages of growth, regionalism
Procedia PDF Downloads 117
259 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks
Authors: Tripatjot S. Panag, J. S. Dhillon
Abstract:
The lifetime of a wireless sensor network can be effectively increased by using scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every sensor set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all the others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space, and a set of heuristics is applied to guide the members to a possible solution in their neighborhood; the heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications that require the sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outclasses the existing algorithms: it always finds the optimum solution, and does so with fewer fitness function evaluations than the existing approaches.
Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN
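A hedged sketch of the scheduling objective itself: peel off disjoint sensor sets, each of which alone covers every target point, so the network lifetime scales with the number of such sets. A simple greedy rule stands in for the paper's population-based heuristic search, and the coverage matrix is synthetic.

```python
import numpy as np

def disjoint_covers(coverage):
    # coverage[i, j] = 1 if sensor i senses target point j.
    unused = set(range(coverage.shape[0]))
    covers = []
    while True:
        need = set(range(coverage.shape[1]))
        chosen = []
        while need:
            # Pick the unused sensor covering the most still-needed targets.
            best = max(unused - set(chosen),
                       key=lambda i: len(need & set(np.flatnonzero(coverage[i]))),
                       default=None)
            gain = need & set(np.flatnonzero(coverage[best])) if best is not None else set()
            if not gain:
                return covers            # no further complete cover exists
            chosen.append(best)
            need -= gain
        covers.append(chosen)
        unused -= set(chosen)

rng = np.random.default_rng(5)
cov = (rng.random((30, 8)) < 0.4).astype(int)   # 30 sensors, 8 target points
sets = disjoint_covers(cov)
print(f"{len(sets)} disjoint covers -> lifetime multiplied by {len(sets)}")
```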
Procedia PDF Downloads 452
258 Conventional and Islamic Perspective in Accounting: Potential for Alternative Reporting Framework
Authors: Shibly Abdullah
Abstract:
This paper provides an overview of the fundamental philosophical and functional differences between conventional and Islamic accounting. The aim of this research is to undertake a detailed analysis, focusing on specific illustrations drawn from both systems, and to highlight how these differences affect the recording of financial transactions and the preparation of financial reports for a range of stakeholders. Accounting, universally considered a platform for providing a 'true and fair' view of corporate entities, can be challenged in the current world view, as the business environment has evolved and transformed significantly. The growth of non-traditional corporate entities such as Islamic financial institutions fundamentally questions the applicability of conventional accounting standards to the preparation of Shariah-compliant financial reporting. Coupled with this, there are significant concerns about the wider applicability of Islamic accounting standards and frameworks for achieving reporting practices that satisfy information needs generally. Against the backdrop of such a context, this paper raises the fundamental question of how a potential convergence could be achieved between these two systems in order to provide users a transparent and comparable state of financial information, resulting in an alternative framework of financial reporting.
Keywords: accounting, conventional accounting, corporate reporting, Islamic accounting
Procedia PDF Downloads 282
257 Inverse Heat Conduction Analysis of Cooling on Run-Out Tables
Authors: M. S. Gadala, Khaled Ahmed, Elasadig Mahdi
Abstract:
In this paper, we introduce a gradient-based inverse solver to obtain the missing boundary conditions based on the readings of internal thermocouples. The results show that the method is very sensitive to measurement errors and becomes unstable when small time steps are used. Artificial neural networks are shown to be capable of capturing the whole thermal history on the run-out table but are not very effective in restoring the detailed behavior of the boundary conditions; they also behave poorly in nonlinear cases and where the boundary condition profile is different. GA and PSO are more effective in finding a detailed representation of the time-varying boundary conditions, as well as in nonlinear cases; however, their convergence takes longer. A variation of the basic PSO, called CRPSO, showed the best performance among the three versions. PSO also proved to be effective in handling noisy data, especially when its performance parameters were tuned. An increase in the self-confidence parameter was likewise found to be effective, as it increased the global search capability of the algorithm. RPSO was the most effective variation in dealing with noise, closely followed by CRPSO. The latter variation is recommended for inverse heat conduction problems, as it combines the efficiency and effectiveness required by these problems.
Keywords: inverse analysis, function specification, neural networks, particle swarm, run-out table
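A minimal global-best PSO of the kind compared above (the RPSO/CRPSO variants add repulsion and constriction on top of this), shown on a standard test function rather than the inverse heat-conduction objective; swarm size and coefficients are conventional assumptions.

```python
import numpy as np

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))          # particle positions
    v = np.zeros_like(x)                      # particle velocities
    pbest, pbest_val = x.copy(), f(x)         # personal bests
    for _ in range(iters):
        g = pbest[np.argmin(pbest_val)]       # global best position
        r1, r2 = rng.random((2, n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = f(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
    return pbest[np.argmin(pbest_val)], pbest_val.min()

rosenbrock = lambda x: (1 - x[:, 0])**2 + 100 * (x[:, 1] - x[:, 0]**2)**2
print(pso(rosenbrock))   # should approach the optimum at (1, 1)
```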
Procedia PDF Downloads 240
256 A Comparative Study of the Maximum Power Point Tracking Methods for PV Systems Using Boost Converter
Authors: M. Doumi, A. Miloudi, A.G. Aissaoui, K. Tahir, C. Belfedal, S. Tahir
Abstract:
Studies of photovoltaic systems are increasing extensively because they represent a large, secure, essentially inexhaustible, and broadly available resource as a future energy supply. However, the output power induced in the photovoltaic modules is influenced by the intensity of solar radiation, the temperature of the solar cells, and so on. Therefore, to maximize the efficiency of a photovoltaic system, it is necessary to track the maximum power point of the PV array; for this, a Maximum Power Point Tracking (MPPT) technique is used. The algorithms considered are based on the Perturb-and-Observe, Incremental Conductance, and fuzzy logic methods. These techniques vary in many aspects, including simplicity, convergence speed, digital or analogue implementation, sensors required, cost, and range of effectiveness. This paper presents a comparative study of three widely adopted MPPT algorithms; their performance is evaluated from the energy point of view by using the simulation tool Simulink®, considering different variations of solar irradiance. MPPT using fuzzy logic shows superior performance and more reliable control than the other methods for this application.
Keywords: photovoltaic system, MPPT, perturb and observe (P&O), incremental conductance (INC), fuzzy logic (FLC)
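A hedged sketch of the Perturb-and-Observe loop on a toy PV power curve with a single maximum; the quadratic P(V), step size, and starting voltage stand in for a real module model and boost-converter duty-cycle control.

```python
def pv_power(v):
    # Toy power-voltage curve: peak of 60 W at 17 V (illustrative, not a module model).
    return max(0.0, -0.08 * (v - 17.0)**2 + 60.0)

def perturb_and_observe(v=12.0, dv=0.2, steps=100):
    p_prev = pv_power(v)
    direction = +1
    for _ in range(steps):
        v += direction * dv          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:               # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"settled near {v_mpp:.2f} V, {p_mpp:.1f} W")   # oscillates about 17 V
```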
Procedia PDF Downloads 411
255 Human Rights Violation in Modern Society
Authors: Shenouda Salib Hosni Rofail
Abstract:
The interface between development and human rights has long been the subject of scholarly debate. As a result, a set of principles, ranging from the right to development to a human-rights-based approach to development, has been adopted to understand the dynamics between the two concepts. Despite these attempts, the exact link between development and human rights is not yet fully understood. However, the inevitable interdependence between the two concepts, and the idea that development efforts must be made while respecting human rights, have gained prominence in recent years. On the other hand, the emergence of sustainable development as a widely accepted approach to development goals and policies further complicates this unresolved convergence. The place of sustainable development in the human rights discourse, and its role in ensuring the sustainability of development programs, require systematic research. The aim of this article is therefore to examine the relationship between development and human rights, with a particular focus on the place of the principles of sustainable development in international human rights law, and to examine whether it recognizes a right to sustainable development. The article argues that the principles of sustainable development are recognized, directly or implicitly, in various human rights instruments, which is an affirmative answer to the question posed above. Accordingly, this document scrutinizes international and regional human rights instruments, as well as the case law and interpretations of human rights bodies, to support this hypothesis.
Keywords: sustainable development, human rights, the right to development, the human-rights-based approach to development, environmental rights, economic development, social sustainability, human rights protection, human rights violations, workers' rights, justice, security
Procedia PDF Downloads 50
254 Adopting Flocks of Birds Approach to Predator for Anomalies Detection on Industrial Control Systems
Abstract:
Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA) can be found in many different critical infrastructures, from nuclear management to utilities, medical equipment, power, waste, and engine management on ships and planes. The role SCADA plays in critical infrastructure has resulted in a call to secure it: many lives depend on it for daily activities, and the attack vectors are becoming more sophisticated. Hence, the security of ICS is vital, as a malfunction might result in huge risk. This paper describes how the application of a Prey-Predator (PP) approach from flocks of birds could enhance the detection of malicious activities on ICS. The PP approach describes how these animals, in groups or flocks, detect predators by following some simple rules. They are not necessarily very intelligent animals, but their approach to solving complex issues such as detection, through cooperation, coordination, and communication, is worth emulating. This paper emulates the flocking behavior seen in birds for detecting predators: the PP approach adopts a six-nearest-birds rule for detecting any predator, with local and global bests based on individual detection as well as group detection. The PP algorithm was designed following a MapReduce methodology with a Split-Detection-Convergence (SDC) approach.
Keywords: artificial life, industrial control system (ICS), IDS, prey predator (PP), SCADA, SDC
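A hedged sketch of the six-nearest-birds rule: each agent watches only its six nearest flock-mates and raises an alarm if it, or any of them, sees the predator. The positions, sight range, and single propagation hop are illustrative; in the ICS setting the "predator" would map to an anomalous node.

```python
import numpy as np

rng = np.random.default_rng(11)
birds = rng.uniform(0, 100, (40, 2))      # flock positions in a 100x100 field
predator = np.array([20.0, 20.0])
SIGHT = 15.0                              # individual detection range (assumed)

# Local detection: which birds see the predator directly.
sees = np.linalg.norm(birds - predator, axis=1) < SIGHT

# Each bird's six nearest neighbours (column 0 of the sort is the bird itself).
dists = np.linalg.norm(birds[:, None] - birds[None, :], axis=2)
nearest6 = np.argsort(dists, axis=1)[:, 1:7]

# Group detection: one round of alarm propagation through those neighbours.
alerted = sees | sees[nearest6].any(axis=1)
print(f"{sees.sum()} birds detect directly, {alerted.sum()} after one hop")
```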
Procedia PDF Downloads 301