Search results for: stochastic approximation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 958

598 Design and Implementation of A 10-bit SAR ADC with A Programmable Reference

Authors: Hasmayadi Abdul Majid, Yuzman Yusoff, Noor Shelida Salleh

Abstract:

This paper presents a single-ended 38.5 kS/s 10-bit SAR ADC with a programmable reference, realized in MIMOS’s 0.35 µm CMOS process. The design uses a resistive DAC, a dynamic comparator with a pre-amplifier, and SAR digital logic to achieve 10 effective bits. A programmable reference circuit allows the ADC to operate with input ranges from 0.6 V to 2.1 V. Implemented in 0.35 µm CMOS technology, the converter consumes less than 7.5 mW from a 3 V supply.
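
For orientation, the successive-approximation register performs a bit-wise binary search of the reference range; the following is a minimal Python sketch with an idealized DAC (not the authors' circuit), and all names are illustrative:

```python
# Minimal sketch of the successive-approximation (binary) search,
# assuming an ideal DAC; function and parameter names are illustrative.

def sar_convert(vin, vref, bits=10):
    """Return the digital code for vin via bit-wise binary search."""
    code = 0
    for i in reversed(range(bits)):
        trial = code | (1 << i)            # tentatively set bit i
        vdac = vref * trial / (1 << bits)  # ideal resistive-DAC output
        if vin >= vdac:                    # comparator decision
            code = trial                   # keep the bit, else leave it clear
    return code

# Example: a 1.23 V input against a 2.1 V programmable reference
print(sar_convert(1.23, 2.1))              # -> 599
```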

Keywords: successive approximation register analog-to-digital converter, SAR ADC, resistive DAC, programmable reference

Procedia PDF Downloads 518
597 Study of Half-Metallic Ferromagnetism in CeFeO3

Authors: A. Abbad, W. Benstaali

Abstract:

Using first-principles calculations based on density functional theory and the generalized gradient approximation, we predict the electronic and magnetic properties of the CeFeO3 orthorhombic perovskite. The calculated densities of states presented in this study show that CeFeO3 is metallic within the GGA scheme, whereas within GGA+U it exhibits a half-metallic character with an integer magnetic moment of 24 µB per formula unit at its equilibrium volume, which makes this compound a promising candidate for applications in spintronics.

Keywords: CeFeO3, magnetic moment, half-metallic, electronic properties

Procedia PDF Downloads 369
596 Consideration of Uncertainty in Engineering

Authors: A. Mohammadi, M. Moghimi, S. Mohammadi

Abstract:

Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques that take uncertainty into account should be used to control and minimize the risk associated with design and operation. To consider uncertainty in an engineering problem, the optimization problem should be solved over a suitable range of each uncertain input variable instead of at a single estimated point. With a deterministic optimization problem, a large computational burden is required to consider every possible and probable combination of the uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different such methods are presented and analyzed.

Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method

Procedia PDF Downloads 414
595 Evidence of Half-Metallicity in Cubic PrMnO3 Perovskite

Authors: B. Bouadjemi, S. Bentata, W. Benstaali, A. Abbad

Abstract:

The electronic and magnetic properties of the cubic praseodymium oxide perovskite PrMnO3 were calculated using density functional theory (DFT) with both the generalized gradient approximation (GGA) and the GGA+U approach, where U is an on-site Coulomb interaction correction. The results show a half-metallic ferromagnetic ground state for PrMnO3 within GGA+U, while a semi-metallic ferromagnetic character is observed within GGA. These results make cubic PrMnO3 a promising candidate for applications in spintronics.

Keywords: first-principles, electronic properties, transition metal, materials science

Procedia PDF Downloads 466
594 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends strongly on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs, based on deep reinforcement learning (RL) algorithms, to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method, grounded in Markov decision processes, that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, a microgrid in islanded mode is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank used as long-term storage. The proposed approach focuses on transferring agent learning across microgrid sizes for near-optimal operating cost approximation with deep RL. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm known to be data-efficient and able to learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer, which is then used to train BCQ, so that agent learning can be performed without policy updates during interaction sampling. A comparison between online RL and the presented method is performed based on the per-environment score and on the computation time.
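
As a minimal illustration of the batch-constrained idea, here is a tabular sketch (the paper's deep BCQ adds neural networks and a generative action model; the toy buffer and all names below are illustrative):

```python
import numpy as np
from collections import defaultdict

# Minimal tabular sketch of batch-constrained Q-learning: Q-values are
# learned offline from a fixed buffer, and the bootstrap maximum is
# restricted to actions actually observed in the buffer for each state.
# This is an illustrative simplification, not the paper's implementation.

def batch_constrained_q(buffer, n_states, n_actions,
                        gamma=0.99, lr=0.1, epochs=200):
    Q = np.zeros((n_states, n_actions))
    seen = defaultdict(set)                    # actions seen per state
    for s, a, r, s2 in buffer:
        seen[s].add(a)
    for _ in range(epochs):
        for s, a, r, s2 in buffer:
            allowed = list(seen[s2]) or [a]    # constrain the bootstrap
            target = r + gamma * max(Q[s2, b] for b in allowed)
            Q[s, a] += lr * (target - Q[s, a])
    return Q

# Example: a toy buffer of (state, action, reward, next_state) tuples
buffer = [(0, 0, 0.0, 1), (1, 1, 1.0, 0), (0, 1, 0.2, 0)]
print(batch_constrained_q(buffer, n_states=2, n_actions=2).round(2))
```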

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 122
593 An Approximation Technique to Automate Tron

Authors: P. Jayashree, S. Rajkumar

Abstract:

With virtual and augmented reality environments booming to provide life-like experiences, gaming is a major tool in supporting such learning environments. In this work, a variant of the Voronoi heuristic employing supervised learning is proposed for the TRON game. The paper discusses the features that are most useful when a machine-learning bot is used as an opponent against a human player. Various game scenarios, the nature of the bot, and experimental results are provided for the proposed variant to show that the approach outperforms those currently followed.
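
For context, the classic Voronoi heuristic for TRON scores a position by how many free cells the bot can reach before its opponent; a minimal sketch follows (grid encoding and names are illustrative assumptions, not the paper's variant):

```python
from collections import deque

# Minimal sketch of the Voronoi territory count for TRON: BFS distances
# from both players partition the free cells into each player's region.

def bfs_distances(grid, start):
    """Shortest path lengths from start over free cells ('.')."""
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == '.' and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return dist

def voronoi_score(grid, bot, opponent):
    db, do = bfs_distances(grid, bot), bfs_distances(grid, opponent)
    mine = sum(1 for c in db if c not in do or db[c] < do[c])
    theirs = sum(1 for c in do if c not in db or do[c] < db[c])
    return mine - theirs                       # larger = more territory

grid = ["......",
        "..#...",                              # '#' marks a trail/wall
        "......"]
print(voronoi_score(grid, (0, 0), (2, 5)))
```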

Keywords: artificial intelligence, automation, machine learning, TRON game, Voronoi heuristics

Procedia PDF Downloads 467
592 Cell-Cell Interactions in Diseased Conditions Revealed by Three Dimensional and Intravital Two Photon Microscope: From Visualization to Quantification

Authors: Satoshi Nishimura

Abstract:

Although much information has been garnered from the genomes of humans and mice, it remains difficult to extend that information to explain physiological and pathological phenomena. This is because the processes underlying life are by nature stochastic and fluctuate with time. We therefore developed a novel "in vivo molecular imaging" method based on single- and two-photon microscopy. We visualized and analyzed many life phenomena, including common adult diseases. We integrated the knowledge obtained and established new models that will serve as the basis for new minimally invasive therapeutic approaches.

Keywords: two photon microscope, intravital visualization, thrombus, artery

Procedia PDF Downloads 373
591 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper provides a solution for sustainable sanitation with the development of an innovative toilet system, the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) has been conducted in the SimaPro software employing the Ecoinvent v3.3 database. This study has determined the factors contributing most to the environmental footprint of the NMT system. However, as sensitivity analysis has identified certain operating parameters as critical to the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, have been conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash, and transportation of fertilizer. The analysis provides the distributions and confidence intervals of the selected impact categories, and, in turn, more credible conclusions are drawn on the respective LCIA (Life Cycle Impact Assessment) profile of the NMT system. Finally, the study also yields essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
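
A minimal sketch of the stochastic LCI step follows; all distributions and the surrogate impact model are illustrative assumptions (the study propagates uncertainty through an ANN surrogate of the SimaPro model):

```python
import numpy as np

# Minimal sketch of Monte Carlo uncertainty propagation for an LCI:
# uncertain inventory inputs are sampled and pushed through a surrogate
# impact model. Distributions and the linear surrogate are illustrative.

rng = np.random.default_rng(0)
n = 10_000
raw_material = rng.normal(1.00, 0.05, n)            # kg per functional unit
electricity  = rng.normal(0.80, 0.10, n)            # kWh produced
nox          = rng.lognormal(np.log(0.02), 0.2, n)  # kg NOx emitted

def impact_surrogate(m, e, x):                      # hypothetical GWP model
    return 1.9 * m - 0.5 * e + 265.0 * x            # kg CO2-eq

gwp = impact_surrogate(raw_material, electricity, nox)
lo, hi = np.percentile(gwp, [2.5, 97.5])
print(f"GWP mean {gwp.mean():.2f}, 95% CI [{lo:.2f}, {hi:.2f}] kg CO2-eq")
```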

Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 225
590 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by a large inertia force; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the determination of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces, the link weight, and the geometrical, physical, and kinematic characteristics are determined. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of the internal forces along the axis of the link, so that the loads are known at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. These approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow the physical characteristics of the element to be defined, i.e., the compliance matrix of the discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along its axis are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model for the elastic calculation of the links of rod mechanisms. The discrete models of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived, as are the equilibrium equations of the pin and rigid joints expressed through the required internal force parameters. The obtained systems of dynamic equilibrium equations are sufficient for the determination of internal forces in the links of mechanisms whose structure is statically determinate. For statically indeterminate mechanisms, the determination of internal forces additionally requires building a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, programs were written in the MAPLE18 system using the developed technique, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structure, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted along the links as functions of their kinematic characteristics.

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 217
589 Global Stability of Nonlinear Itô Equations and N. V. Azbelev's W-Method

Authors: Arcady Ponosov, Ramazan Kadiev

Abstract:

This work studies the global moment stability of solutions of systems of nonlinear differential Itô equations with delays. A modified regularization method (the W-method) for the analysis of various types of stability of such systems, based on the choice of auxiliary equations and applications of the theory of positive invertible matrices, is proposed and justified. The development of this method for deterministic functional differential equations is due to N. V. Azbelev and his students. Sufficient conditions for the moment stability of solutions, expressed in terms of the coefficients, are given for sufficiently general as well as specific classes of Itô equations.

Keywords: asymptotic stability, delay equations, operator methods, stochastic noise

Procedia PDF Downloads 224
588 Handshake Algorithm for Minimum Spanning Tree Construction

Authors: Nassiri Khalid, El Hibaoui Abdelaaziz and Hajar Moha

Abstract:

In this paper, we introduce and analyse a probabilistic distributed algorithm for the construction of a minimum spanning tree on a network. The algorithm is based on the handshake concept. Initially, each network node is considered a sub-spanning tree, and at each round of the execution of our algorithm, sub-spanning trees are merged. The execution continues until all sub-spanning trees are merged into one. We analyze this algorithm by means of a stochastic process.
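
For orientation, the round-based merging can be sketched sequentially as follows; the Borůvka-style "cheapest outgoing edge" merge below is a stand-in for the randomized distributed handshake that the paper analyzes probabilistically:

```python
# Sequential sketch of round-based fragment merging: each node starts as
# its own sub-spanning tree, and in every round fragments merge over an
# outgoing edge (here the cheapest one, as in Boruvka's algorithm).

def merge_rounds(n, edges):                    # edges: (weight, u, v)
    parent = list(range(n))
    def find(x):                               # union-find with compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst, fragments = [], n
    while fragments > 1:
        best = {}                              # cheapest edge per fragment
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in best or w < best[r][0]:
                        best[r] = (w, u, v)
        for w, u, v in best.values():          # merge selected fragments
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
                fragments -= 1
    return mst

print(merge_rounds(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3)]))
```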

Keywords: spanning tree, distributed algorithm, handshake algorithm, matching, probabilistic analysis

Procedia PDF Downloads 658
587 Stochastic Repair and Replacement with a Single Repair Channel

Authors: Mohammed A. Hajeeh

Abstract:

This paper examines the behavior of a system which, upon failure, is either replaced with probability p or imperfectly repaired with probability q. The system is analyzed using the method of Kolmogorov's forward equations; the analytical expression for the steady-state availability is derived as an indicator of the system's performance. It is found that the analysis becomes more complex as the number of imperfect repairs increases. It is also observed that the availability increases as the number of states and the replacement probability increase. Using such an approach in more complex configurations and in dynamic systems is cumbersome; therefore, it is advisable to resort to simulation or heuristics. In this paper, an example is provided for demonstration.
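
As a minimal sketch of the forward-equation machinery (the three-state layout and rates below are illustrative, not the paper's exact configuration), the steady-state probabilities solve pi Q = 0 with the probabilities summing to one:

```python
import numpy as np

# Minimal sketch: generator matrix of a small working/repair/replacement
# chain, solved for the stationary distribution. Rates are illustrative.

lam, mu, p = 0.1, 1.0, 0.3       # failure rate, repair rate, P(replace)
# states: 0 = working, 1 = under imperfect repair, 2 = being replaced
Q = np.array([
    [-lam, lam * (1 - p), lam * p],
    [mu,   -mu,           0.0    ],
    [mu,   0.0,           -mu    ],
])
A = np.vstack([Q.T, np.ones(3)])             # append normalization row
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)   # solve pi Q = 0, sum(pi) = 1
print("steady-state availability:", pi[0])   # P(system is working)
```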

Keywords: repairable models, imperfect, availability, exponential distribution

Procedia PDF Downloads 287
586 Solutions to Probabilistic Constrained Optimal Control Problems Using Concentration Inequalities

Authors: Tomoaki Hashimoto

Abstract:

Recently, optimal control problems subject to probabilistic constraints have attracted much attention in many research fields. Although probabilistic constraints are generally intractable in optimization problems, several methods have been proposed to deal with them. In most methods, probabilistic constraints are transformed into deterministic constraints that are tractable in optimization problems. This paper examines a method for transforming probabilistic constraints into deterministic constraints for a class of probabilistic constrained optimal control problems.
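
One standard transformation of this kind (not necessarily the one examined in the paper) uses the one-sided Chebyshev (Cantelli) inequality: for a random vector $\xi$ with mean $\mu$ and covariance $\Sigma$, the chance constraint is implied by a deterministic second-order cone constraint,

$$\Pr\{a^{\top}\xi \le b\} \ge 1-\varepsilon \quad\Longleftarrow\quad a^{\top}\mu + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\,\sqrt{a^{\top}\Sigma a} \le b,$$

which holds for any distribution with the given first two moments and is therefore conservative.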

Keywords: optimal control, stochastic systems, discrete-time systems, probabilistic constraints

Procedia PDF Downloads 278
585 Curve Designing Using an Approximating 4-Point C^2 Ternary Non-Stationary Subdivision Scheme

Authors: Muhammad Younis

Abstract:

A ternary 4-point approximating non-stationary subdivision scheme is introduced that generates a family of $C^2$ limiting curves. The theory of asymptotic equivalence is used to analyze the convergence and smoothness of the scheme. The proposed scheme is compared, using different examples, with existing 4-point ternary approximating schemes, which shows that its limit curves behave more pleasantly and can also generate conic sections.

Keywords: ternary, non-stationary, approximation subdivision scheme, convergence and smoothness

Procedia PDF Downloads 477
584 A Multistep Broyden’s-Type Method for Solving Systems of Nonlinear Equations

Authors: M. Y. Waziri, M. A. Aliyu

Abstract:

This paper proposes an approach to improve the performance of Broyden's method for solving systems of nonlinear equations. In this work, we consider information from two preceding iterates, rather than a single preceding iterate, to update Broyden's matrix, producing a better approximation of the Jacobian matrix at each iteration. The numerical results verify that the proposed method clearly enhances the numerical performance of Broyden's method.
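
For reference, here is a minimal sketch of the classic single-step Broyden iteration that the paper generalizes (the multi-step variant forms the secant data from two preceding iterates instead of one; the test system below is illustrative):

```python
import numpy as np

# Minimal sketch of classic Broyden: B is corrected with the latest step
# s and residual change y so that the secant condition B_new @ s = y holds.

def broyden(F, x0, tol=1e-10, max_iter=100):
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                          # initial Jacobian guess
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)             # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B += np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# Example: solve x0^2 + x1^2 = 2 and x0 = x1, giving approximately (1, 1)
print(broyden(lambda x: np.array([x[0]**2 + x[1]**2 - 2, x[0] - x[1]]),
              [1.2, 1.0]))
```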

Keywords: multi-step Broyden, nonlinear systems of equations, computational efficiency, iterate

Procedia PDF Downloads 638
583 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and avoid expensive recomputation. The reciprocals of the predictive densities calculated over the posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models, conditional on equal posterior variances in the lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
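
A minimal sketch of the raw IS-LOO computation described here, assuming a matrix of pointwise log-likelihoods from MCMC draws (TIS-LOO and PSIS-LOO additionally truncate or smooth the largest weights; the synthetic input is only for illustration):

```python
import numpy as np

# Minimal sketch of raw IS-LOO: the weight for draw s and observation i is
# 1 / p(y_i | theta_s), so the LOO predictive density reduces to a
# harmonic mean of pointwise densities over posterior draws.

def is_loo_lppd(log_lik):
    """log_lik: (S draws, N observations) pointwise log-likelihoods."""
    w = np.exp(-log_lik)               # raw importance weights
    p_loo = 1.0 / w.mean(axis=0)       # IS estimate of p(y_i | y_-i)
    return np.log(p_loo).sum()         # overall lppd_loo

rng = np.random.default_rng(1)
log_lik = rng.normal(-1.0, 0.3, size=(4000, 50))   # synthetic MCMC output
print("lppd_loo:", is_loo_lppd(log_lik))
```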

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 392
582 A Simplified Distribution for Nonlinear Seas

Authors: M. A. Tayfun, M. A. Alkhalidi

Abstract:

The exact theoretical expression describing the probability distribution of nonlinear sea-surface elevations derived from the second-order narrowband model has a cumbersome form that requires numerical computation and is not well disposed to theoretical or practical applications. Here, the same narrowband model is re-examined to develop a simpler closed-form approximation suitable for theoretical and practical applications. The salient features of the approximate form are explored, and its relative validity is verified by comparisons with other readily available approximations and with oceanic data.
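
For orientation, the same second-order narrowband model does yield a simple closed form for the companion crest-height statistics (the Tayfun distribution), which suggests the kind of simplicity targeted here for the surface elevation itself: with $\zeta$ the crest height normalized by the surface r.m.s. and $\mu$ a steepness parameter,

$$\Pr\{\zeta > z\} = \exp\!\left[-\frac{\bigl(\sqrt{1+2\mu z}-1\bigr)^{2}}{2\mu^{2}}\right].$$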

Keywords: ocean waves, probability distributions, second-order nonlinearities, skewness coefficient, wave steepness

Procedia PDF Downloads 432
581 Solving the Transportation Problem for Warehouses and Dealers in Bangalore City

Authors: S. Aditya, K. T. Nideesh, N. Guruprasad

Abstract:

As a subclass of linear programming problems, the transportation problem is a classic Operations Research problem in which the objective is to determine a schedule for transporting goods from sources to destinations in a way that minimizes the shipping cost while satisfying supply and demand constraints. In this paper, we formulate the transportation problem for various warehouses and dealers situated in Bangalore city in order to reduce the transportation cost they currently incur. The problem is solved by obtaining an initial basic feasible solution through various methods and then proceeding to obtain the optimal cost.
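
As one example of obtaining an initial basic feasible solution, the north-west corner rule named in the keywords can be sketched as follows (supplies, demands, and costs below are illustrative; Vogel's approximation and the optimality stage are not shown):

```python
# Minimal sketch of the north-west corner rule for an initial basic
# feasible solution of the transportation problem.

def northwest_corner(supply, demand):
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])      # ship as much as possible
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:
            i += 1                         # warehouse exhausted
        else:
            j += 1                         # dealer satisfied
    return alloc

cost = [[4, 8, 8], [16, 24, 16], [8, 16, 24]]   # per-unit shipping costs
alloc = northwest_corner([76, 82, 77], [72, 102, 61])
total = sum(c * a for cr, ar in zip(cost, alloc) for c, a in zip(cr, ar))
print(alloc, "initial cost:", total)
```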

Keywords: NW method, optimum utilization, transportation problem, Vogel’s approximation method

Procedia PDF Downloads 437
580 Searching the Efficient Frontier for the Coherent Covering Location Problem

Authors: Felipe Azocar Simonet, Luis Acosta Espejo

Abstract:

In this article, we seek an approximation of the efficient frontier for the bi-objective coherent covering location problem with two levels of hierarchy (CCLP). We present the mathematical formulation of the model used. Supported and unsupported efficient solutions are obtained by solving the bi-objective combinatorial problem through the weights method using a Lagrangian heuristic. Subsequently, the results are validated through DEA analysis with the GEM index (global efficiency measurement).

Keywords: coherent covering location problem, efficient frontier, Lagrangian relaxation, data envelopment analysis

Procedia PDF Downloads 333
579 Dominant Correlation Effects in Atomic Spectra

Authors: Hubert Klar

Abstract:

High double excitation of two-electron atoms has been investigated using hyperspherical coordinates within a modified adiabatic expansion technique. This modification creates a novel fictitious force leading to a spontaneous exchange symmetry breaking at high double excitation. The Pauli principle must therefore be regarded as an approximation valid only at low excitation energy. Threshold electron scattering from high Rydberg states shows an unexpected time-reversal symmetry breaking. At the threshold for double escape, we discover a broad (few-eV) Cooper pair.

Keywords: correlation, resonances, threshold ionization, Cooper pair

Procedia PDF Downloads 348
578 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area) interact with each other, in the hope of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and produce significant predictions across a varied data set, and we apply these methods to improve the accuracy of the adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, applying game theory and machine learning algorithms to develop more efficient ways of reducing poaching; this research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from Ugandan rain forest park rangers. Second, we consider the effect of data imputation on both the performance of the various algorithms and the general accuracy of the method itself when applied to a dependent variable with a large number of missing observations. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
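
A minimal sketch of the boosting stage on synthetic stand-in data (the study trains on four years of imputed ranger data and tests on a non-imputed year; features and parameters below are illustrative); note that setting subsample below 1 gives the stochastic variant of gradient boosting:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Minimal sketch: stochastic gradient boosting on synthetic stand-in
# features (e.g. animal density, slope, distance to a road).

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                   subsample=0.8,   # stochastic variant
                                   random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```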

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 292
577 Block Implicit Adams Type Algorithms for Solution of First Order Differential Equation

Authors: Asabe Ahmad Tijani, Y. A. Yahaya

Abstract:

This paper considers the derivation of implicit Adams-Moulton-type methods with k = 4 and 5. We adopt the method of interpolation and collocation of a power series approximation to generate the continuous formula, which is evaluated at off-grid and some grid points within the step length to generate the proposed block schemes. The schemes are investigated and found to be consistent and zero-stable. Finally, the methods are tested with numerical experiments to ascertain their level of accuracy.
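
For reference, the standard k = 4 implicit Adams-Moulton formula that such an interpolation/collocation construction reproduces at the grid point is

$$y_{n+1} = y_n + \frac{h}{720}\,\bigl(251 f_{n+1} + 646 f_n - 264 f_{n-1} + 106 f_{n-2} - 19 f_{n-3}\bigr),$$

with $f_j = f(t_j, y_j)$; the block schemes evaluate the same continuous formula at several grid and off-grid points simultaneously.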

Keywords: Adam-Moulton Type (AMT), off-grid, block method, consistent and zero stable

Procedia PDF Downloads 482
576 Normalized Compression Distance Based Scene Alteration Analysis of a Video

Authors: Lakshay Kharbanda, Aabhas Chauhan

Abstract:

In this paper, an application of the Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using pixel-difference-percentage metrics.
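
A minimal sketch of the NCD computation with the two compressors named in the abstract (the byte strings below stand in for serialized video frames):

```python
import bz2
import lzma

# Minimal sketch of the Normalized Compression Distance:
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length under a chosen compressor.

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    cx, cy, cxy = len(compress(x)), len(compress(y)), len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

frame_a = b"\x00" * 5000 + b"\xff" * 5000      # stand-in frame bytes
frame_b = b"\x00" * 5000 + b"\xee" * 5000
print("bzip2:", ncd(frame_a, frame_b))
print("lzma :", ncd(frame_a, frame_b, lzma.compress))
# a scene alteration would be flagged when NCD exceeds a chosen threshold
```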

Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error

Procedia PDF Downloads 340
575 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting

Authors: Kourosh Modarresi

Abstract:

The Netflix problem brought the topic of "recommendation systems" into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results; their success rate is rarely above 5%. This work is based on the belief that the main challenge is to come up with "scalable personalization" models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization to the singular vectors. The results show far better matching for recommender systems when compared to state-of-the-art models in the industry.
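
As a rough sketch of the general idea, shrinking singular values LASSO-style regularizes a low-rank reconstruction; the uniform penalty below stands in for the paper's adaptive penalization scheme, which the abstract does not spell out:

```python
import numpy as np

# Minimal sketch of regularized low-rank matrix factorization on a dense,
# fully observed rating matrix: soft-threshold the singular values.

def soft_threshold_svd(R, lam):
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)        # LASSO-style shrinkage
    return (U * s_shrunk) @ Vt                 # regularized reconstruction

rng = np.random.default_rng(0)
truth = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 15))  # rank-3 signal
R = truth + rng.normal(0.0, 0.5, truth.shape)                # noisy ratings
R_hat = soft_threshold_svd(R, lam=5.0)
print("rank kept:", np.linalg.matrix_rank(R_hat))            # small rank
```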

Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation

Procedia PDF Downloads 455
574 Density Functional Theory (DFT) Study of the Structural Properties and Phase Transitions of ThC and ThN: LDA vs. GGA

Authors: Hamza Rekab Djabri, Salah Daoud

Abstract:

The present paper deals with the computation of the structural and electronic properties of the ThC and ThN compounds using density functional theory within the generalized gradient approximation (GGA) and the local density approximation (LDA). We employ the full-potential linear muffin-tin orbital (FP-LMTO) method as implemented in the LMTART code. We examine the structural parameters in eight different structures: NaCl (B1), CsCl (B2), zinc blende (B3), NiAs (B8), PbO (B10), wurtzite (B4), HCP (A3), and β-Sn (A5). The equilibrium lattice parameter, the bulk modulus, and its pressure derivative are presented for all calculated phases. The calculated ground-state properties are in good agreement with available experimental and theoretical results.

Keywords: DFT, GGA, LDA, structural properties, ThC, ThN

Procedia PDF Downloads 98
573 Effective Medium Approximations for Modeling Ellipsometric Responses from Zinc Dialkyldithiophosphates (ZDDP) Tribofilms Formed on Sliding Surfaces

Authors: Maria Miranda-Medina, Sara Salopek, Andras Vernes, Martin Jech

Abstract:

Sliding lubricated surfaces induce the formation of tribofilms that reduce friction and wear and prevent large-scale damage of the contacting parts. Engine oils and lubricants use anti-wear and antioxidant additives such as zinc dialkyldithiophosphate (ZDDP), from which protective tribofilms are formed by degradation. ZDDP tribofilms are described as a two-layer structure composed of inorganic polymer material: on the top surface, the long-chain polyphosphate is a zinc phosphate, and in the bulk, the short-chain polyphosphate is a mixed Fe/Zn phosphate with a gradient concentration. The polyphosphate chains partially adhere to the steel surface through a sulfide and work as anti-wear pads. In this contribution, ZDDP tribofilms formed on gray cast iron surfaces are studied. The tribofilms were generated in a reciprocating sliding tribometer with a piston ring-cylinder liner configuration. Fully formulated oil of SAE grade 5W-30 was used as the lubricant in two tests, at 40 Hz and 50 Hz. For the estimation of the tribofilm thicknesses, spectroscopic ellipsometry was used because of its high accuracy and non-destructive nature. Ellipsometry works on an optical principle whereby the change in polarisation of light reflected by the surface is associated with the refractive index of the surface material or with the thickness of the layer deposited on top. The ellipsometric responses of the tribofilms are modelled by an effective medium approximation (EMA), which includes the refractive indices of the involved materials, the homogeneity of the film, and its thickness. The material composition was obtained from X-ray photoelectron spectroscopy, which confirmed the presence of ZDDP, O, and C. From the EMA models it was concluded that the tribofilms formed at 40 Hz are thicker and more homogeneous than those formed at 50 Hz. In addition, the refractive indices of the constituent materials are mixed to derive an effective refractive index that describes the optical composition of the tribofilm and exhibits a maximum response in the UV range, characteristic of glassy semi-transparent films.
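
A widely used mixing rule for such analyses (whether it is the exact one adopted here is not stated in the abstract) is the Bruggeman effective medium equation, which determines the effective dielectric function $\varepsilon_{\mathrm{eff}}$ of a film from the dielectric functions $\varepsilon_i$ and volume fractions $f_i$ of its constituents:

$$\sum_i f_i\,\frac{\varepsilon_i-\varepsilon_{\mathrm{eff}}}{\varepsilon_i+2\,\varepsilon_{\mathrm{eff}}}=0,\qquad \sum_i f_i=1;$$

the film thickness and the $f_i$ are then fitted so that the modeled ellipsometric spectra match the measured ones.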

Keywords: effective medium approximation, reciprocating sliding tribometer, spectroscopic ellipsometry, zinc dialkyldithiophosphate

Procedia PDF Downloads 251
572 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, over space, or over time and space. A GP serves as a model of an underlying, not explicitly known process that is used to solve the problem at hand. The purpose of this paper is to present how to forecast wind power using GP. The Gaussian process method for forecasting is presented. To validate the presented approach, a simulation in the MATLAB environment is given.
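
A minimal sketch of GP regression applied to a toy power series follows (the paper's MATLAB simulation details are not given; the RBF kernel, hyperparameters, and data below are illustrative):

```python
import numpy as np

# Minimal sketch of GP regression: RBF kernel over time, posterior mean
# and standard deviation at the forecast points.

def rbf(a, b, length=2.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(t_train, y_train, t_test, noise=0.05):
    K = rbf(t_train, t_train) + noise * np.eye(len(t_train))
    Ks = rbf(t_train, t_test)
    Kss = rbf(t_test, t_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha                          # posterior mean
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)    # posterior covariance
    return mean, np.sqrt(np.diag(cov))

t = np.arange(24, dtype=float)                   # past 24 hours
y = 0.5 + 0.3 * np.sin(t / 3.0)                  # toy power series (MW)
mean, std = gp_predict(t, y, np.array([24.0, 25.0, 26.0]))
print(mean, std)                                 # forecast with uncertainty
```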

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 417
571 Maintenance Optimization for a Multi-Component System Using Factored Partially Observable Markov Decision Processes

Authors: Ipek Kivanc, Demet Ozgur-Unluakin

Abstract:

Over the past years, technological innovations and advancements have played an important role in the industrial world. Due to these improvements, the degree of complexity of systems has increased; hence, systems exhibit more uncertainty, which emerges from the increased complexity and results in higher costs. Coping with this situation is challenging, so implementing efficient planning of maintenance activities in such systems is becoming more essential. Partially observable Markov decision processes (POMDPs) are powerful tools for stochastic sequential decision problems under uncertainty. Although maintenance optimization in a dynamic environment can be modeled as such a sequential decision problem, POMDPs are not widely used for tackling maintenance problems; however, they can be well-suited frameworks for obtaining optimal maintenance policies. In the classical representation of the POMDP framework, the system is denoted by a single node with multiple states. The main drawback of this classical approach is that the state space grows exponentially with the number of state variables. Factored representations of POMDPs, on the other hand, simplify the complexity of the states by taking advantage of the factored structure already available in the nature of the problem. The main idea of factored POMDPs is that they can be compactly modeled through dynamic Bayesian networks (DBNs), which are graphical representations of stochastic processes, by exploiting the structure of this representation. This study aims to demonstrate how maintenance planning of dynamic systems can be modeled with factored POMDPs. An empirical maintenance planning problem for a dynamic system consisting of four partially observable components deteriorating in time is designed. To solve the empirical model, we resort to the Symbolic Perseus solver, one of the state-of-the-art factored POMDP solvers enabling approximate solutions. We also generate predefined policies based on corrective or proactive maintenance strategies. We execute the policies on the empirical problem for many replications and compare their performance under various scenarios. The results show that the policies computed from the POMDP model are superior to the others. Acknowledgment: This work is supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) under grant no. 117M587.
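
A minimal sketch of why the factored representation helps: instead of one belief over the joint state of all components (exponential in their number), a small belief vector is kept per component and updated through its own DBN slice. The matrices and the independence assumption below are illustrative, not the paper's model:

```python
import numpy as np

# Minimal sketch of a per-component belief update in a factored POMDP:
# each component carries its own transition and observation model.

T = np.array([[0.95, 0.05],                  # P(s'|s): ok -> ok/worn
              [0.00, 1.00]])                 # worn stays worn (no repair)
O = np.array([[0.9, 0.1],                    # P(obs|s'): noisy sensor
              [0.3, 0.7]])

def update_belief(b, obs):
    b_pred = b @ T                           # transition (prediction)
    b_new = b_pred * O[:, obs]               # weight by the observation
    return b_new / b_new.sum()               # normalize

beliefs = [np.array([1.0, 0.0]) for _ in range(4)]  # 4 components, all ok
observations = [0, 0, 1, 0]                  # per-component sensor reads
beliefs = [update_belief(b, o) for b, o in zip(beliefs, observations)]
print(np.round(beliefs, 3))
```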

Keywords: factored representation, maintenance, multi-component system, partially observable Markov decision processes

Procedia PDF Downloads 134
570 A Cohort and Empirical Based Multivariate Mortality Model

Authors: Jeffrey Tzu-Hao Tsai, Yi-Shan Wong

Abstract:

This article proposes a cohort-age-period (CAP) model to characterize multi-population mortality processes using cohort, age, and period variables. Distinct from factor-based Lee-Carter-type decomposition mortality models, this approach is empirically based and incorporates the age, period, and cohort variables into the equation system. The model not only provides useful intuition for explaining multivariate mortality change rates but also performs better in forecasting future patterns. Using US and UK mortality data and performing ten-year out-of-sample tests, our approach shows smaller mean square errors in both countries compared to the models in the literature.
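
For orientation, a generic age-period-cohort specification of this kind (the paper's exact CAP equation system may differ) writes the log central death rate as additive effects,

$$\ln m_{x,t} = \beta_x + \kappa_t + \gamma_{t-x} + \varepsilon_{x,t},$$

where $\beta_x$ is the age effect, $\kappa_t$ the period effect, and $\gamma_{t-x}$ the birth-cohort effect.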

Keywords: longevity risk, stochastic mortality model, multivariate mortality rate, risk management

Procedia PDF Downloads 53
569 An Experimental Investigation of the Cognitive Noise Influence on the Bistable Visual Perception

Authors: Alexander E. Hramov, Vadim V. Grubov, Alexey A. Koronovskii, Maria K. Kurovskaya, Anastasija E. Runnova

Abstract:

The perception of visual signals in the brain was among the first issues discussed in terms of multistability, which was introduced to provide mechanisms for information processing in biological neural systems. In this work, the influence of cognitive noise on the visual perception of multistable pictures has been investigated. The study includes an experiment with the bistable Necker cube illusion and the theoretical background explaining the obtained experimental results. In our experiments, Necker cubes with different wireframe contrasts were shown repeatedly to different people, and the probability of choosing one of the cube's projections was calculated for each picture. The Necker cube was placed in the middle of a computer screen as black lines on a white background. The contrast of the three middle lines centered on the left middle corner was used as one of the control parameters. Between two successive demonstrations of Necker cubes, another picture was shown to distract attention and to make the perception of the next Necker cube more independent of the previous one. Eleven subjects, male and female, aged 20 through 45, were studied. The choice of Necker cube projection was detected with the electroencephalograph recorder Encephalan-EEGR-19/26 (Medicom MTD). To treat the experimental results, we carried out a theoretical analysis using the simplest double-well potential model in the presence of noise, which leads to the Fokker-Planck equation for the probability density of the stochastic process. For the first time, an analytical solution for the probability of selecting one of the Necker cube projections for different values of wireframe contrast has been obtained. Furthermore, using the experimental measurements and the method of least squares, we calculated the value of the parameter corresponding to the cognitive noise of the person being studied. The range of cognitive-noise parameter values for the studied subjects turned out to be [0.08, 0.55]. It should be noted that the experimental results have good reproducibility: the same person studied again on another day produces very similar data with very close levels of cognitive noise. We found excellent agreement between the analytically deduced probability and the results obtained in the experiment. The good qualitative agreement between theoretical and experimental results indicates that even such a simple model allows simulating brain cognitive dynamics and estimating an important cognitive characteristic of the brain, such as its noise level.
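
In generic notation (the paper's exact potential and parametrization may differ), the perception variable $x(t)$ in such a double-well model obeys an overdamped Langevin equation whose stationary Fokker-Planck solution gives the occupation probabilities of the two wells:

$$\dot{x} = -U'(x) + \sqrt{2D}\,\xi(t), \qquad p_{\mathrm{st}}(x) \propto \exp\!\left[-\frac{U(x)}{D}\right],$$

so the probability of choosing one projection follows from integrating $p_{\mathrm{st}}$ over the corresponding well (the contrast parameter tilts the potential), and fitting this probability to the measured choice frequencies yields the noise intensity $D$ for each subject.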

Keywords: bistability, brain, noise, perception, stochastic processes

Procedia PDF Downloads 445