Search results for: V. N. Denisov
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3

3 Influence of Shear Deformation on Carbon Onions Stability under High Pressure

Authors: D. P. Evdokimov, A. N. Kirichenko, V. D. Blank, V. N. Denisov, B. A. Kulnitskiy

Abstract:

In this study, we investigated the stability of polyhedral carbon onions under the combined influence of shear deformation and high pressures of up to 43 GPa and above, by means of transmission electron microscopy (TEM) and Raman spectroscopy (RS). The onions were found to be stable at pressures up to 29 GPa and shear deformations of 40 degrees. When shear deformation was applied at pressures above 30 GPa, the carbon onions collapsed and formed amorphous carbon. At pressures above 43 GPa, diamond-like carbon (DLC) was obtained.
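
For illustration only, the regimes reported above can be encoded as a toy pressure/shear lookup in Python. The threshold values are taken directly from the abstract; behaviour in the gap between 29 and 30 GPa, or outside the reported conditions, is not described and is left as an explicit placeholder:

def onion_phase(pressure_gpa, shear_deg):
    """Toy phase map of the regimes reported in the abstract.

    Thresholds come from the abstract; intermediate regimes are
    not reported there and return a placeholder.
    """
    if pressure_gpa > 43:
        return "diamond-like carbon (DLC)"
    if pressure_gpa > 30 and shear_deg > 0:
        return "amorphous carbon (onions collapsed)"
    if pressure_gpa <= 29 and shear_deg <= 40:
        return "stable polyhedral onions"
    return "regime not reported in the abstract"

print(onion_phase(25, 40))  # stable polyhedral onions
print(onion_phase(35, 10))  # amorphous carbon (onions collapsed)
print(onion_phase(45, 10))  # diamond-like carbon (DLC)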

Keywords: carbon onions, Raman spectroscopy, transmission electron microscopy

Procedia PDF Downloads 409
2 Robust Adaptation to Background Noise in Multichannel C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev, Viktor M. Denisov

Abstract:

A robust sequential nonparametric method is proposed for real-time adaptation to background noise parameters. The background noise distribution is modelled as a Huber contamination mixture. The method is designed to operate as an adaptation unit inside the detection subsystem of an integrated multichannel monitoring system. The proposed method guarantees a given size of the non-asymptotic confidence set for the noise parameters, and its properties are rigorously proved. The algorithm has been successfully tested under the real operating conditions of a functioning C-OTDR monitoring system designed to monitor railways.
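
The abstract does not spell out the estimator itself; as an illustration, the following minimal Python sketch shows one standard robust route to location/scale estimation under a Huber contamination mixture (median and MAD over a sliding window). This is a generic sketch, not the authors' sequential procedure, and the window size is an arbitrary assumption:

import numpy as np

def robust_noise_params(samples):
    """Robust location/scale estimates for noise drawn from a Huber
    contamination mixture (1 - eps) * N(mu, sigma^2) + eps * H.
    Median and MAD have breakdown point 0.5, so the estimates stay
    reliable while the contamination fraction eps < 0.5."""
    x = np.asarray(samples, dtype=float)
    mu_hat = np.median(x)                    # robust location
    mad = np.median(np.abs(x - mu_hat))      # median absolute deviation
    sigma_hat = 1.4826 * mad                 # consistent for a Gaussian core
    return mu_hat, sigma_hat

class SlidingNoiseAdapter:
    """Toy adaptation unit: keeps a sliding window of recent background
    samples for one channel and re-estimates the noise parameters."""
    def __init__(self, window=1024):         # window size is an assumption
        self.window = window
        self.buf = []

    def update(self, sample):
        self.buf.append(float(sample))
        if len(self.buf) > self.window:
            self.buf.pop(0)
        return robust_noise_params(self.buf)

# Example: Gaussian background with ~9% impulsive contamination.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)
spikes = rng.normal(0.0, 20.0, 100)
mu, sigma = robust_noise_params(np.concatenate([clean, spikes]))
print(mu, sigma)  # sigma stays near 1 despite the large outliers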

Keywords: guaranteed estimation, multichannel monitoring systems, non-asymptotic confidence set, contamination mixture

Procedia PDF Downloads 394
1 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to a drop in model accuracy, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures such as MobileNet-v1, MobileNet-v2, and MNAS. Here we present a method that significantly optimizes training with quantization by introducing trained scale factors for the discretization thresholds, separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. This reduction of the training set size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
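
The abstract gives no implementation details; the NumPy sketch below (hypothetical names throughout) only illustrates the underlying operation of uniform quantization with a separate threshold per filter. The per-filter scale factors are the quantities the authors make trainable; here they are merely initialized from each filter's weight range, and the fine-tuning step itself is omitted:

import numpy as np

def quantize_per_filter(weights, scales, bits=8):
    """Uniform symmetric quantization with a separate threshold per filter.

    weights: array of shape (out_channels, ...), one filter per leading index
    scales:  per-filter thresholds (the trainable parameters in the paper);
             here they are plain inputs to the function
    Returns the dequantized ("fake-quantized") weights."""
    qmax = 2 ** (bits - 1) - 1                           # 127 for int8
    s = scales.reshape(-1, *([1] * (weights.ndim - 1)))  # broadcast per filter
    q = np.clip(np.round(weights / s * qmax), -qmax, qmax)
    return q * s / qmax

# Hypothetical example: 4 filters with very different dynamic ranges.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, (4, 64)) * np.array([[0.1], [1.0], [5.0], [0.01]])

# Initialize each threshold from its own filter's max |w|; the paper
# instead learns these scale factors on ~10% of ImageNet 2012.
scales = np.abs(w).max(axis=1)
w_q = quantize_per_filter(w, scales, bits=8)
print(np.abs(w - w_q).max())  # per-filter quantization error stays small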

Keywords: distillation, machine learning, neural networks, quantization

Procedia PDF Downloads 289