
Digital Library

of the European Council for Modelling and Simulation


Title:

Learned Parameterized Convolutional Approximation of Image Filters

Authors:

Olga Chaganova, Anton Grigoryev

Published in:


(2022). ECMS 2022, 36th Proceedings
Edited by: Ibrahim A. Hameed, Agus Hasan, Saleh Abdel-Afou Alaliyat, European Council for Modelling and Simulation.


DOI: http://doi.org/10.7148/2022

ISSN: 2522-2422 (ONLINE)

ISSN: 2522-2414 (PRINT)

ISSN: 2522-2430 (CD-ROM)


ISBN: 978-3-937436-77-7
ISBN: 978-3-937436-76-0 (CD)


Communications of the ECMS, Volume 36, Issue 1, June 2022,

Ålesund, Norway, May 30th - June 3rd, 2022


Citation format:

Olga Chaganova, Anton Grigoryev (2022). Learned Parameterized Convolutional Approximation of Image Filters, ECMS 2022 Proceedings, Edited by: Ibrahim A. Hameed, Agus Hasan, Saleh Abdel-Afou Alaliyat, European Council for Modelling and Simulation.

doi:10.7148/2022-0262

DOI:

https://doi.org/10.7148/2022-0262

Abstract:

Multilayer neural networks are considered universal approximators applicable to a wide range of problems. Fully connected networks have been studied in considerable theoretical and applied detail, while for convolutional networks the results are scarcer. In this paper, we tested the approximating capability of deep neural networks with typical architectures such as ConvNet, ResNet, and UNet as applied to classical image processing algorithms. The Canny edge detector and grayscale morphological dilation with a disk structuring element were selected as target algorithms. We found that even relatively lightweight neural models are able to approximate a filter with fixed parameters. Since classical algorithms are parameterized, we considered different approaches to parameterizing the neural networks and found that even the simplest of them, supplying the parameters as additional input image channels, works well for a low parameter count. We also measured the inference time of a neural network approximation and of a classical implementation of grayscale dilation with a disk structuring element. Starting from a certain radius, the neural network runs faster than the algorithm even on a single CPU core, without fine-tuning the architecture for performance, thus confirming the viability of ConvNets as a differentiable approximation technique for the optimization of methods built on classical algorithms.
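
To make the parameterization strategy concrete, below is a minimal sketch (not the authors' code; the network width and depth, radius range, and training setup are illustrative assumptions) of the simplest approach described in the abstract: the filter parameter, here the disk radius of a grayscale dilation, is broadcast as an extra input channel of a plain ConvNet, and the classical filter provides the training targets.

# Minimal sketch, assuming PyTorch and SciPy; all sizes and ranges are illustrative.
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import grey_dilation

def disk(radius: int) -> np.ndarray:
    """Boolean disk structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x**2 + y**2 <= radius**2

def dilate(img: np.ndarray, radius: int) -> np.ndarray:
    """Classical grayscale dilation with a disk structuring element (the target filter)."""
    return grey_dilation(img, footprint=disk(radius))

class ParameterizedConvNet(nn.Module):
    """Plain ConvNet that sees the parameter as a constant extra input channel."""
    def __init__(self, width: int = 32, depth: int = 5):
        super().__init__()
        layers = [nn.Conv2d(2, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, img: torch.Tensor, radius: torch.Tensor) -> torch.Tensor:
        # img: (B, 1, H, W); radius: (B,) -- broadcast to a full constant channel.
        # In practice one would normalize the radius channel to a convenient range.
        r = radius.view(-1, 1, 1, 1).expand(-1, 1, *img.shape[-2:])
        return self.net(torch.cat([img, r], dim=1))

# Toy training step: random images and radii, classical filter as ground truth.
model = ParameterizedConvNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
imgs = np.random.rand(8, 64, 64).astype(np.float32)
radii = np.random.randint(1, 8, size=8)
targets = np.stack([dilate(i, r) for i, r in zip(imgs, radii)])

x = torch.from_numpy(imgs).unsqueeze(1)
r = torch.from_numpy(radii.astype(np.float32))
y = torch.from_numpy(targets).unsqueeze(1)
loss = nn.functional.mse_loss(model(x, r), y)
opt.zero_grad()
loss.backward()
opt.step()

Because the whole pipeline is differentiable with respect to its inputs, including the radius channel, a network trained this way can stand in for the classical filter inside gradient-based optimization, which is the use case the abstract points to.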

Full text: