Why convolutions always use odd numbers as filter_size
Original source: https://datascience.stackexchange.com/questions/23183/why-convolutions-always-use-odd-numbers-as-filter-size
The convolution operation, simply put, is the sum of element-wise products of two matrices. So long as these two matrices agree in dimensions, there shouldn't be a problem, and so I can understand the motivation behind your query.
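To make that concrete, here is a minimal sketch in plain numpy of the response at a single anchor location; the image and kernel values are illustrative, not from the original answer:

```python
import numpy as np

# The response at one anchor is the sum of element-wise products between
# the kernel and the image patch it covers.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0           # 3x3 box (averaging) filter

patch = image[1:4, 1:4]                  # neighborhood of the anchor at (2, 2)
response = np.sum(patch * kernel)        # element-wise product, then sum
print(response)                          # 12.0 == mean of the 3x3 patch
```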
A.1. However, the intent of convolution is to encode the source data matrix (the entire image) in terms of a filter or kernel. More specifically, we are trying to encode the pixels in the neighborhood of the anchor/source pixel (see the figure in the original post linked above). Typically, we consider every pixel of the source image as an anchor/source pixel, but we are not constrained to do this. In fact, it is not uncommon to use a stride, wherein anchor/source pixels are separated by a specific number of pixels.
Okay, so what is the source pixel? It is the anchor point at which the kernel is centered, and around which we encode all the neighboring pixels, including the anchor/source pixel itself. Since the kernel is symmetrically shaped (though not necessarily symmetric in its values), there is an equal number (n) of pixels on every side (4-connectivity) of the anchor pixel. Therefore, whatever this number of pixels may be, the length of each side of our symmetrically shaped kernel is 2*n + 1 (n pixels on each side of the anchor, plus the anchor pixel itself), and therefore filters/kernels are always odd-sized.
What if we decided to break with 'tradition' and use asymmetric kernels? You'd suffer aliasing errors, so we don't do it. With an even-sided kernel there is no central pixel to anchor on, and we consider the pixel to be the smallest entity, i.e. there is no sub-pixel concept here.
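Here is a minimal sketch of the parity argument (plain Python, loop values illustrative): with an odd side k the anchor offset (k - 1)/2 is an integer and "same" padding splits evenly, whereas with an even side the total padding k - 1 cannot be divided symmetrically, which is the half-pixel misalignment described above:

```python
for k in (3, 5, 4, 6):
    n = (k - 1) / 2                      # would-be anchor offset from the kernel edge
    total_pad = k - 1                    # padding needed to keep output size == input size
    left, right = total_pad // 2, total_pad - total_pad // 2
    if n == int(n):
        print(f"k={k}: anchor at integer offset {int(n)}, pad {left}/{right} (symmetric)")
    else:
        print(f"k={k}: anchor offset {n} falls between pixels, pad {left}/{right} (asymmetric)")
```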
A.2. The boundary problem is dealt with using different approaches: some ignore it, some zero-pad it, some mirror-reflect it. If you are not going to compute an inverse operation, i.e. deconvolution, and are not interested in a perfect reconstruction of the original image, then you don't care about either the loss of information or the injection of noise due to the boundary problem. Typically, the pooling operation (average pooling or max pooling) will remove your boundary artifacts anyway. So feel free to ignore part of your 'input field'; your pooling operation will do so for you.
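As a quick illustration of those boundary strategies, here is a sketch using numpy.pad on a toy 1-D array (the values are made up):

```python
import numpy as np

row = np.array([1, 2, 3, 4])

print(np.pad(row, 1, mode="constant"))   # zero pad:       [0 1 2 3 4 0]
print(np.pad(row, 1, mode="reflect"))    # mirror reflect: [2 1 2 3 4 3]
# "Ignoring" the boundary means convolving in 'valid' mode instead,
# so the output simply shrinks by k - 1 along each dimension.
```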
--
Zen of convolution:
In the old-school signal-processing domain, when an input signal was convolved with or passed through a filter, there was no way of judging a priori which components of the convolved/filtered response were relevant/informative and which were not. Consequently, the aim was to preserve all of the signal components through these transformations.
These signal components are information, and some components are more informative than others. The only reason this matters is that we are interested in extracting higher-level information: information pertinent to some semantic classes. Accordingly, signal components that do not provide the information we are specifically interested in can be pruned out. Therefore, unlike the old-school dogmas about convolution/filtering, we are free to pool/prune the convolution response as we see fit. The way we do so is to rigorously remove all data components that are not contributing towards improving our statistical model.
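To show what this pruning looks like in practice, here is a minimal max-pooling sketch in plain numpy; the helper name max_pool_2x2 and the response values are illustrative, and it assumes the side lengths are divisible by the window size:

```python
import numpy as np

def max_pool_2x2(x):
    # Keep only the strongest response in each 2x2 window, pruning the rest.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

response = np.array([[1., 5., 2., 0.],
                     [3., 4., 1., 1.],
                     [0., 2., 9., 8.],
                     [1., 1., 7., 6.]])
print(max_pool_2x2(response))            # [[5. 2.]
                                         #  [2. 9.]]
```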