
Local Relation Networks for Image Recognition: Detailed Notes (in English)


Local Relation Network

Adapt the filter according to appearance affinity

Motivation

Meaningful and adaptive spatial aggregation

Humans have a remarkable ability to “see the infinite world with finite means” [26, 2].

  • I. Biederman. Recognition-by-components: a theory of human image understanding.
  • W. von Humboldt. On Language: On the Diversity of Human Language Construction and Its Influence on the Mental Development of the Human Species. Cambridge Texts in the History of Philosophy. Cambridge University Press, 1999/1836.

Hierarchical features -> different levels of features

Rather than modeling how elements can be meaningfully joined together, convolutional layers act as fixed templates.

1 filter -> 1 channel
It's a waste of channels.

local relation layer

Convolution Layers and its Evolution

self-enhancement

filter bubble

Given that we prefer to eschew negative experiences, it comes as no surprise that people avoid the immediate psychological discomfort from cognitive dissonance by simply not reading or listening to differing opinions.

This work

Some concepts


Algorithm

Local Relation Networks (LR-Nets)

Suppose

\(C = 24, m = 8, k = 7, C/m = 3\)

We observe no accuracy drop with up to 8 channels (the default) sharing the same aggregation weights (for a given k).
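
The channel-sharing arithmetic above can be sketched as follows (the numbers are the ones in these notes; the function names are mine, for illustration only):

```python
# Channel sharing in the aggregation step: with C channels and m channels
# per sharing group, only C/m distinct sets of k x k aggregation weights
# need to be computed at each spatial position.
def aggregation_groups(C: int, m: int, k: int):
    groups = C // m                 # number of distinct aggregation weight sets
    per_position = groups * k * k   # scalar weights computed per position
    return groups, per_position

print(aggregation_groups(24, 8, 7))  # (3, 147)
```

So with \(C = 24, m = 8, k = 7\), only 3 weight sets (147 scalars) are needed per position instead of 24 sets.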

\(H = 160, W = 160\)

In this architecture, the receptive field corresponds to the concept of the geometry prior.
Or rather, the learned geometry prior is applied over a neighborhood (similar to a receptive field).

k is the neighborhood size


The geometry prior is analogous to a conventional convolution filter.
However, the geometry prior is considered together with appearance composability, which makes the aggregation adapt to the input.
In other words, the aggregation weights are conditioned on the correlation between input pixels.

Design and Analysis

\[W = \text{SoftMax}(\text{GeoPrior}+\text{AppearanceComposability}) \]
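
A minimal NumPy sketch of this aggregation for one channel group. The scalar query/key maps and the multiplicative composability function are simplifying assumptions of mine (the paper projects queries and keys with 1x1 convolutions and studies several composability forms); the point is the softmax over geometry prior plus appearance term:

```python
import numpy as np

def local_relation_aggregate(value, query, key, geo_prior, k=7):
    """Sketch of one aggregation group of a local relation layer.

    value:     (H, W) feature channel to aggregate
    query/key: (H, W) scalar query/key maps (stand-ins for 1x1 projections)
    geo_prior: (k, k) learned geometry prior over relative offsets
    """
    H, W = value.shape
    pad = k // 2
    vp = np.pad(value, pad)   # zero-pad so every pixel has a k x k neighborhood
    kp = np.pad(key, pad)
    out = np.zeros_like(value)
    for i in range(H):
        for j in range(W):
            # appearance composability: center query against neighborhood keys
            app = query[i, j] * kp[i:i + k, j:j + k]      # (k, k), assumed product form
            logits = geo_prior + app
            w = np.exp(logits - logits.max())
            w /= w.sum()                                   # softmax over the k*k neighborhood
            out[i, j] = (w * vp[i:i + k, j:j + k]).sum()   # adaptive aggregation
    return out
```

With a zero query and zero geometry prior the weights reduce to a uniform average over the neighborhood, which makes the role of the two terms easy to see: the geometry prior biases by offset, the appearance term biases by content.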

They claim that the LR layer (i.e., the local relation layer) can utilize large kernels more effectively.

This difference may be due to the representation power of the convolution layer being bottlenecked by its fixed number of filters, so it gains no benefit from a larger kernel size.

Weight Sharing across different positions in an image limits the utilization of the representation power of large kernels.
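
An illustrative comparison of how parameter counts scale with kernel size, under the assumption that the LR layer's only k-dependent parameters are the geometry prior tables (the query/key projections are 1x1 and independent of k); channel numbers are the ones used in these notes:

```python
# Why larger kernels are cheap for the LR layer: convolution parameters grow
# as c_in * c_out * k^2, while the geometry prior needs only (C/m) * k^2 scalars.
def conv_params(c_in: int, c_out: int, k: int) -> int:
    # standard convolution: one k x k filter per (input, output) channel pair
    return c_in * c_out * k * k

def lr_geo_params(c: int, m: int, k: int) -> int:
    # geometry prior: one k x k table per aggregation group of m channels
    return (c // m) * k * k

for k in (3, 5, 7):
    print(k, conv_params(24, 24, k), lr_geo_params(24, 8, k))
```

At k = 7 the convolution needs 28,224 filter weights versus 147 geometry-prior scalars, which is one way to see why growing k is far less costly for the LR layer.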

While in previous works the query and key are vectors, in the local relation layer, we use scalars to represent them so that the computation and representation are lightweight.

What is that?

Source: https://www.cnblogs.com/zxyfrank/p/16027913.html