
python – Passing feature maps from a convolutional layer through a special function


In short:

How can I pass the feature maps from a convolutional layer defined in Keras to a special function (a region proposer), and then on to other Keras layers (e.g. a softmax classifier)?

In long:

I am trying to implement something like Fast R-CNN (not Faster R-CNN) in Keras, because I am trying to implement the custom architecture shown in the image below:

[image: from "TextMaps" by Tom Gogar]

Here is the code for the image above (excluding the candidates input):

from keras.layers import Input, Dense, Conv2D, ZeroPadding2D, MaxPooling2D, BatchNormalization, concatenate
from keras.activations import relu, sigmoid, linear
from keras.initializers import RandomUniform, Constant, TruncatedNormal, RandomNormal, Zeros

#  Network 1, Layer 1
screenshot = Input(shape=(1280, 1280, 3),  # RGB screenshot; a 0-channel shape would be invalid
                   dtype='float32',
                   name='screenshot')
conv1 = Conv2D(filters=96,
               kernel_size=11,
               strides=(4, 4),
               activation=relu,
               padding='same')(screenshot)
pooling1 = MaxPooling2D(pool_size=(3, 3),
                        strides=(2, 2),
                        padding='same')(conv1)
normalized1 = BatchNormalization()(pooling1)  # https://stats.stackexchange.com/questions/145768/importance-of-local-response-normalization-in-cnn

# Network 1, Layer 2

conv2 = Conv2D(filters=256,
               kernel_size=5,
               activation=relu,
               padding='same')(normalized1)
normalized2 = BatchNormalization()(conv2)
conv3 = Conv2D(filters=384,
               kernel_size=3,
               activation=relu,
               padding='same',
               kernel_initializer=RandomNormal(stddev=0.01),
               bias_initializer=Constant(value=0.1))(normalized2)

# Network 2, Layer 1

textmaps = Input(shape=(160, 160, 128),
                 dtype='float32',
                 name='textmaps')
txt_conv1 = Conv2D(filters=48,
                   kernel_size=1,
                   activation=relu,
                   padding='same',
                   kernel_initializer=RandomNormal(stddev=0.01),
                   bias_initializer=Constant(value=0.1))(textmaps)

# (Network 1 + Network 2), Layer 1

merged = concatenate([conv3, txt_conv1], axis=-1)
merged_padding = ZeroPadding2D(padding=2, data_format=None)(merged)
merged_conv = Conv2D(filters=96,
                     kernel_size=5,
                     activation=relu, padding='same',
                     kernel_initializer=RandomNormal(stddev=0.01),
                     bias_initializer=Constant(value=0.1))(merged_padding)
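As a quick sanity check (not part of the original post), the spatial sizes these layers produce can be traced by hand with the `'same'`-padding rule `out = ceil(in / stride)`; this is where the (164, 164, 96) feature map discussed below comes from:

```python
import math

def same_out(n, stride=1):
    # output length along one spatial dim for a 'same'-padded conv/pool
    return math.ceil(n / stride)

h = 1280            # screenshot input is 1280 x 1280
h = same_out(h, 4)  # conv1, stride 4          -> 320
h = same_out(h, 2)  # pooling1, stride 2       -> 160
h = same_out(h)     # conv2 / conv3, stride 1  -> 160 (matches the 160x160 textmaps branch)
h = h + 2 * 2       # ZeroPadding2D(padding=2) -> 164
h = same_out(h)     # merged_conv, stride 1    -> 164
print(h)            # 164, so merged_conv outputs a (164, 164, 96) feature map
```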

As shown above, the last step of the network I am trying to build is ROI Pooling, which is done in R-CNN like this:

[image: from the main Fast R-CNN publication on arXiv]

Now, there is code for an ROI Pooling layer in Keras, but I need to pass region proposals to that layer. As you may already know, region proposals are usually generated by an algorithm called Selective Search, which is already implemented in Python.

The problem:

Selective Search can easily take a normal image and give us region proposals like these:

[image: from the selective search GitHub page]

The problem now is that, instead of an image, I have to pass a feature map from the layer merged_conv, shown in the code above:

merged_conv = Conv2D(filters=96,
                     kernel_size=5,
                     activation=relu, padding='same',
                     kernel_initializer=RandomNormal(stddev=0.01),
                     bias_initializer=Constant(value=0.1))(merged_padding)

The layer above is just a symbolic reference to a shape, so obviously it does not work with selectivesearch:

>>> import selectivesearch
>>> selectivesearch.selective_search(merged_conv, scale=500, sigma=0.9, min_size=10)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/somepath/selectivesearch.py", line 262, in selective_search
    assert im_orig.shape[2] == 3, "3ch image is expected"
AssertionError: 3ch image is expected

I think I should do something like this:

from keras import Model
import numpy as np
import cv2
import selectivesearch

img = cv2.imread('someimage.jpg')
img = img.reshape(-1, 1280, 1280, 3)
textmaps_arr = np.ones((1, 160, 160, 128))  # just for example; must match the (160, 160, 128) Input
model = Model(inputs=[screenshot, textmaps], outputs=merged_conv)  # 'textmaps' is the Input layer, so don't shadow it
feature_maps = model.predict([img, textmaps_arr])[0]  # shape (164, 164, 96); no compile() needed just to predict
feature_map_1 = feature_maps[:, :, 0]  # a single (164, 164) channel
img_lbl, regions = selectivesearch.selective_search(feature_map_1, scale=500, sigma=0.9, min_size=10)

But then what if I want to add, let's say, a softmax classifier that accepts the "regions" variable? (By the way, I know Selective Search has other problems besides expecting a 3-channel input, but that is irrelevant to the question.)

The question:

Region proposal (using Selective Search) is an essential part of the network; how can I modify it so that it takes the feature maps (activations) from the convolutional layer merged_conv?

Maybe I should create my own Keras layer?

Solution:

As far as I understand, Selective Search takes an input and returns different (H, W) patches. So in your case, where the feature map has dims (164, 164, 96), you can take a (164, 164) slice as the input to Selective Search, and it will give you n patches, e.g. (H1, W1), (H2, W2), ... . You can then attach all the channels as-is to each patch, so they become dims (H1, W1, 96), (H2, W2, 96), ... .
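The idea above can be sketched in plain numpy. The (row, col, height, width) tuples below are hypothetical stand-ins for the rectangles a proposer like selectivesearch would return, and the zero-filled array stands in for the real merged_conv activations:

```python
import numpy as np

# Stand-in feature map with the shape from the question: (164, 164, 96).
feature_map = np.zeros((164, 164, 96), dtype='float32')

# Hypothetical proposals as (row, col, height, width) -- in practice these
# would come from running the proposer on a single (164, 164) slice.
proposals = [(10, 20, 32, 48), (60, 5, 16, 16)]

# Attach all 96 channels as-is to each proposed (H, W) patch.
patches = [feature_map[r:r + h, c:c + w, :] for (r, c, h, w) in proposals]
print([p.shape for p in patches])  # [(32, 48, 96), (16, 16, 96)]
```

Each patch keeps the full channel depth, so it can be fed to an ROI Pooling layer and then a classifier.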

Note: doing it this way also has a downside. The Selective Search algorithm uses a strategy of breaking the image into a grid and then re-joining those patches based on an object heat map. You will not be able to do that on a feature map. But you could use a random search approach instead, which might be useful.

Tags: python, tensorflow, deep-learning, keras, conv-neural-network
Source: https://codeday.me/bug/20190705/1387232.html