
python - Implement SeparableConv2D in Pytorch

Published on 2020-12-05 05:47:40

Main goal

A PyTorch equivalent of SeparableConv2D with padding = 'same':

from tensorflow.keras.layers import SeparableConv2D
x = SeparableConv2D(64, (1, 16), use_bias = False, padding = 'same')(x)

What is the PyTorch equivalent of SeparableConv2D?

This source says:

If groups = nInputPlane and kernel = (K, 1) (and it is preceded by a Conv2d layer with groups = 1 and kernel = (1, K)), then it is separable.
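
Read literally, that construction is a regular (1, K) convolution followed by a per-channel (grouped) (K, 1) convolution. A minimal sketch of it, with the channel count and K picked arbitrarily for illustration:

import torch
import torch.nn as nn

n_input_plane, K = 32, 3  # assumed values, only for illustration

# a plain Conv2d with groups=1 and kernel (1, K)
conv_1xK = nn.Conv2d(n_input_plane, n_input_plane, kernel_size=(1, K),
                     padding=(0, K // 2), bias=False)
# followed by a Conv2d with groups=nInputPlane and kernel (K, 1),
# i.e. each channel is filtered independently
conv_Kx1 = nn.Conv2d(n_input_plane, n_input_plane, kernel_size=(K, 1),
                     padding=(K // 2, 0), groups=n_input_plane, bias=False)

x = torch.randn(1, n_input_plane, 28, 28)
print(conv_Kx1(conv_1xK(x)).shape)  # torch.Size([1, 32, 28, 28])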

While this source says:

The core idea is to break a full convolution operation down into a two-step computation: a depthwise convolution followed by a pointwise convolution.

Here is my attempt:

class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False):
        super(SeparableConv2d, self).__init__()
        self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias)
        self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias)

    def forward(self, x):
        out = self.depthwise(x)
        out = self.pointwise(out)
        return out

Is this correct? Is it equivalent to tensorflow.keras.layers.SeparableConv2D?

And what about padding = 'same'?

How do I make sure my input and output sizes stay the same while doing this?

My attempt:

x = F.pad(x, (8, 7, 0, 0), )

Since the kernel size is (1, 16), I added left and right padding of 8 and 7 respectively. Is this the correct (and best) way to achieve padding = 'same'? And how can I put it inside my SeparableConv2d class, so that the padding is computed on the fly from the dimensions of the given input data?
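
For reference, the arithmetic here is that a stride-1 convolution with kernel width K needs K - 1 pixels of total padding to keep the width unchanged, and for an even K that split is necessarily asymmetric. Below is a minimal sketch of a helper that computes this on the fly; the name same_pad and the tensor shape are assumptions, not from the original post, and note that TF/Keras' 'same' puts the extra pixel on the right, i.e. (7, 8) rather than (8, 7) for K = 16.

import torch
import torch.nn.functional as F

def same_pad(x, kernel_size):
    # Hypothetical helper: pad a (N, C, H, W) tensor so that a stride-1
    # convolution with this kernel keeps H and W unchanged.
    kh, kw = kernel_size
    pad_h, pad_w = kh - 1, kw - 1
    # F.pad takes (left, right, top, bottom) for a 4-D input;
    # TF/Keras 'same' puts the extra pixel on the right/bottom for even kernels.
    return F.pad(x, (pad_w // 2, pad_w - pad_w // 2,
                     pad_h // 2, pad_h - pad_h // 2))

x = torch.randn(1, 32, 8, 512)  # made-up shape, only for illustration
print(same_pad(x, (1, 16)).shape)  # torch.Size([1, 32, 8, 527])
# a (1, 16) convolution at stride 1 then returns width 527 - 16 + 1 = 512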

Putting it all together

class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, depth, kernel_size, bias=False):
        super(SeparableConv2d, self).__init__()
        self.depthwise = nn.Conv2d(in_channels, out_channels*depth, kernel_size=kernel_size, groups=in_channels, bias=bias)
        self.pointwise = nn.Conv2d(out_channels*depth, out_channels, kernel_size=1, bias=bias)

    def forward(self, x):
        out = self.depthwise(x)
        out = self.pointwise(out)
        return out


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.separable_conv = SeparableConv2d(
            in_channels=32, 
            out_channels=64, 
            depth=1, 
            kernel_size=(1,16)
        )
        
    def forward(self, x):
        x = F.pad(x, (8, 7, 0, 0), )
        x = self.separable_conv(x)
        return x

Is there anything wrong with this code?

Questioner: Jingles
Poe Dator 2020-12-07 16:52:52

The definitions in the links are broadly in agreement. The best one is in the article:

  • "Depthwise" (not a very intuitive name, since no depth is involved) is a set of regular 2d convolutions applied to the layers (channels) of the data separately.
  • "Pointwise" is the same as a Conv2d with a 1x1 kernel.

I would suggest a few corrections to your SeparableConv2d class:

  • There is no need for the depth parameter; it is the same thing as out_channels.
  • I set padding to 1 to keep the output size the same for kernel=(3,3). If the kernel size is different, adjust the padding accordingly, following the same rule as for a regular Conv2d (see the short sketch after this list).
  • Your example class Net() is no longer needed; the padding is done inside SeparableConv2d.
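
As a quick illustration of that padding rule (an assumed example, not part of the answer's code): for odd kernel sizes, padding = kernel_size // 2 per dimension keeps the spatial size unchanged at stride 1.

import torch
import torch.nn as nn

kh, kw = 3, 7  # any odd kernel sizes
dw = nn.Conv2d(32, 32, kernel_size=(kh, kw),
               padding=(kh // 2, kw // 2),  # (1, 3) here
               groups=32, bias=False)       # depthwise: one filter per channel
x = torch.randn(1, 32, 28, 28)
print(dw(x).shape)  # torch.Size([1, 32, 28, 28])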

Here is the updated code, which should be close to the tf.keras.layers.SeparableConv2D implementation:

class SeparableConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, bias=False):
        super(SeparableConv2d, self).__init__()
        # depthwise: one filter per input channel (groups=in_channels);
        # padding=1 keeps the spatial size for a (3, 3) kernel at stride 1
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=kernel_size,
                                   groups=in_channels, bias=bias, padding=1)
        # pointwise: a 1x1 convolution that mixes channels and maps to out_channels
        self.pointwise = nn.Conv2d(in_channels, out_channels,
                                   kernel_size=1, bias=bias)

    def forward(self, x):
        out = self.depthwise(x)
        out = self.pointwise(out)
        return out
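
A quick shape and parameter-count check of the class above (the input shape and channel counts are assumed values, only for illustration):

import torch
import torch.nn as nn

layer = SeparableConv2d(in_channels=32, out_channels=64, kernel_size=3)
x = torch.randn(1, 32, 28, 28)
print(layer(x).shape)  # torch.Size([1, 64, 28, 28])

# compare the parameter count with a regular Conv2d of the same shape
regular = nn.Conv2d(32, 64, kernel_size=3, padding=1, bias=False)
print(sum(p.numel() for p in layer.parameters()))    # 32*3*3 + 64*32 = 2336
print(sum(p.numel() for p in regular.parameters()))  # 64*32*3*3 = 18432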