A Survey of ResNet Improvements

1. Improved ResNet Variants

ResNet (Residual Network) is a deep neural network architecture that achieved great success on the ImageNet dataset. However, residual networks demand substantial computational resources and still have limitations, so researchers have proposed a number of improvements. One such improved version of ResNet modifies the residual block itself.

The improved ResNet uses depthwise separable convolutions, which factor a convolution into two steps: a depthwise convolution followed by a pointwise (1x1) convolution. This reduces both the parameter count and the computational cost while retaining good representational power.
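The savings can be checked directly by counting parameters of a standard convolution versus its depthwise-separable factorization (a minimal sketch; the 64→128 channel sizes are illustrative, not taken from the model below):

```python
import torch.nn as nn

# Standard 3x3 convolution: 64 -> 128 channels
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)

# Depthwise separable equivalent: per-channel 3x3, then 1x1 channel mixing
depthwise = nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64, bias=False)
pointwise = nn.Conv2d(64, 128, kernel_size=1, bias=False)

def n_params(*modules):
    return sum(p.numel() for m in modules for p in m.parameters())

print(n_params(standard))              # 64 * 128 * 3 * 3 = 73728
print(n_params(depthwise, pointwise))  # 64 * 3 * 3 + 64 * 128 = 8768
```

Here the factorized version needs roughly 8x fewer parameters; the ratio grows with the number of output channels.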

An implementation of the improved ResNet follows:


import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: a depthwise conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, bias=False):
        super(SeparableConv2d, self).__init__()
        
        # depthwise: one filter per input channel (groups=in_channels)
        self.conv1 = nn.Conv2d(in_channels, in_channels, kernel_size, stride, padding, dilation, groups=in_channels, bias=bias)
        # pointwise: 1x1 convolution mixes channels and maps to out_channels
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0, dilation=1, groups=1, bias=bias)

    def forward(self, x):
        x = self.conv1(x)
        x = self.pointwise(x)
        return x

class BasicBlock(nn.Module):
    def __init__(self, in_planes, out_planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.stride = stride
        self.downsample = downsample
        
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_planes),
            nn.ReLU(inplace=True),
            SeparableConv2d(out_planes, out_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_planes),
        )
        self.conv2 = nn.Sequential(
            SeparableConv2d(out_planes, out_planes, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(out_planes),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.conv2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)
        
        return out

class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes):
        super(ResNet, self).__init__()
        
        self.in_planes = 64
        
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.layer1 = self._make_layer(block, 64, layers[0], stride=1)
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.in_planes != planes:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_planes, planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes),
            )
        
        layers = []
        layers.append(block(self.in_planes, planes, stride, downsample))
        self.in_planes = planes
        
        for _ in range(1, blocks):
            layers.append(block(self.in_planes, planes))

        return nn.Sequential(*layers)
    
    def forward(self, x):
        x = self.conv1(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x

2. ResNet-Based Improvements to CRNN

CRNN (Convolutional Recurrent Neural Network) is a deep neural network model for speech recognition whose main structure combines convolutional and recurrent layers. For CRNN, ResNet-inspired improvements mainly introduce feature structures from both the spatial and temporal domains, strengthening the model's ability to model speech signals and represent their features.

Concretely, this improvement replaces the 1x1 convolutions in the residual blocks with convolutions that incorporate polynomial functions, to strengthen spatial feature extraction; it also introduces a "multi-channel gated one-dimensional convolution" structure to bring temporal information into the model.

The following is a code implementation of the ResNet-improved CRNN:


import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=(3,1), stride=(1,1), padding=(1,0)):
        super(MultiBranchBlock, self).__init__()

        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        
    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        
        return x

class ChannelGate(nn.Module):
    def __init__(self, gate_channels, reduction_ratio=16, pool_types=['avg', 'max']):
        super(ChannelGate, self).__init__()
        
        self.gate_channels = gate_channels
        self.mlp = nn.Sequential(
            nn.Linear(gate_channels, gate_channels // reduction_ratio),
            nn.ReLU(),
            nn.Linear(gate_channels // reduction_ratio, gate_channels)
        )   
        
        self.pool_types = pool_types
        
    def forward(self, x):
        # Sum the attention logits from each pooling branch, then gate the
        # input channels. Returning the gated tensor (rather than the raw
        # attention values) lets this module compose inside nn.Sequential.
        channel_att_sum = None
        for pool_type in self.pool_types:
            if pool_type == 'avg':
                pooled = torch.mean(x, dim=(2, 3))
            elif pool_type == 'max':
                pooled = torch.amax(x, dim=(2, 3))
            channel_att = self.mlp(pooled)
            if channel_att_sum is None:
                channel_att_sum = channel_att
            else:
                channel_att_sum = channel_att_sum + channel_att
        
        scale = torch.sigmoid(channel_att_sum).view(x.size(0), x.size(1), 1, 1)
        return x * scale

class CRNN(nn.Module):
    def __init__(self, in_channels, num_classes):
        super(CRNN, self).__init__()

        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=(41,11), stride=(2,2), padding=(20,5), bias=False),
            nn.BatchNorm2d(32),
            nn.Hardtanh(min_val=0, max_val=20, inplace=True),

            nn.Conv2d(32, 32, kernel_size=(21,11), stride=(2,1), padding=(10,5), bias=False),
            nn.BatchNorm2d(32),
            nn.Hardtanh(min_val=0, max_val=20, inplace=True),
            
            nn.Conv2d(32, 32, kernel_size=(21,11), stride=(2,1), padding=(10,5), bias=False),
            nn.BatchNorm2d(32),
            nn.Hardtanh(min_val=0, max_val=20, inplace=True),
        )
        
        # five identical gated convolutional blocks
        self.resblocks = nn.Sequential(*[
            nn.Sequential(
                ChannelGate(32, reduction_ratio=16),
                nn.Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False),
                nn.BatchNorm2d(32),
                nn.ReLU(inplace=True),
            )
            for _ in range(5)
        ])
        
        self.rnn = nn.LSTM(input_size=32, hidden_size=1024, num_layers=5, dropout=0.2, bidirectional=True)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        x = self.cnn(x)
        x = self.resblocks(x)
        # average over the frequency axis so the per-frame feature size
        # matches the LSTM's input_size of 32 channels
        x = x.mean(dim=2)         # (N, C, T)
        x = x.permute(2, 0, 1)    # (T, N, C)
        x, _ = self.rnn(x)
        x = self.fc(x.mean(0))    # pool over time, then classify
        return x

3. Other Methods for Improving ResNet

Beyond architectural changes, several other techniques can improve ResNet's performance.

One important approach is stronger data augmentation, such as Mixup and CutMix. These methods make the model more robust and reduce overfitting.
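Mixup can be sketched in a few lines (a minimal illustration; the `alpha` value and the one-hot label encoding are assumptions, and a real training loop would typically mix the losses of the two labels instead of materializing soft label vectors):

```python
import torch

def mixup(x, y, alpha=0.2):
    # Draw a mixing coefficient from Beta(alpha, alpha) and blend each
    # example with a randomly chosen partner from the same batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    idx = torch.randperm(x.size(0))
    mixed_x = lam * x + (1 - lam) * x[idx]
    mixed_y = lam * y + (1 - lam) * y[idx]
    return mixed_x, mixed_y

x = torch.randn(4, 3, 8, 8)
y = torch.eye(10)[torch.randint(0, 10, (4,))]  # one-hot labels
mixed_x, mixed_y = mixup(x, y)
```

The mixed labels remain valid probability distributions (each row still sums to 1), so the usual cross-entropy objective applies unchanged.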

Another approach is to pretrain ResNet with self-supervised learning, which can extract better features from unlabeled data and thus improve downstream performance. Semi-supervised learning, which trains on a mix of labeled and unlabeled data, can likewise boost performance.

4. Improving ResNet Is Hard

Although many improvement methods exist, improving ResNet in practice is very difficult. First, deep network architectures are complex and hard to analyze; second, improving ResNet requires substantial compute and data, and the necessary experiments are time-consuming.

Researchers therefore invest considerable time and effort in both theory and practice, and must keep exploring new methods and techniques in search of more efficient and effective ResNet improvements.

5. Improved ResNet-Style Networks

Depth and width are two key factors in ResNet's performance. Increasing them improves accuracy up to a point, but also raises computational cost and training difficulty.

Researchers have therefore proposed improved ResNet-style networks. One is ResNeXt, which widens the network by using multiple parallel paths ("cardinality") inside each residual block.
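The parallel paths of a ResNeXt block can be implemented compactly as a single grouped convolution; the sketch below illustrates the idea (the channel sizes and cardinality are illustrative, not the original paper's exact configuration):

```python
import torch
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """ResNeXt-style residual block: the `cardinality` parallel paths are
    expressed as one 3x3 convolution with that many groups."""
    def __init__(self, channels, cardinality=32, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            # grouped conv = cardinality independent paths over the bottleneck
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))

block = ResNeXtBlock(128)
y = block(torch.randn(1, 128, 8, 8))
print(y.shape)  # torch.Size([1, 128, 8, 8])
```

Because the block preserves its input shape, it can be stacked or dropped into an existing residual stage without other changes.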

Another approach is DenseNet, whose architecture connects each layer to all subsequent layers by concatenating feature maps, encouraging feature reuse and easing gradient flow.
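The dense connectivity pattern can be sketched as follows (a minimal illustration; the `growth_rate` and layer count are arbitrary choices, not DenseNet's published configuration):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all earlier feature maps
    and contributes growth_rate new channels."""
    def __init__(self, in_channels, growth_rate=12, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        for layer in self.layers:
            # concatenate the new features onto everything seen so far
            x = torch.cat([x, layer(x)], dim=1)
        return x

block = DenseBlock(16)
y = block(torch.randn(1, 16, 8, 8))
print(y.shape)  # torch.Size([1, 52, 8, 8]) -- 16 + 3 * 12 channels
```

The output channel count grows linearly with depth (in_channels + n_layers * growth_rate), which is why DenseNet inserts transition layers between blocks to compress channels.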