GoogLeNet Explained

1. Overview

GoogLeNet is a deep neural network released by Google in 2014 that performed outstandingly on the ImageNet image-recognition task. It is the first generation of the Inception family of models and, thanks to its efficient use of parameters and computation, was the state of the art at the time.

Rather than the flat, sequential layering of a typical CNN, GoogLeNet is built from parallel Inception modules. An Inception module applies 1x1, 3x3, and 5x5 convolution kernels, sampling features at different receptive-field sizes, and then concatenates the resulting feature maps along the channel axis to obtain a richer feature representation.

2. The Inception Module

The core idea of the Inception module is parallel computation: feature maps produced by convolution kernels of different sizes (and therefore different receptive fields) are concatenated into a single feature map, providing higher-dimensional feature information.

Example code:

from keras.layers import Conv2D, MaxPooling2D, concatenate

def InceptionModule(x, nb_filter):
    # Branch 1: 1x1 convolution
    branch1x1 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)

    # Branch 2: 1x1 convolution for dimensionality reduction, then 3x3 convolution
    branch3x3 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch3x3 = Conv2D(nb_filter, (3, 3), padding='same', activation='relu')(branch3x3)

    # Branch 3: 1x1 convolution for dimensionality reduction, then 5x5 convolution
    branch5x5 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch5x5 = Conv2D(nb_filter, (5, 5), padding='same', activation='relu')(branch5x5)

    # Branch 4: 3x3 max pooling followed by 1x1 convolution
    branch_MaxPooling = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(x)
    branch_MaxPooling = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(branch_MaxPooling)

    # Concatenate the four branches along the channel axis
    branches = [branch1x1, branch3x3, branch5x5, branch_MaxPooling]
    out = concatenate(branches, axis=-1)
    return out
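As a quick sanity check, the sketch below wraps the module defined above in a small Model so the output shape can be inspected; the input size (28x28x192) is chosen only for illustration. With nb_filter=64, the four branches concatenate to 4 * 64 = 256 channels:

from keras.layers import Input
from keras.models import Model

# Hypothetical 28x28x192 input, used only to inspect the module's output shape
test_input = Input(shape=(28, 28, 192))
test_output = InceptionModule(test_input, 64)
test_model = Model(inputs=test_input, outputs=test_output)
print(test_model.output_shape)  # (None, 28, 28, 256): four branches of 64 channels each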

3. The Complete Architecture

GoogLeNet is 22 layers deep (counting only layers with parameters). The network begins with conventional convolution, pooling, and normalization layers, followed by a stack of nine Inception modules. The classification head uses global average pooling (implemented below as 7x7 average pooling), dropout, and a softmax layer, which makes the extracted features more robust. A complete, slightly simplified Keras implementation follows; for brevity, every branch of each Inception module here uses the same number of filters, whereas the original paper specifies per-branch filter counts:

from keras.layers import Input, Dense, Dropout, Flatten, concatenate
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from keras.models import Model

def InceptionModule(x, nb_filter):
    branch1x1 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)

    branch3x3 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch3x3 = Conv2D(nb_filter, (3, 3), padding='same', activation='relu')(branch3x3)

    branch5x5 = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(x)
    branch5x5 = Conv2D(nb_filter, (5, 5), padding='same', activation='relu')(branch5x5)

    branch_MaxPooling = MaxPooling2D(pool_size=(3, 3), strides=(1, 1), padding='same')(x)
    branch_MaxPooling = Conv2D(nb_filter, (1, 1), padding='same', activation='relu')(branch_MaxPooling)

    branches = [branch1x1, branch3x3, branch5x5, branch_MaxPooling]
    out = concatenate(branches, axis=-1)
    return out

# Input: 224x224 RGB images
inputs = Input(shape=(224, 224, 3))

# Stem: conventional convolution and max-pooling layers
x = Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(inputs)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

x = Conv2D(64, (1, 1), strides=(1, 1), padding='same', activation='relu')(x)
x = Conv2D(192, (3, 3), strides=(1, 1), padding='same', activation='relu')(x)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# First group of Inception modules
x = InceptionModule(x, 64)
x = InceptionModule(x, 120)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# Second group of Inception modules
x = InceptionModule(x, 128)
x = InceptionModule(x, 128)
x = InceptionModule(x, 128)
x = InceptionModule(x, 132)
x = InceptionModule(x, 208)
x = MaxPooling2D(pool_size=(3, 3), strides=(2, 2), padding='same')(x)

# Final group of Inception modules, then the classification head:
# 7x7 average pooling (equivalent to global average pooling at this resolution),
# dropout, and a 1000-way softmax
x = InceptionModule(x, 208)
x = InceptionModule(x, 256)
x = AveragePooling2D(pool_size=(7, 7), strides=(7, 7), padding='same')(x)

x = Dropout(0.4)(x)
x = Flatten()(x)
outputs = Dense(1000, activation='softmax')(x)

model = Model(inputs=inputs, outputs=outputs)
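Once the model is built, a quick structural check is useful; note that the exact parameter counts reflect the simplified filter choices above rather than the paper's original configuration:

model.summary()           # prints every layer with its output shape and parameter count
print(len(model.layers))  # each Inception module expands into several Keras layers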

4. Transfer Learning

In practice, given limited data and compute, an existing pre-trained model can be fine-tuned instead of training from scratch; this is transfer learning. The first N layers of the pre-trained model are frozen and only the last few layers are fine-tuned, which speeds up training and improves accuracy. The example below uses InceptionV3, a later model in the Inception family that ships with Keras:

from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model
from keras import backend as K

K.clear_session()

base_model = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(1000, activation='softmax')(x)

# Freeze the first 249 layers and train only the layers after them
for layer in base_model.layers[:249]:
    layer.trainable = False

model = Model(inputs=base_model.input, outputs=predictions)
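To actually run the fine-tuning, the model still needs to be compiled and fit. A minimal sketch, assuming hypothetical training arrays x_train and y_train with 1000-way one-hot labels:

from keras.optimizers import SGD

# Compile with SGD (see the next section); only the unfrozen layers are updated.
# In newer Keras versions the argument is learning_rate rather than lr.
model.compile(optimizer=SGD(lr=0.001, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# x_train: (N, 299, 299, 3) images, y_train: (N, 1000) one-hot labels (hypothetical data)
model.fit(x_train, y_train, batch_size=32, epochs=5)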

5. Optimization and Preprocessing

When training GoogLeNet, the commonly used optimizer is SGD (stochastic gradient descent), which helps the model parameters converge well. For preprocessing, the ImageNet images are split into the ILSVRC2014-train and ILSVRC2014-val datasets. During training, images from ILSVRC2014-train are augmented with operations such as horizontal flips and random crops to improve the model's robustness and generalization. At prediction time, images are mean-centered and resized to a fixed input size.
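A minimal training-setup sketch along these lines, assuming the 224x224 model built in Section 3 and hypothetical data directories; it uses the SGD optimizer and Keras's ImageDataGenerator for horizontal flips and small shifts (random cropping would require a custom generator):

from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator

# Data augmentation for the training set: rescaling, horizontal flips, small shifts
train_datagen = ImageDataGenerator(rescale=1.0 / 255,
                                   horizontal_flip=True,
                                   width_shift_range=0.1,
                                   height_shift_range=0.1)

# Only rescaling for the validation set
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = train_datagen.flow_from_directory('data/train',  # hypothetical path
                                                    target_size=(224, 224),
                                                    batch_size=32)
val_generator = val_datagen.flow_from_directory('data/val',        # hypothetical path
                                                target_size=(224, 224),
                                                batch_size=32)

model.compile(optimizer=SGD(lr=0.01, momentum=0.9),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# In newer Keras versions, model.fit accepts generators directly
model.fit_generator(train_generator, epochs=10, validation_data=val_generator)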

6. Summary

With its efficient architecture and parallel Inception modules, GoogLeNet is widely used in image recognition. In practice, fine-tuning an existing pre-trained model is an effective way to improve accuracy and speed up training.