Introduction
This is a small deep-learning garbage classification project, built on a deep residual network.
Software Architecture
- The deep residual network resnet50 serves as the backbone; extra layers are appended on top to adapt it to different classification tasks
- Training streams the dataset into memory in batches through a generator, applying image augmentation on the fly so the model generalizes better (see the sketch after this list)
- Transfer learning starts from resnet50 weights that exclude the network's output head; only the layers we add after the 5 stages are trained
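The generator code in genit.py is only referenced by name later in this post, so here is a minimal sketch of what such a generator pipeline with augmentation might look like. The function name gendata matches the call in the training code below, but every augmentation parameter is an illustrative assumption, not the repo's actual values:

```python
# Hypothetical sketch of datagen/genit.py; augmentation settings are assumptions.
from keras.preprocessing.image import ImageDataGenerator

def gendata(data_dir, batch_size, num_classes, img_size):
    # Augment training images to help generalization; hold out 10% of the data for validation.
    datagen = ImageDataGenerator(
        rescale=1. / 255,        # scale pixel values to [0, 1]
        rotation_range=20,       # small random rotations
        zoom_range=0.2,          # random zoom
        horizontal_flip=True,    # random horizontal flips
        validation_split=0.1)
    # num_classes is inferred here from the directory layout (one sub-folder per class).
    train_sequence = datagen.flow_from_directory(
        data_dir, target_size=(img_size, img_size), batch_size=batch_size,
        class_mode='categorical', subset='training')
    validation_sequence = datagen.flow_from_directory(
        data_dir, target_size=(img_size, img_size), batch_size=batch_size,
        class_mode='categorical', subset='validation')
    return train_sequence, validation_sequence
```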
Installation
- The main third-party dependencies are tensorflow 1.x, keras, opencv, pillow, scikit-learn, and numpy
- Installation is straightforward: open a terminal and use pip, for example: pip install numpy -i https://pypi.tuna.tsinghua.edu.cn/simple
- The dataset and weight files are large, so they are not uploaded here
- If you have trouble configuring the environment or need the dataset and model weights, describe your problem in the comments and I will help you remotely
Usage
- The theory folder holds the notes I took during this deep-learning project, along with the console output printed during model training
- The initial weights needed for transfer learning and the model definition file resnet50.py live under model
- To train, run trainnet.py; when training finishes it creates a models folder and writes the resulting weights garclass.h5 into it
- genit.py in the datagen folder performs image preprocessing and provides the data generator interface
- To classify garbage with the trained model, run demo.py (a rough inference sketch follows this list)
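demo.py itself is not reproduced in this post, so the following is only a hypothetical sketch of what inference with the trained weights could look like; the image path, input size, and preprocessing are placeholder assumptions:

```python
# Hypothetical demo.py-style inference; 'test.jpg' and the 224x224 input size are placeholders.
import cv2
import numpy as np
from keras.models import load_model

model = load_model('./models/garclass.h5')       # weights written by trainnet.py

img = cv2.imread('test.jpg')                     # placeholder image path
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)       # OpenCV loads BGR; the model expects RGB
img = cv2.resize(img, (224, 224)) / 255.0        # assumed input size and [0, 1] scaling
probs = model.predict(np.expand_dims(img, axis=0))[0]
print('predicted class index:', np.argmax(probs))
```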
Results
[Demo image: an aluminum can recognized as class "cans" (易拉罐)]
Code Walkthrough
In the actual model we use only the 5 stages of resnet50; the output head after them has to be customized by ourselves. The structure of the network is shown below:

[Figure: network structure]

Our custom head after stage 5 is as follows:
```python
"""Customize the layers after resnet"""
# Import paths are assumptions: ResNet50 is defined in the repo's model/resnet50.py.
from keras import layers, regularizers
from keras.models import Model
from resnet50 import ResNet50

def custom(input_size, num_classes, pretrain):
    # Load the resnet50 base model initialized with the pretrained weights
    base_model = ResNet50(weights=pretrain, include_top=False, pooling=None,
                          input_shape=(input_size, input_size, 3), classes=num_classes)
    # Since we have pretrained weights, freeze the front part; transfer learning trains the tail
    for layer in base_model.layers:
        layer.trainable = False
    # Append the custom layers
    x = base_model.output
    x = layers.GlobalAveragePooling2D(name='avg_pool')(x)
    x = layers.Dropout(0.5, name='dropout1')(x)
    # regularizers penalize a layer's parameters or activations during optimization:
    # while minimizing the loss we also constrain the parameters with an l2 penalty term
    x = layers.Dense(512, activation='relu',
                     kernel_regularizer=regularizers.l2(0.0001), name='fc2')(x)
    x = layers.BatchNormalization(name='bn_fc_01')(x)
    x = layers.Dropout(0.5, name='dropout2')(x)
    # 40 classes
    x = layers.Dense(num_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=x)
    # Compile the model
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model
```
Training is a transfer-learning process. We start from the existing initial resnet50 weights (the 5 stages are already trained, so the convolutional layers can already extract features) and train only the fully connected head; after 4 epochs, the later layers are unfrozen and fine-tuned to squeeze out higher accuracy. The training procedure is as follows:
```python
from keras import callbacks
from keras.optimizers import Adam
import resnet   # module defining custom() above
import genit    # datagen/genit.py: preprocessing and data generators

class Net():
    def __init__(self, img_size, gar_num, data_dir, batch_size, pretrain):
        self.img_size = img_size
        self.gar_num = gar_num
        self.data_dir = data_dir
        self.batch_size = batch_size
        self.pretrain = pretrain

    def build_train(self):
        """Transfer learning"""
        model = resnet.custom(self.img_size, self.gar_num, self.pretrain)
        model.summary()
        train_sequence, validation_sequence = genit.gendata(
            self.data_dir, self.batch_size, self.gar_num, self.img_size)
        epochs = 4
        model.fit_generator(train_sequence, steps_per_epoch=len(train_sequence), epochs=epochs,
                            verbose=1, validation_data=validation_sequence,
                            max_queue_size=10, shuffle=True)
        # Fine-tuning. In the actual project the activation functions count as layers too,
        # so there are 181 layers in total; fine-tuning retrains part of the convolutional
        # layers while also training the final fully connected layers
        layers = 149
        learning_rate = 1e-4
        for layer in model.layers[:layers]:
            layer.trainable = False
        for layer in model.layers[layers:]:
            layer.trainable = True
        optimizer = Adam(lr=learning_rate, decay=0.0005)
        model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
        model.fit_generator(train_sequence, steps_per_epoch=len(train_sequence),
                            epochs=epochs * 2, verbose=1,
                            callbacks=[
                                callbacks.ModelCheckpoint('./models/garclass.h5',
                                                          monitor='val_loss',
                                                          save_best_only=True, mode='min'),
                                callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                                                            patience=10, mode='min'),
                                callbacks.EarlyStopping(monitor='val_loss', patience=10)],
                            validation_data=validation_sequence,
                            max_queue_size=10, shuffle=True)
        print('finish train, look for garclass.h5')
```
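A hypothetical way to kick off training, in the spirit of trainnet.py (which is not reproduced here); every argument value below is illustrative:

```python
# Placeholder hyperparameters and paths; trainnet.py's actual values are not shown in the post
net = Net(img_size=224, gar_num=40, data_dir='./data', batch_size=16,
          pretrain='./model/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')
net.build_train()
```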
The training results are as follows:
"""
loss: 0.7949 - acc: 0.9494 - val_loss: 0.9900 - val_acc: 0.8797
训练用了9小时左右
"""
如果使用更好的显卡,可以更快完成训练
Final Words
I hope everyone gets a taste of what deep learning can bring; it has been a pleasure learning alongside you. For more material on deep learning, please check out other related articles on 服务器之家!
Original article: https://blog.csdn.net/qq_40943760/article/details/106190943