
python - Constant validation accuracy with high loss in machine learning


I am currently trying to build an image classification model with Inception V3 and 2 classes. I have 1,428 images, roughly balanced 70/30 between the classes. When I run the model I get a fairly high loss as well as a constant validation accuracy. What might be causing this constant value?

import numpy as np
import keras
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split

data = np.array(data, dtype="float") / 255.0
labels = np.array(labels, dtype="uint8")

(trainX, testX, trainY, testY) = train_test_split(
                            data,labels, 
                            test_size=0.2, 
                            random_state=42) 

img_width, img_height = 320, 320 #InceptionV3 size

train_samples =  1145 
validation_samples = 287
epochs = 20

batch_size = 32

base_model = keras.applications.InceptionV3(
        weights ='imagenet',
        include_top=False, 
        input_shape = (img_width,img_height,3))

model_top = keras.models.Sequential()
model_top.add(keras.layers.GlobalAveragePooling2D(input_shape=base_model.output_shape[1:]))
model_top.add(keras.layers.Dense(350,activation='relu'))
model_top.add(keras.layers.Dropout(0.2))
model_top.add(keras.layers.Dense(1,activation = 'sigmoid'))
model = keras.models.Model(inputs = base_model.input, outputs = model_top(base_model.output))


for layer in model.layers[:30]:
  layer.trainable = False

model.compile(optimizer = keras.optimizers.Adam(
                    lr=0.00001,
                    beta_1=0.9,
                    beta_2=0.999,
                    epsilon=1e-08),
                    loss='binary_crossentropy',
                    metrics=['accuracy'])

#Image Processing and Augmentation 
train_datagen = keras.preprocessing.image.ImageDataGenerator(
          zoom_range = 0.05,
          #width_shift_range = 0.05, 
          height_shift_range = 0.05,
          horizontal_flip = True,
          vertical_flip = True,
          fill_mode ='nearest') 

val_datagen = keras.preprocessing.image.ImageDataGenerator()


train_generator = train_datagen.flow(
        trainX, 
        trainY,
        batch_size=batch_size,
        shuffle=True)

validation_generator = val_datagen.flow(
                testX,
                testY,
                batch_size=batch_size)

history = model.fit_generator(
    train_generator, 
    steps_per_epoch = train_samples//batch_size,
    epochs = epochs, 
    validation_data = validation_generator, 
    validation_steps = validation_samples//batch_size,
    callbacks = [ModelCheckpoint('inceptionv3_best.h5',  # checkpoint filename is illustrative
                                 save_best_only=True)])

Here is the log from running the model:

Epoch 1/20
35/35 [==============================] - 52s 1s/step - loss: 0.6347 - acc: 0.6830 - val_loss: 0.6237 - val_acc: 0.6875
Epoch 2/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6364 - acc: 0.6756 - val_loss: 0.6265 - val_acc: 0.6875
Epoch 3/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6420 - acc: 0.6743 - val_loss: 0.6254 - val_acc: 0.6875
Epoch 4/20
35/35 [==============================] - 14s 414ms/step - loss: 0.6365 - acc: 0.6851 - val_loss: 0.6289 - val_acc: 0.6875
Epoch 5/20
35/35 [==============================] - 14s 411ms/step - loss: 0.6359 - acc: 0.6727 - val_loss: 0.6244 - val_acc: 0.6875
Epoch 6/20
35/35 [==============================] - 15s 415ms/step - loss: 0.6342 - acc: 0.6862 - val_loss: 0.6243 - val_acc: 0.6875

Solution:

I think your learning rate is too low and you are training for too few epochs. Try lr = 0.001 and epochs = 100.
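
A minimal sketch of that suggestion, assuming everything else in the question's code stays the same (only the learning rate and the epoch count change):

# Recompile with the suggested larger learning rate (0.001); the other Adam
# parameters are kept at the values used in the question.
model.compile(optimizer = keras.optimizers.Adam(
                    lr=0.001,
                    beta_1=0.9,
                    beta_2=0.999,
                    epsilon=1e-08),
                    loss='binary_crossentropy',
                    metrics=['accuracy'])

# Train for more epochs, as suggested.
epochs = 100
history = model.fit_generator(
    train_generator,
    steps_per_epoch = train_samples//batch_size,
    epochs = epochs,
    validation_data = validation_generator,
    validation_steps = validation_samples//batch_size)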

Tags: tensorflow, keras, machine-learning, deep-learning, python
Source: https://codeday.me/bug/20191025/1924756.html