
python – CNTK out-of-memory error when model.fit() is called a second time


I am using Keras with the CNTK backend.

My code looks like this:

def run_han(embeddings_index, fname, opt, class_weight):
    ...
    sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
    embedded_sequences = embedding_layer(sentence_input)
    l_lstm = Bidirectional(GRU(GRU_UNITS, return_sequences=True, kernel_regularizer=l2_reg, 
                           implementation=GPU_IMPL))(embedded_sequences)
    l_att = AttLayer(regularizer=l2_reg)(l_lstm)            
    sentEncoder = Model(sentence_input, l_att)

    review_input = Input(shape=(MAX_SENTS, MAX_SENT_LENGTH), dtype='int32')
    review_encoder = TimeDistributed(sentEncoder)(review_input)
    l_lstm_sent = Bidirectional(GRU(GRU_UNITS, return_sequences=True, kernel_regularizer=l2_reg, 
                                implementation=GPU_IMPL))(review_encoder)
    l_att_sent = AttLayer(regularizer=l2_reg)(l_lstm_sent) 
    preds = Dense(n_classes, activation='softmax', kernel_regularizer=l2_reg)(l_att_sent)
    model = Model(review_input, preds)

    model.compile(loss='categorical_crossentropy',
          optimizer=opt, #SGD(lr=0.1, nesterov=True),
          metrics=['acc'])
    ...
    model.fit(x_train[ind,:,:], y_train[ind,:], epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, shuffle=False, 
              callbacks=[cr_result, history, csv_logger], 
              verbose=2, validation_data=(x_test, y_test), class_weight=class_weight)
    ...
    %xdel model
    gc.collect()

I call the function above several times, changing the optimizer each time, like this:

opt = optimizers.RMSprop(lr=0.0001, rho=0.9, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_rms_cw', opt, class_weight)

opt = optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_adadelta_cw', opt, class_weight)

opt =  optimizers.Adagrad(lr=0.01, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_adagrad_cw', opt, class_weight)

On the second call to model.fit(), an out-of-memory error is raised:

 RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=USER-PC ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))
[CALL STACK]
> Microsoft::MSR::CNTK::CudaTimer::  Stop
- Microsoft::MSR::CNTK::CudaTimer::  Stop (x2)
- Microsoft::MSR::CNTK::GPUMatrix<float>::  Resize
- Microsoft::MSR::CNTK::Matrix<float>::  Resize
- Microsoft::MSR::CNTK::DataTransferer::  operator= (x4)
- CNTK::Internal::  UseSparseGradientAggregationInDataParallelSGD
- Microsoft::MSR::CNTK::DataTransferer::  operator=
- CNTK::Internal::  UseSparseGradientAggregationInDataParallelSGD
- CNTK::Function::  Forward
- CNTK::  CreateTrainer
- CNTK::Trainer::  TotalNumberOfSamplesSeen
- CNTK::Trainer::  TrainMinibatch

I assumed this was because the GPU memory from the first run was not being released,
so I added the following after model.fit():

%xdel model

gc.collect()

However, the error is the same.
I cannot figure out what is causing it. Is it my Keras code, or CNTK?

(GTX 1080 Ti, Windows 7, Python 2.7, CNTK 2.2, Jupyter)

Solution:

This is a really annoying problem, and it stems from the fact that, for some reason, the model compiled for GPU execution is not garbage-collected properly. So even though you run the garbage collector, the compiled model still sits on the GPU. To overcome this, you can try the solution proposed here (TLDR: run the training in a separate process; when the process finishes, its memory is cleared).
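The process-per-run workaround can be sketched with Python's standard multiprocessing module. In this sketch, train_once is a hypothetical stand-in for the real run_han call (building, compiling, and fitting the model would happen inside it); the point is only the pattern: each training run lives in its own short-lived process, and the OS reclaims everything that process allocated, including GPU memory held by CNTK, when it exits:

```python
from multiprocessing import Process, Queue

def train_once(opt_name, results):
    # In the real code, build the Keras model here, compile it with the
    # optimizer selected by opt_name, and call model.fit(). All memory the
    # process allocated (including CNTK's GPU allocations) is released
    # when the process exits.
    results.put((opt_name, "done"))

def run_all(opt_names):
    """Run each training configuration in its own short-lived process."""
    results = Queue()
    outcomes = {}
    for name in opt_names:
        p = Process(target=train_once, args=(name, results))
        p.start()
        p.join()  # wait for this run to finish before starting the next
        key, status = results.get()
        outcomes[key] = status
    return outcomes
```

With this pattern, each of the RMSprop/Adadelta/Adagrad runs above would be launched as its own process, so the second model.fit() starts with a clean GPU rather than inheriting the first run's allocations.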

Tags: python, keras, cntk
Source: https://codeday.me/bug/20190701/1348511.html