1. Inference

  • making predictions with a trained PyTorch model (inference);

  • once a model has been trained, it can be used to make predictions; the steps for doing this outside the training/testing loop are similar;

  • making predictions with PyTorch is also called performing inference; there are three things to remember:

    • set the model in evaluation mode: model.eval();

    • make predictions via the inference mode context manager: with torch.inference_mode(): ……;

    • all predictions should be made with objects on the same device,
      e.g. data and model both on the GPU only, or both on the CPU only;

    • the first two items ensure that the helpful calculations and settings
      PyTorch uses behind the scenes during training, which aren't necessary
      for inference, are turned off, leading to faster computation; the third
      ensures you don't run into cross-device errors;

  • now use the trained model to make predictions again;

# 1: set model in evaluation mode
model_0.eval()

# 2: set up inference mode context manager
with torch.inference_mode():
  # 3: make sure calculations are done with the model and data on
  # the same device; in our case, we haven't set up device-agnostic
  # code yet, so our data and model are on the CPU by default.
  # model_0.to(device)
  # X_test = X_test.to(device)
  y_pred = model_0(X_test)
print(y_pred)

plot_prediction(prediction=y_pred)

plt.savefig("PredictionWithTrainedModelInference.svg")
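
The combined effect of model.eval() and torch.inference_mode() can be checked directly: inside the context manager, autograd stops tracking, so outputs carry no gradient information. A minimal sketch using a throwaway nn.Linear as a stand-in for the trained model (not this document's model_0):

```python
import torch
from torch import nn

# throwaway single-layer model standing in for a trained model
model = nn.Linear(1, 1)
x = torch.randn(5, 1)

model.train()
y_train = model(x)
print(y_train.requires_grad)  # True: autograd tracks outputs during training

model.eval()
with torch.inference_mode():
    y_infer = model(x)
print(y_infer.requires_grad)  # False: inference mode disables gradient tracking
```

Skipping the gradient bookkeeping is what makes inference-mode predictions faster than training-time forward passes.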
tensor([[0.8141],
        [0.8256],
        [0.8372],
        [0.8488],
        [0.8603],
        [0.8719],
        [0.8835],
        [0.8950],
        [0.9066],
        [0.9182]])
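
To avoid the cross-device errors mentioned above, a common pattern is device-agnostic code: pick the device once, then move both model and data to it before predicting. A sketch with a hypothetical linear model (names here are illustrative, not from this section):

```python
import torch
from torch import nn

# pick GPU if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# hypothetical model and data, both moved to the same device
model = nn.Linear(1, 1).to(device)
X = torch.arange(0.0, 1.0, 0.1).unsqueeze(dim=1).to(device)

model.eval()
with torch.inference_mode():
    preds = model(X)  # no cross-device error: model and X share a device

print(preds.shape)  # torch.Size([10, 1])
```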