
fix: run model in eval mode with inference_mode for predictions in ce… #1267

Open
bandomatteo wants to merge 1 commit into mrdbourke:main from bandomatteo:fix/model-eval-inference-cell13

Conversation

@bandomatteo

What was changed

  • Updated cell 13 in 02_pytorch_classification.ipynb to:
    • Set the model to eval() mode before prediction
    • Wrap prediction code with torch.inference_mode()

Why this change

Previously, predictions were made while the model was still in training mode and gradients were still being tracked, which could lead to:

  • Inconsistent outputs during inference (layers such as Dropout and BatchNorm behave differently in training mode)
  • Unnecessary memory and compute usage from gradient tracking

By setting the model to evaluation mode and using inference_mode, predictions are now:

  • Consistent with expected inference behavior
  • More efficient (no gradient tracking)
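A minimal sketch of the pattern this PR applies (the model and test data here are hypothetical stand-ins for the notebook's, not the actual cell 13 contents):

```python
import torch
from torch import nn

# Hypothetical stand-in for the notebook's binary classifier
model = nn.Sequential(nn.Linear(2, 5), nn.ReLU(), nn.Linear(5, 1))

X_test = torch.randn(10, 2)  # dummy test data

model.eval()  # switch layers like Dropout/BatchNorm to inference behavior
with torch.inference_mode():  # disable gradient tracking entirely
    y_logits = model(X_test)
    # logits -> probabilities -> binary labels
    y_preds = torch.round(torch.sigmoid(y_logits))

# Tensors created under inference_mode carry no gradient history
assert not y_logits.requires_grad
```

`torch.inference_mode()` is a stricter, slightly faster alternative to `torch.no_grad()`: tensors created inside it can never be used in autograd later, which is exactly what you want for pure prediction code.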