Follow-up to #355 / #366, deferred from that PR because it needs a larger restructure.
## Problem
In `31_image_classification.ipynb`, `Trainer` (`__init__` + `fit` + `predict`) is currently a ~60-line training-loop snippet shown in a markdown cell that the student is expected to paste into an empty class skeleton. The goal of the notebook is the high-level pipeline (device → hyperparameters → loss → model → optimizer → fit), not re-deriving PyTorch boilerplate that students will most likely copy & paste anyway.
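For concreteness, the high-level pipeline the notebook targets looks roughly like this. A minimal sketch, assuming a toy classifier and illustrative hyperparameters; none of the concrete values or layer choices are taken from the notebook:

```python
import torch
from torch import nn

# device → hyperparameters → loss → model → optimizer → fit
device = "cuda" if torch.cuda.is_available() else "cpu"           # device
lr, epochs, batch_size = 1e-3, 10, 32                             # hyperparameters
loss_fn = nn.CrossEntropyLoss()                                   # loss
model = nn.Sequential(                                            # model (illustrative)
    nn.Flatten(),
    nn.Linear(28 * 28, 10),
).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)           # optimizer
# trainer.fit(train_loader, val_loader)                           # fit (via Trainer)
```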
## Proposed change
Ship `Trainer` already implemented, with inline comments only on the parts that matter pedagogically:
- `model.train()` / `model.eval()` mode switching and why (batchnorm, dropout)
- `zero_grad()` → `loss.backward()` → `optimizer.step()` as one training step
- Early-stopping bookkeeping: `best_val_loss`, snapshotting `state_dict()`, reloading in `predict`
- `.to(device)` placements
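The points above could be commented roughly like this. A minimal sketch of what the shipped, pre-implemented class might look like; the `Trainer` name, `best_val_loss`, and `state_dict()` snapshotting follow the issue, while the constructor signature and everything else is illustrative, not the notebook's actual code:

```python
import copy
import torch
from torch import nn

class Trainer:
    def __init__(self, model, loss_fn, optimizer, device="cpu"):
        self.device = device
        self.model = model.to(device)          # move model to the chosen device
        self.loss_fn = loss_fn
        self.optimizer = optimizer
        self.best_state = None

    def fit(self, train_loader, val_loader, epochs):
        best_val_loss = float("inf")
        for _ in range(epochs):
            self.model.train()                 # train mode: dropout active, batchnorm updates stats
            for x, y in train_loader:
                x, y = x.to(self.device), y.to(self.device)
                self.optimizer.zero_grad()     # clear gradients from the previous step
                loss = self.loss_fn(self.model(x), y)
                loss.backward()                # compute gradients
                self.optimizer.step()          # update parameters — one training step
            self.model.eval()                  # eval mode: dropout off, batchnorm uses running stats
            val_loss = 0.0
            with torch.no_grad():
                for x, y in val_loader:
                    x, y = x.to(self.device), y.to(self.device)
                    val_loss += self.loss_fn(self.model(x), y).item()
            # early-stopping bookkeeping: snapshot the best weights seen so far
            if val_loss < best_val_loss:
                best_val_loss = val_loss
                self.best_state = copy.deepcopy(self.model.state_dict())

    def predict(self, x):
        if self.best_state is not None:
            self.model.load_state_dict(self.best_state)  # reload the best snapshot
        self.model.eval()
        with torch.no_grad():
            return self.model(x.to(self.device))
```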
Bookkeeping (log file, loss accumulation, tensor → NumPy conversion) stays uncommented. The surrounding markdown should shrink to a short conceptual description pointing at the implementation cell.
## Why a new issue?
The "Classes" section currently walks through `ImageDataset`, `ImageClassifier`, and `Trainer` in a consistent fill-in-the-blanks style. Changing only `Trainer` breaks that pattern. The `fit`/`predict` markdown also needs rewriting (it is currently structured to teach writing the loop, not reading it), not just trimming.