[functorch] removed unnecessary note
diff --git a/functorch/examples/ensembling/parallel_train.py b/functorch/examples/ensembling/parallel_train.py
index d9d9a75..f8a90e6 100644
--- a/functorch/examples/ensembling/parallel_train.py
+++ b/functorch/examples/ensembling/parallel_train.py
@@ -16,8 +16,6 @@
# GOAL: Demonstrate that it is possible to use eager-mode vmap
# to parallelize training over models.
-# NB: this code runs off of a branch on zou3519/pytorch:dynlayer
-
DEVICE = 'cpu'
# Step 1: Make some spirals
@@ -110,7 +108,7 @@
step6()
# Step 7: Now, the flaw with step 6 is that we were training on the same exact
-# data. This can lead to all of the models in the ensemble overfitting in the
+# data. This can lead to all of the models in the ensemble overfitting in the
# same way. The solution that http://willwhitney.com/parallel-training-jax.html
# applies is to randomly subset the data in a way that the models do not receive
# exactly the same data in each training step!
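For context on the Step 7 comment above, the sketch below (not part of this patch) illustrates the idea of giving each ensemble member its own random minibatch while still training all members in parallel with vmap. The model, dataset, and variable names (models, points, labels, batch_size) are placeholders for illustration; combine_state_for_ensemble, vmap, and grad are the functorch APIs the example file builds on.

import torch
import torch.nn.functional as F
from functorch import combine_state_for_ensemble, vmap, grad

num_models, batch_size = 10, 64
models = [torch.nn.Linear(2, 3) for _ in range(num_models)]  # placeholder models
fmodel, params, buffers = combine_state_for_ensemble(models)

points = torch.randn(1000, 2)            # placeholder dataset
labels = torch.randint(0, 3, (1000,))

def compute_loss(params, buffers, x, y):
    logits = fmodel(params, buffers, x)
    return F.cross_entropy(logits, y)

# Each model draws its own random subset of the data, so the members of the
# ensemble do not all see exactly the same examples in a training step.
idx = torch.randint(0, points.shape[0], (num_models, batch_size))
mini_x, mini_y = points[idx], labels[idx]    # shapes: [num_models, batch_size, ...]

# vmap maps over the leading model dimension of the stacked parameters and of
# the per-model minibatches, computing one gradient per ensemble member.
grads = vmap(grad(compute_loss))(params, buffers, mini_x, mini_y)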