tune down batch-size for res2net to avoid OOM (#122977)

The batch size for this model was previously 64. It was later changed to 256, which causes OOM in the cudagraphs setting. This PR tunes the batch size down to 128.

Sharing more logs from my local run:
```
cuda,res2net101_26w_4s,128,1.603578,110.273572,335.263494,1.042566,11.469964,11.001666,807,2,7,6,0,0
cuda,res2net101_26w_4s,256,1.714980,207.986155,344.013071,1.058278,22.260176,21.034332,807,2,7,6,0,0
```

The log shows that torch.compile uses 11GB for batch size 128 and 21GB for batch size 256. I guess the benchmark script has extra overhead that causes the model to OOM at batch size 256 in the dashboard run.
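
As a rough sanity check (not part of this PR), one could measure peak CUDA memory for a given batch size with a snippet like the one below. The model name comes from the benchmark list; the 224x224 input resolution, the availability of `timm`, and the fact that this only approximates what the dashboard harness does are assumptions for illustration:

```python
# Rough sketch: peak memory of one compiled forward/backward pass per batch size.
# Assumes timm is installed and a CUDA GPU is available; the real benchmark
# harness adds its own overhead on top of this.
import torch
import timm

def peak_mem_gb(batch_size: int) -> float:
    model = timm.create_model("res2net101_26w_4s").cuda()
    model = torch.compile(model)
    x = torch.randn(batch_size, 3, 224, 224, device="cuda")
    torch.cuda.reset_peak_memory_stats()
    model(x).sum().backward()  # forward + backward, as in training benchmarks
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1e9

for bs in (128, 256):
    print(bs, peak_mem_gb(bs))
```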

Pull Request resolved: https://github.com/pytorch/pytorch/pull/122977
Approved by: https://github.com/Chillee
diff --git a/benchmarks/dynamo/timm_models_list.txt b/benchmarks/dynamo/timm_models_list.txt
index 91d897d..0c13a8c 100644
--- a/benchmarks/dynamo/timm_models_list.txt
+++ b/benchmarks/dynamo/timm_models_list.txt
@@ -39,7 +39,7 @@
 poolformer_m36 128
 regnety_002 1024
 repvgg_a2 128
-res2net101_26w_4s 256
+res2net101_26w_4s 128
 res2net50_14w_8s 128
 res2next50 128
 resmlp_12_224 128