[dynamo] Raise accumulated cache size limit (#122130)
Fixes #114511
This was raised by IBM folks, where compiling an LLM was failing because the model had more than 64 layers and so exceeded the previous accumulated cache size limit of 64; a minimal sketch of the failure mode is included after the ghstack notes below.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122130
Approved by: https://github.com/Chillee, https://github.com/jansel
ghstack dependencies: #121954, #122005
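
A minimal sketch of how a deep model can exhaust the old 64-entry accumulated limit. The `Block` class, layer count, shapes, and the per-layer compilation pattern are illustrative assumptions rather than the exact reproducer from #114511; the point is that every layer instance shares the same `Block.forward` code object, so each compiled layer can add one ID_MATCH-guarded cache entry to that code object, and more than 64 layers pushes past the old default.

```python
# Illustrative sketch (not the exact reproducer from #114511): all layers share
# the same Block.forward code object, and compiling each layer separately adds
# one ID_MATCH-guarded cache entry per layer instance to that code object.
import torch

class Block(torch.nn.Module):
    def __init__(self, dim: int) -> None:
        super().__init__()
        self.linear = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))

# 96 layers: more than the previous accumulated_cache_size_limit of 64.
layers = torch.nn.ModuleList(Block(64) for _ in range(96))
compiled_layers = [torch.compile(layer) for layer in layers]

x = torch.randn(2, 64)
for layer in compiled_layers:
    x = layer(x)
```
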
diff --git a/torch/_dynamo/config.py b/torch/_dynamo/config.py
index 3179381..c1793b8 100644
--- a/torch/_dynamo/config.py
+++ b/torch/_dynamo/config.py
@@ -39,8 +39,8 @@
# [@compile_ignored: runtime_behaviour]
cache_size_limit = 8
-# [@compile_ignored: runtime_behaviour] controls the maximum number of entries for a code object.
-accumulated_cache_size_limit = 64
+# [@compile_ignored: runtime_behaviour] safeguarding to prevent horrible recomps
+accumulated_cache_size_limit = 256
# whether or not to specialize on int inputs. This only has an effect with
# dynamic_shapes; when dynamic_shapes is False, we ALWAYS specialize on int
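
The new default of 256 should cover most real models, and the limit stays a plain config knob. A hedged usage sketch for models that are deeper still; the 512 value is only an example, not a recommendation:

```python
import torch._dynamo.config as dynamo_config

# Raise the accumulated cache entry ceiling beyond the new 256 default.
# The 512 value here is illustrative: it trades larger compile-cache growth
# for not falling back to eager on very deep models.
dynamo_config.accumulated_cache_size_limit = 512
```
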