[PT2D][DDP] Remove some hacks used to get the test to work (#123206)

It seems these bugs have been fixed (not sure by which PRs), so we no longer need to disable buffer reuse.
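
For context, a minimal sketch of the removed workaround: the test module flipped `torch._inductor.config.allow_buffer_reuse` off globally at import time. If such a knob were still needed, scoping it with `torch._inductor.config.patch` would avoid leaking the setting into other tests (the lambda and inputs below are illustrative, not from this PR):

```python
import torch

# Old, global workaround (what this PR removes):
# torch._inductor.config.allow_buffer_reuse = False

# Scoped alternative, assuming the flag still had to be off:
with torch._inductor.config.patch(allow_buffer_reuse=False):
    compiled = torch.compile(lambda x: x * 2)
    compiled(torch.randn(4))  # buffer reuse disabled only inside this block
```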

Differential Revision: [D55657388](https://our.internmc.facebook.com/intern/diff/D55657388/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123206
Approved by: https://github.com/kwen2501, https://github.com/yifuwang
diff --git a/test/distributed/_composable/test_replicate_with_compiler.py b/test/distributed/_composable/test_replicate_with_compiler.py
index e08a9c1..fc58787 100644
--- a/test/distributed/_composable/test_replicate_with_compiler.py
+++ b/test/distributed/_composable/test_replicate_with_compiler.py
@@ -36,8 +36,6 @@
 
 
 DIM = 2000
-# TODO: figure out why buffer reuse conflicts with bucketing
-torch._inductor.config.allow_buffer_reuse = False
 
 
 class Net(nn.Module):
@@ -201,9 +199,7 @@
                 None, ddp_default_hooks.bf16_compress_hook
             )
 
-        self._test_compile(
-            use_gpu=True, no_sync=False, setup_func=setup, no_inductor=True
-        )
+        self._test_compile(use_gpu=True, no_sync=False, setup_func=setup)
 
     @unittest.skipIf(not has_triton(), "Inductor+gpu needs triton and recent GPU arch")
     @skip_if_rocm