Add _native_batch_norm_legit_no_training to core IR (#107732)
Summary: Added because the op is so common. For performance reasons, users may not want to decompose the batch_norm op. batch_norm is also part of StableHLO.
Test Plan: After adding the op to the core IR, we can enable _check_ir_validity in exir.EdgeCompileConfig for models like MV2, MV3, IC3, and IC4.
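For illustration, a minimal sketch of that test plan using the ExecuTorch exir API; the MobileNetV2 model choice, the input shape, and the export/to_edge call pattern are assumptions for this sketch, not taken from the PR:

    import torch
    import torchvision.models as models
    from torch.export import export
    from executorch.exir import to_edge, EdgeCompileConfig

    # MV2 = MobileNetV2; MV3, IC3, and IC4 would follow the same pattern.
    model = models.mobilenet_v2().eval()
    example_inputs = (torch.randn(1, 3, 224, 224),)

    # With _native_batch_norm_legit_no_training tagged as core, IR validation
    # can stay enabled without decomposing batch_norm first (assumed workflow).
    edge = to_edge(
        export(model, example_inputs),
        compile_config=EdgeCompileConfig(_check_ir_validity=True),
    )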
Reviewed By: guangy10
Differential Revision: D48576866
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107732
Approved by: https://github.com/manuelcandales, https://github.com/guangy10
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index 7ab48b3..da6f45a 100644
--- a/aten/src/ATen/native/native_functions.yaml
+++ b/aten/src/ATen/native/native_functions.yaml
@@ -4111,6 +4111,7 @@
   dispatch:
     CompositeExplicitAutograd: _batch_norm_legit_no_training
   autogen: _native_batch_norm_legit_no_training.out
+  tags: core
 
 - func: _native_batch_norm_legit.out(Tensor input, Tensor? weight, Tensor? bias, Tensor(a!) running_mean, Tensor(b!) running_var, bool training, float momentum, float eps, *, Tensor(d!) out, Tensor(e!) save_mean, Tensor(f!) save_invstd) -> (Tensor(d!), Tensor(e!), Tensor(f!))
   dispatch:
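For context, a minimal sketch of how the new tag surfaces at runtime; inspecting tags via OpOverload.tags is an illustrative check, not part of this PR's test plan:

    import torch

    # The "tags: core" entry in native_functions.yaml attaches torch.Tag.core
    # to the generated operator, visible on the OpOverload object.
    op = torch.ops.aten._native_batch_norm_legit_no_training.default
    assert torch.Tag.core in op.tags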