Commit fb25ccb

Author: Han Wang
fix(pt_expt): disable DDPOptimizer to prevent compiled graph splitting
DDPOptimizer splits the inner compiled graph at bucket boundaries, producing subgraph outputs with symbolic integers that crash AOT Autograd (pytorch/pytorch#134182).
1 parent: 447a572

1 file changed: 8 additions & 0 deletions

deepmd/pt_expt/train/training.py
@@ -24,6 +24,14 @@
 import torch
 import torch.distributed as dist

+# Disable DDPOptimizer: our compile region wraps only the inner compute
+# function, not the whole DDP model. DDPOptimizer assumes it owns the
+# full model graph and splits at bucket boundaries, producing subgraphs
+# whose outputs include symbolic integers. AOT Autograd then crashes
+# with ``'int' object has no attribute 'meta'``
+# (pytorch/pytorch#134182).
+torch._dynamo.config.optimize_ddp = False
+
 from deepmd.dpmodel.common import (
     to_numpy_array,
 )
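
For context, a minimal sketch of the pattern the added comment describes: a DDP-wrapped module whose forward calls a compiled inner compute function. ToyModel and _compute are hypothetical illustrations, not names from this repository; only the torch._dynamo.config.optimize_ddp = False line is taken from the commit.

import torch
import torch.nn as nn

# From the commit: keep DDPOptimizer from splitting the compiled graph
# at DDP bucket boundaries, since the compile region below covers only
# an inner compute function, not the full DDP-wrapped model.
torch._dynamo.config.optimize_ddp = False


class ToyModel(nn.Module):  # hypothetical stand-in for the trainer's model
    def __init__(self) -> None:
        super().__init__()
        self.linear = nn.Linear(8, 8)

    # Only this inner function is compiled; forward() and the DDP wrapper
    # around the module stay eager, so DDPOptimizer's assumption that it
    # owns the whole model graph does not hold here.
    @torch.compile
    def _compute(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(x))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self._compute(x)


# With a torch.distributed process group initialized elsewhere, the model
# would be wrapped like this, and the compiled region would run inside
# DDP's forward:
#   model = nn.parallel.DistributedDataParallel(ToyModel())
#   out = model(torch.randn(4, 8))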
