Conversation
Pull request overview
Adds a new v1.6.0 deployment changeset/sequence to migrate a remote-chain token pool from a lock-release pool to a burn-mint pool when using a HybridWithExternalMinterTokenPool as the hub.
Changes:
- Introduces a `MigrateTokenPool` sequence to propose hub remote-pool set updates, group updates, and remote TAR `setPool`.
- Extends hybrid hub pool operations with reads/writes needed by the migration (remote pools, group, locked tokens, add/remove pool, update groups).
- Adds comprehensive changeset-level verification + apply logic and a full test suite for preconditions and proposal construction.
Reviewed changes
Copilot reviewed 4 out of 5 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| chains/evm/deployment/v1_6_0/sequences/migrate_token_pool.go | New on-chain sequence to build batch operations for hub+remote migration steps. |
| chains/evm/deployment/v1_6_0/operations/hybrid_with_external_minter_token_pool/hybrid_with_external_minter_token_pool.go | Adds missing read/write operations used by the migration flow. |
| chains/evm/deployment/v1_6_0/changesets/migrate_token_pool.go | New changeset wiring: config parsing, precondition checks, sequence execution, datastore refs. |
| chains/evm/deployment/v1_6_0/changesets/migrate_token_pool_test.go | New tests covering invalid inputs, preconditions, and apply/proposal behavior. |
| .gitignore | Ignores local agent-related files/directories. |
```go
tarConfigReport, err := cldf_ops.ExecuteOperation(e.OperationsBundle, tar_ops.GetTokenConfig, remoteChain, evm_contract.FunctionInput[common.Address]{
	ChainSelector: cfg.RemoteChainSelector,
	Address:       remoteTARAddress,
	Args:          remoteTokenAddress,
})
```
VerifyPreconditions reads the TAR token config but only checks Administrator. It should also validate that TokenPool is non-zero and equals either oldRemotePoolAddress or newRemotePoolAddress (matching the sequence’s expectations), so misconfigured TAR state is caught in preconditions instead of failing later during Apply.
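A minimal sketch of the suggested check, using stdlib only. `Address` here is a stand-in for go-ethereum's `common.Address`, and the function name and signature are illustrative, not the changeset's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// Address is a stand-in for go-ethereum's common.Address.
type Address [20]byte

// validateTARPool sketches the extra precondition: the TAR token config's
// pool must be non-zero and must already be either the old remote pool
// (pre-migration) or the new remote pool (already migrated).
func validateTARPool(tokenPool, oldPool, newPool Address) error {
	var zero Address
	if tokenPool == zero {
		return errors.New("TAR token config has no pool set for remote token")
	}
	if tokenPool != oldPool && tokenPool != newPool {
		return fmt.Errorf("TAR token pool %x matches neither old pool %x nor new pool %x",
			tokenPool, oldPool, newPool)
	}
	return nil
}

func main() {
	oldPool := Address{19: 1}
	newPool := Address{19: 2}
	fmt.Println(validateTARPool(oldPool, oldPool, newPool)) // pre-migration state passes
	fmt.Println(validateTARPool(Address{}, oldPool, newPool))
}
```

Running this check in `VerifyPreconditions` surfaces a misconfigured TAR before any proposal is built, rather than failing during Apply.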
```go
GroupUpdates: []hybrid_pool_binding.HybridTokenPoolAbstractGroupUpdate{
	{
		RemoteChainSelector: input.RemoteChainSelector,
		Group:               input.TargetGroup,
		RemoteChainSupply:   input.RemoteChainSupply,
	},
},
```
RemoteChainSupply is a pointer and is passed into the UpdateGroups call when the group differs. Add an explicit validation in the sequence (or guard before building the GroupUpdate) to return a clear error if RemoteChainSupply is nil to avoid unexpected ABI packing/panics when this sequence is used outside the changeset verify path.
I reviewed as much as I could - feel free to have a look!
Quick disclaimer: I'm not an expert on HybridWithExternalMinterTokenPool, so most of my review focuses on coding style / quality 😅 it'd be ideal to have someone fact check the migration logic to be on the safe side
Note for future reviewers: the contract source code for HybridWithExternalMinterTokenPool is here in case you wanna cross-reference the changeset validations with the actual onchain validations
Resolved review threads:
- .../operations/hybrid_with_external_minter_token_pool/hybrid_with_external_minter_token_pool.go (4 outdated threads)
- chains/evm/deployment/v1_6_0/changesets/migrate_hybrid_pool_remote_test.go (1 thread)
- chains/evm/deployment/v1_6_0/changesets/migrate_hybrid_pool_remote.go (4 outdated threads)
```go
out, err := cs.Apply(*fixture.env, cfg)
require.NoError(t, err)
require.Len(t, out.MCMSTimelockProposals, 1)
require.NotNil(t, out.DataStore)

txCounts := txCountByChain(out.MCMSTimelockProposals[0].Operations)
require.Equal(t, 3, txCounts[fixture.hubSelector])
require.Equal(t, 1, txCounts[fixture.remoteSelector])
require.True(t, hasAddressRefWithTypeVersion(
	out.DataStore,
	fixture.hubSelector,
	fixture.hubPoolAddress,
	cldf_datastore.ContractType(v1_6_0_hybrid_pool_ops.ContractType),
	v1_6_0_hybrid_pool_ops.Version.String(),
))
require.True(t, hasAddressRefWithTypeVersion(
	out.DataStore,
	fixture.remoteSelector,
	fixture.newRemotePoolAddress,
	cldf_datastore.ContractType(v1_6_0_burn_mint_with_external_minter_token_pool_ops.ContractType),
	v1_6_0_burn_mint_with_external_minter_token_pool_ops.Version.String(),
))

// Proposal-only writes should not mutate state during Apply.
require.Equal(t, uint8(0), fixture.currentHubGroup(t))
require.True(t, containsPoolBytes(fixture.currentHubRemotePools(t), common.LeftPadBytes(fixture.oldRemotePoolAddress.Bytes(), 32)))
require.False(t, containsPoolBytes(fixture.currentHubRemotePools(t), common.LeftPadBytes(fixture.newRemotePoolAddress.Bytes(), 32)))
require.Equal(t, fixture.oldRemotePoolAddress, fixture.currentTARPool(t))
```
Do we have any tests that actually execute the timelock proposal?
chris-de-leon-cll left a comment
Overall looks much better 🙂 found a few more things that could use some adjustment, but besides that I'd recommend that we have someone with more context on token pool migrations give it a look before this is merged!
```go
hubFamily, err := chain_selectors.GetSelectorFamily(cfg.HubChainSelector)
if err != nil {
	return fmt.Errorf("invalid hub chain selector %d: %w", cfg.HubChainSelector, err)
}
if hubFamily != chain_selectors.FamilyEVM {
	return fmt.Errorf("hub chain selector %d is not an EVM chain", cfg.HubChainSelector)
}

remoteFamily, err := chain_selectors.GetSelectorFamily(cfg.RemoteChainSelector)
if err != nil {
	return fmt.Errorf("invalid remote chain selector %d: %w", cfg.RemoteChainSelector, err)
}
if remoteFamily != chain_selectors.FamilyEVM {
	return fmt.Errorf("remote chain selector %d is not an EVM chain", cfg.RemoteChainSelector)
}

if !e.BlockChains.Exists(cfg.HubChainSelector) {
	return fmt.Errorf("chain with selector %d does not exist", cfg.HubChainSelector)
}
if !e.BlockChains.Exists(cfg.RemoteChainSelector) {
	return fmt.Errorf("chain with selector %d does not exist", cfg.RemoteChainSelector)
}
```
This can be removed since the code on lines 108-115 (e.g. e.BlockChains.EVMChains()) essentially validates the same thing (i.e. selector exists and chain family is EVM)
Suggested change: delete the validation block above in its entirety.
```go
refs := ds.Addresses().Filter(
	cldf_datastore.AddressRefByChainSelector(chainSelector),
	cldf_datastore.AddressRefByAddress(address.Hex()),
)
for _, ref := range refs {
	if ref.Type == expectedType && ref.Version != nil && ref.Version.Equal(expectedVersion) {
		return nil
	}
	if ref.Type != "" || ref.Version != nil {
		return fmt.Errorf(
			"%s %s on chain %d has datastore ref type=%s version=%s, expected type=%s version=%s",
			label, address, chainSelector,
			ref.Type, ref.Version,
			expectedType, expectedVersion,
		)
	}
}
```
I believe the datastore API already provides some helpers for you to do all the filtering in one call:
Suggested change:

```diff
 refs := ds.Addresses().Filter(
 	cldf_datastore.AddressRefByChainSelector(chainSelector),
 	cldf_datastore.AddressRefByAddress(address.Hex()),
+	cldf_datastore.AddressRefByVersion(expectedVersion),
+	cldf_datastore.AddressRefByType(expectedType),
 )
-for _, ref := range refs {
-	if ref.Type == expectedType && ref.Version != nil && ref.Version.Equal(expectedVersion) {
-		return nil
-	}
-	if ref.Type != "" || ref.Version != nil {
-		return fmt.Errorf(
-			"%s %s on chain %d has datastore ref type=%s version=%s, expected type=%s version=%s",
-			label, address, chainSelector,
-			ref.Type, ref.Version,
-			expectedType, expectedVersion,
-		)
-	}
-}
```
```go
refs := ds.Addresses().Filter(
	cldf_datastore.AddressRefByChainSelector(chainSelector),
	cldf_datastore.AddressRefByAddress(address.Hex()),
)
for _, ref := range refs {
	if ref.Type == expectedType && ref.Version != nil && ref.Version.Equal(expectedVersion) {
		return true
	}
}
return false
```
Same comment as here - the datastore already allows you to filter by type and version so this function may not be totally necessary
```go
"Migrates a remote chain token pool from lock-release to burn-mint on a hybrid hub pool",
func(b cldf_ops.Bundle, chains cldf_chain.BlockChains, input MigrateHybridPoolRemoteInput) (sequences.OnChainOutput, error) {
	if input.RemoteChainSupply == nil {
		return sequences.OnChainOutput{}, fmt.Errorf("RemoteChainSupply must not be nil")
```
Style nit: prefer errors.New for static error strings since no formatting is needed:
```diff
-return sequences.OnChainOutput{}, fmt.Errorf("RemoteChainSupply must not be nil")
+return sequences.OnChainOutput{}, errors.New("RemoteChainSupply must not be nil")
```
```go
var supportedOldPoolTypeAndVersions = map[string]struct{}{
	v1_5_1_lock_release_token_pool_ops.TypeAndVersion.String(): {},
}
```
Looks like this is unused now - should be safe to remove
```diff
-var supportedOldPoolTypeAndVersions = map[string]struct{}{
-	v1_5_1_lock_release_token_pool_ops.TypeAndVersion.String(): {},
-}
```
```go
	v1_5_1_lock_release_token_pool_ops.TypeAndVersion.String(): {},
}

var expectedNewPoolTypeAndVersion = v1_6_0_burn_mint_with_external_minter_token_pool_ops.TypeAndVersion.String()
```
Looks like this can also be safely removed
```diff
-var expectedNewPoolTypeAndVersion = v1_6_0_burn_mint_with_external_minter_token_pool_ops.TypeAndVersion.String()
```
```go
ChainSelector: cfg.HubChainSelector,
Type:          cldf_datastore.ContractType(v1_6_0_hybrid_pool_ops.ContractType),
Version:       v1_6_0_hybrid_pool_ops.Version,
Address:       cfg.HubPoolAddress.Hex(),
```
When adding a new ref to the datastore, we should always ensure a qualifier is added. For EVM pools it is customary to use one of two formats:
- Qualifier = the token address for the pool
- Qualifier = `<pool-address>-<pool-type>`
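A small sketch of the second qualifier format. `poolQualifier` is a hypothetical helper, not an existing API; the lowercasing of the address is an assumption for consistency, not a documented convention:

```go
package main

import (
	"fmt"
	"strings"
)

// poolQualifier builds a "<pool-address>-<pool-type>" qualifier string
// for a datastore AddressRef, per the review's suggested convention.
func poolQualifier(poolAddress, poolType string) string {
	return fmt.Sprintf("%s-%s", strings.ToLower(poolAddress), poolType)
}

func main() {
	fmt.Println(poolQualifier("0x1234AbCd", "HybridWithExternalMinterTokenPool"))
}
```

The qualifier would then be set on the `AddressRef` alongside `ChainSelector`, `Type`, `Version`, and `Address` so multiple pools for the same token remain distinguishable.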
```go
ChainSelector: cfg.RemoteChainSelector,
Type:          cldf_datastore.ContractType(v1_6_0_burn_mint_with_external_minter_token_pool_ops.ContractType),
Version:       v1_6_0_burn_mint_with_external_minter_token_pool_ops.Version,
Address:       cfg.NewRemotePoolAddress.Hex(),
```
Same comment as https://github.com/smartcontractkit/chainlink-ccip/pull/1910/changes#r3019721758 - let's ensure that a qualifier is added here pls
No description provided.