v3.1.3

Released by @njzjz on 19 Mar 09:37 (commit b2c8511)

Highlights

This release focuses on two major themes: easier access to pretrained models and the next stage of the PyTorch roadmap. DeePMD-kit can now download built-in pretrained models directly, and this release series introduces a new pretrained model, DPA3-Omol-Large, distributed through that mechanism. In parallel, we have started building an experimental exportable PyTorch backend based on the Array API, torch.export, and torch.compile, motivated in part by the deprecation of torch.jit.

Beyond these headline items, v3.1.3 expands PyTorch training capabilities with new optimizers and distributed-training support, improves diagnostics and training safety, adds charge-spin and spin-virial related functionality, and continues to strengthen documentation, CI, packaging, and backend consistency across the project.

Try DPA3-Omol-Large in 3 steps:

# Install the latest version of DeePMD-kit (will be available a few days after this release)
curl -fsSL https://dp1s.deepmodeling.com | bash
# Restart the shell, then download the pretrained model
dp pretrained download DPA3-Omol-Large
# Evaluate your training/test data with the pretrained model
dp test -m ~/.cache/deepmd/pretrained/models/DPA3-Omol-Large.pt -s path_to_your_system
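
For programmatic evaluation, the downloaded checkpoint can also be loaded through the Python inference API. The sketch below is illustrative: it assumes the standard DeepPot interface accepts the downloaded model file at the cache path shown above, and the coordinates, cell, and atom types are placeholders to be replaced with real data that follows the model's type map.

# Minimal sketch: evaluate the downloaded pretrained model from Python.
# The input arrays below are placeholders, not real data.
import numpy as np
from pathlib import Path
from deepmd.infer import DeepPot

model_path = Path.home() / ".cache/deepmd/pretrained/models/DPA3-Omol-Large.pt"
dp = DeepPot(str(model_path))

# One frame with two atoms: flattened coordinates of shape (nframes, natoms * 3),
# a flattened 3x3 cell of shape (nframes, 9), and per-atom type indices that
# must follow the model's type map (see dp.get_type_map()).
coords = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 1.1]])
cells = 10.0 * np.eye(3).reshape(1, 9)
atom_types = [0, 1]

energy, force, virial = dp.eval(coords, cells, atom_types)
print(energy.shape, force.shape, virial.shape)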

Breaking Changes

New Features

Pretrained models and model distribution

Experimental PyTorch backend

  • Add PyTorch support to Array API utilities by @Copilot in #5198
  • Add a new exportable PyTorch backend by @wanghan-iapcm in #5194
  • Provide infrastructure for converting dpmodel classes to PyTorch modules by @wanghan-iapcm in #5204
  • Implement se_t and se_t_tebd descriptors in the experimental PyTorch backend by @wanghan-iapcm in #5208
  • Add energy fitting in the experimental PyTorch backend by @wanghan-iapcm in #5218
  • Add the atomic model in the experimental PyTorch backend by @wanghan-iapcm in #5220
  • Add the full model in the experimental PyTorch backend by @wanghan-iapcm in #5244
  • Auto-generate forward / forward_lower in the torch_module decorator by @Copilot in #5246
  • Add dpa1, dpa2, dpa3, and hybrid descriptors in the experimental PyTorch backend by @wanghan-iapcm in #5248
  • Add DOS, dipole, polar, and property fittings in the experimental PyTorch backend by @wanghan-iapcm in #5254
  • Add dipole, polar, DOS, property, and DP-ZBL models with cross-backend consistency tests by @wanghan-iapcm in #5260
  • Add training infrastructure for the experimental PyTorch backend by @wanghan-iapcm in #5270
  • Implement the .pte inference pipeline with dynamic shapes by @wanghan-iapcm in #5284 (see the torch.export sketch after this list)
  • Implement the energy Hessian model in the experimental PyTorch backend by @wanghan-iapcm in #5287
  • Add DP freeze support and dp test coverage for .pte models by @wanghan-iapcm in #5302
  • Add frozen-model support in the experimental PyTorch backend by @wanghan-iapcm in #5318
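
The exportable backend listed above is built on torch.export rather than torch.jit. The block below is a generic, self-contained sketch of the torch.export workflow with dynamic shapes; the toy module, tensor shapes, and file name are illustrative and are not DeePMD-kit code.

# Generic torch.export sketch with dynamic shapes; illustrative only.
import torch

class ToyNet(torch.nn.Module):
    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # Toy stand-in for an atomic model: reduce per-atom features to one
        # scalar per frame.
        return coords.square().sum(dim=(-1, -2))

model = ToyNet().eval()
example = (torch.randn(4, 192, 3),)  # (nframes, natoms, 3), placeholder sizes

# Mark the frame and atom dimensions as dynamic so the exported program
# accepts varying frame counts and atom counts at inference time.
nframes = torch.export.Dim("nframes")
natoms = torch.export.Dim("natoms")
exported = torch.export.export(
    model,
    example,
    dynamic_shapes={"coords": {0: nframes, 1: natoms}},
)

# The exported program is an ahead-of-time graph that no longer depends on
# torch.jit; it can be saved, reloaded, and called like a module.
torch.export.save(exported, "toy_model.pt2")
reloaded = torch.export.load("toy_model.pt2")
print(reloaded.module()(torch.randn(2, 64, 3)).shape)

The real pipeline in the PRs above layers DeePMD-kit-specific input handling on top of this basic export step.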

PyTorch training, optimization, and scaling

Core functionality and usability

  • Optimize data-modifier calls in deepeval by @ChiahsinChu in #5120
  • Add NaN detection during training by @njzjz in #5135 (see the sketch after this list)
  • Support Array API learning rates in dpmodel by @njzjz in #5143
  • Reuse dpmodel EnvMatStat in PyTorch by @njzjz in #5139
  • Add device-name display (for example, A100 instead of only cuda) by @OutisLi in #5146
  • Improve capitalization in info display by @OutisLi in #5145
  • Add a Node class for serialization and implement display functionality by @njzjz in #5158
  • Unify learning-rate schedulers with the Array API by @OutisLi in #5154
  • Use data statistics for observed types in PyTorch / dpmodel by @iProzd in #5269
  • Add charge-spin embedding for the DP and PyTorch backends by @iProzd in #5295
  • Add skills for adding new descriptors by @wanghan-iapcm in #5249
  • Add a skill to debug gradient flow in the experimental PyTorch backend by @wanghan-iapcm in #5280
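
As a side note on the NaN-detection item above, the general pattern for catching non-finite losses in a PyTorch training step is shown below; the function name and error handling are illustrative and not DeePMD-kit's actual implementation.

# Generic sketch of non-finite loss detection in a training step.
import torch

def assert_finite_loss(loss: torch.Tensor, step: int) -> None:
    # Fail fast instead of silently continuing to train on NaN/Inf losses.
    if not torch.isfinite(loss).all():
        raise FloatingPointError(f"non-finite loss at training step {step}")

# Typical use inside the training loop (sketch):
#   loss = loss_fn(model(batch), labels)
#   assert_finite_loss(loss, step)
#   loss.backward()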

Documentation

Build & Releases

Packaging, dependencies, and release infrastructure

  • Update the Torch requirement from ~=2.8.0 to >=2.8,<2.10 by @dependabot[bot] in #5114
  • Update the Torch requirement from >=2.8,<2.10 to ==2.10.0 by @dependabot[bot] in #5170
  • Update the Torch requirement from ~=2.8.0 to >=2.8,<2.10 by @dependabot[bot] in #5103
  • Update the scikit-build-core requirement to >=0.5,!=0.6.0,<0.13 by @dependabot[bot] in #5271
  • Bump Torch (CPU) to 2.10.0 in CI by @njzjz in #5273
  • Align the package_c TensorFlow image and wheel to 2.20 by @njzjz-bot in #5297
  • Bump tensorflow-cpu from 2.20.0 to 2.21.0 by @dependabot[bot] in #5304
  • Bump the CUDA image to 12.9.1 by @njzjz in #5107
  • Use disk-space-reclaimer in the build_docker workflow by @Copilot in #5242
  • Trust the PaddlePaddle host in workflows to bypass a TLS outage by @njzjz-bot in #5305
  • Bump docker/login-action from 3 to 4 by @dependabot[bot] in #5289
  • Bump docker/build-push-action from 6 to 7 by @dependabot[bot] in #5309
  • Bump docker/metadata-action from 5 to 6 by @dependabot[bot] in #5310
  • Bump pypa/cibuildwheel from 3.3 to 3.4 by @dependabot[bot] in #5290

CI, testing, formatting, and developer tooling

Internal maintenance and backend cleanup

  • Reuse dpmodel NeighborStatOP in PyTorch by @njzjz in #5137
  • Add comprehensive type hints to the Paddle backend and enable ANN rules by @Copilot in #4944
  • Add type annotations to the TensorFlow backend by @Copilot in #4945
  • Remove the unused support_array_api decorator by @Copilot in #5200
  • Use ReLU to speed up consistency unit tests by @njzjz in #5203
  • Refactor the embedding net for dpmodel / pt_expt by @wanghan-iapcm in #5205
  • Consolidate backend logic into array_api.py with generic implementations by @Copilot in #5202
  • Refactor the fitting net for dpmodel / pt_expt by @wanghan-iapcm in #5207
  • Add a decorator to simplify the experimental PyTorch module implementation by @njzjz in #5213
  • Make dpmodel outputs consistent with the PyTorch backend by @wanghan-iapcm in #5250
  • Extend the device lint check to the Array API by @njzjz in #5261
  • Remove unused learning_rate_dict multi-task handling in training by @OutisLi in #5278
  • Sync get_lr from the PyTorch backend to the Paddle backend by @njzjz in #5144
  • Move the input-stat update to model_change_out_bias in PyTorch by @wanghan-iapcm in #5266
  • Add a missing seed for descriptors in examples by @iProzd in #5098
  • Merge master into devel for v3.1.2 by @njzjz in #5095

Pre-commit maintenance

Bugfixes

Full Changelog: v3.1.2...v3.1.3