
chore(deps): update machine-learning#28239

Open
renovate[bot] wants to merge 1 commit into main from renovate/machine-learning

Conversation


@renovate renovate Bot commented May 5, 2026

This PR contains the following updates:

| Package | Change | Type | Update | Pending |
|---|---|---|---|---|
| fastapi (changelog) | 0.136.0 → 0.136.1 | project.dependencies | patch | |
| huggingface-hub | 1.13.0 → 1.14.0 | project.dependencies | minor | |
| mypy (changelog) | 1.20.1 → 1.20.2 | dependency-groups | patch | |
| onnxruntime | 1.24.4 → 1.25.1 | project.optional-dependencies | minor | 1.26.0 |
| onnxruntime-gpu | 1.24.4 → 1.25.1 | project.optional-dependencies | minor | 1.26.0 |
| onnxruntime-migraphx | 1.24.2 → 1.25.0 | project.optional-dependencies | minor | |
| orjson (changelog) | 3.11.8 → 3.11.9 | project.dependencies | patch | |
| pydantic (changelog) | 2.12.5 → 2.13.4 | project.dependencies | minor | |
| pydantic-settings (changelog) | 2.13.1 → 2.14.0 | project.dependencies | minor | 2.14.1 |
| python | d168b8d → d49c1ff | stage | digest | |
| python | 9c6f908 → cd67330 | stage | digest | |
| python | 970c99f → 99f4240 | stage | digest | |
| ruff (source, changelog) | 0.15.10 → 0.15.12 | dependency-groups | patch | |
| tokenizers | 0.22.2 → 0.23.1 | project.dependencies | minor | |
| types-requests (changelog) | 2.33.0.20260408 → 2.33.0.20260503 | dependency-groups | patch | 2.33.0.20260508 |
| uvicorn (changelog) | 0.44.0 → 0.46.0 | project.dependencies | minor | |

Release Notes

fastapi/fastapi (fastapi)

v0.136.1

Compare Source

Upgrades
Internal
huggingface/huggingface_hub (huggingface-hub)

v1.14.0: [v1.14.0] Handle Spaces secrets & variables from CLI and other improvements

Compare Source

🖥️ Manage Space secrets and variables from the CLI

You can now manage Space secrets and environment variables directly from the command line with two new hf spaces subgroups: secrets and variables. Use hf spaces secrets to add, list, and delete write-only secrets, and hf spaces variables to add, list, and delete readable environment variables. Both add commands support multiple -s/-e flags and --secrets-file / --env-file for loading from dotenv files. On the Python side, HfApi.get_space_secrets() returns secret metadata (key, description, updated timestamp) without ever revealing values.

# List secrets (values are write-only — only keys and timestamps are shown)
$ hf spaces secrets ls username/my-space

# Add secrets
$ hf spaces secrets add username/my-space -s OPENAI_API_KEY=sk-...
$ hf spaces secrets add username/my-space --secrets-file .env.secrets

# Delete a secret (confirmation prompt, use --yes to skip)
$ hf spaces secrets delete username/my-space OPENAI_API_KEY --yes

# List, add, and delete variables (values are readable)
$ hf spaces variables ls username/my-space
$ hf spaces variables add username/my-space -e MODEL_ID=gpt2 -e MAX_TOKENS=512
$ hf spaces variables delete username/my-space MAX_TOKENS --yes

📚 Documentation: CLI guide · Manage your Space

🪣 Rsync-style trailing slash for bucket folder copies

hf buckets cp now supports rsync-style trailing slash semantics when copying folders. A trailing / on the source path copies only the folder's contents to the destination, while omitting it nests the folder itself — matching the behavior you'd expect from rsync. This makes it possible to flatten directory structures during copies, which was not possible before. Additionally, copy_files now raises an explicit EntryNotFoundError when the source path resolves to no files, instead of silently succeeding with zero operations.

# Without trailing slash: "logs" dir is nested => dst/logs/...
$ hf buckets cp hf://buckets/username/src-bucket/logs hf://buckets/username/dst/

# With trailing slash: only contents of "logs" are copied => dst/...
$ hf buckets cp hf://buckets/username/src-bucket/logs/ hf://buckets/username/dst/

📚 Documentation: Buckets guide · CLI guide

💔 Breaking Change

  • [CLI] Rename hf skills upgrade -> hf skills update by @​hanouticelina in #​4176 — hf skills upgrade no longer exists; use hf skills update instead.
  • [CLI] Add out.status() by @​hanouticelina in #​4171 — status updates (spinners/progress) on hf extensions install and hf spaces dev-mode are now suppressed when using --format json, --quiet, or --format agent.

🖥️ CLI

🐛 Bug and typo fixes

🏗️ Internal

python/mypy (mypy)

v1.20.2

Compare Source

ijl/orjson (orjson)

v3.11.9

Compare Source

Changed
  • Build now depends on Rust 1.95 or later instead of 1.89.
Fixed
  • Fix building on Rust 1.95.
pydantic/pydantic (pydantic)

v2.13.4: 2026-05-06

Compare Source

v2.13.4 (2026-05-06)

What's Changed
Packaging
Fixes

Full Changelog: pydantic/pydantic@v2.13.3...v2.13.4

v2.13.3

Compare Source

GitHub release

What's Changed
Fixes

v2.13.2

Compare Source

v2.13.1: 2026-04-15

Compare Source

v2.13.1 (2026-04-15)

What's Changed
Fixes

Full Changelog: pydantic/pydantic@v2.13.0...v2.13.1

v2.13.0

Compare Source

GitHub release

The highlights of the v2.13 release are available in the blog post.
Several minor changes (considered non-breaking changes according to our versioning policy)
are also included in this release. Make sure to look into them before upgrading.

This release contains the updated pydantic.v1 namespace, matching version 1.10.26 which includes support for Python 3.14.

What's Changed

See the beta releases for all changes since v2.12.

New Features
  • Allow default factories of private attributes to take validated model data by @​Viicos in #​13013
Changes
Fixes
  • Change type of Any when synthesizing _build_sources for BaseSettings.__init__() signature in the mypy plugin by @​Viicos in #​13049
  • Fix model equality when using runtime extra configuration by @​Viicos in #​13062
Packaging
New Contributors
pydantic/pydantic-settings (pydantic-settings)

v2.14.0

Compare Source

What's Changed

New Contributors

Full Changelog: pydantic/pydantic-settings@v2.13.1...v2.14.0

astral-sh/ruff (ruff)

v0.15.12

Compare Source

Released on 2026-04-24.

Preview features
  • Implement #ruff:file-ignore file-level suppressions (#​23599)
  • Implement #ruff:ignore logical-line suppressions (#​23404)
  • Revert preview changes to displayed diagnostic severity in LSP (#​24789)
  • [airflow] Implement task-branch-as-short-circuit (AIR004) (#​23579)
  • [flake8-bugbear] Fix break/continue handling in loop-iterator-mutation (B909) (#​24440)
  • [pylint] Fix PLC2701 for type parameter scopes (#​24576)
Rule changes
  • [pandas-vet] Suggest .array as well in PD011 (#​24805)
CLI
  • Respect default Unix permissions for cache files (#​24794)
Documentation
  • [pylint] Fix PLR0124 description not to claim self-comparison always returns the same value (#​24749)
  • [pyupgrade] Expand docs on reusable TypeVars and scoping (UP046) (#​24153)
  • Improve rules table accessibility (#​24711)
Contributors

v0.15.11

Compare Source

Released on 2026-04-16.

Preview features
  • [ruff] Ignore RUF029 when function is decorated with asynccontextmanager (#​24642)
  • [airflow] Implement airflow-xcom-pull-in-template-string (AIR201) (#​23583)
  • [flake8-bandit] Fix S103 false positives and negatives in mask analysis (#​24424)
Bug fixes
  • [flake8-async] Omit overridden methods for ASYNC109 (#​24648)
Documentation
  • [flake8-async] Add override mention to ASYNC109 docs (#​24666)
  • Update Neovim config examples to use vim.lsp.config (#​24577)
Contributors
huggingface/tokenizers (tokenizers)

v0.23.1

Compare Source

TL;DR

tokenizers 0.23.1 is the first proper stable release in the 0.23 line — 0.23.0 only ever shipped as rc0 because the release pipeline itself was broken (Node side hadn't shipped multi-platform binaries since 2023, Python side was on pyo3 0.27 without free-threaded support). 0.23.1 is the version where everything actually goes out the door together: full Node multi-platform wheels for the first time in years, Python 3.14 (regular and free-threaded 3.14t), full type hints for every Python class, and a stack of measurable perf wins on the BPE / added-vocab hot paths.

There is no functional 0.23.0 published — we tag 0.23.1 directly so users don't accidentally pull a never-shipped version.


🚨 Breaking changes

  • Drop Python 3.9 (#​1952) — requires-python = ">=3.10"; 3.9 users stay on 0.22.x.
  • add_tokens normalizes content at insertion (#​1995) — re-saved tokenizer.json may differ in the added_tokens block. Existing files load unchanged.
  • Type stubs are precise (#​1928, #​1997) — methods that returned Any now return real types; mypy --strict may surface previously-hidden errors. Stub layout also moved from tokenizers/<sub>/__init__.pyi to tokenizers/<sub>.pyi. This changes the typed surface of some processors, e.g. RobertaProcessing's __init__.
  • 3.14t-only: setters/getters return PyResult<T> because of Arc<RwLock<Tokenizer>>; a poisoned lock surfaces as PyException instead of a panic.
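The normalize-at-insertion change can be illustrated with a toy pure-Python sketch (this is a stand-in class, not the tokenizers API): normalizing when the token is added means the stored vocabulary — and therefore the serialized added_tokens block — contains the normalized form.

```python
import unicodedata

class ToyAddedVocab:
    """Toy stand-in for an added-tokens table (not the real tokenizers API)."""

    def __init__(self):
        self.tokens = {}

    def add_token(self, content, normalized=True):
        # 0.23-style behavior (sketched): normalize at insertion time,
        # so the stored content is already NFKC-normalized.
        if normalized:
            content = unicodedata.normalize("NFKC", content)
        self.tokens[content] = len(self.tokens)

    def serialize(self):
        # What would end up in the added_tokens block on re-save.
        return sorted(self.tokens)

vocab = ToyAddedVocab()
vocab.add_token("\ufb01le")   # U+FB01 LATIN SMALL LIGATURE FI + "le"
print(vocab.serialize())      # NFKC folds the ligature: ['file']
```

With normalized=False the content is stored verbatim, which is why only tokens added with normalization enabled can differ in a re-saved tokenizer.json.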

⚡ Performance — measured locally on this Mac, not lifted from PRs

Run with cargo bench --bench <name> -- --save-baseline v0_22_2 on v0.22.2, then --baseline v0_22_2 on v0.23.1. Numbers are point-in-time wall clock on a single laptop; relative deltas are what matters, absolute numbers will differ on CI hardware.

Added-vocabulary deserialize — the headline win (#​1995, #​1999)

bench: improve added_vocab_deserialize to reflect real-world workloads (#​2000) is now representative of how transformers actually loads tokenizer.json files. The combined effect of daachorse for the matching automaton plus the normalize-on-insert refactor is enormous on this workload:

| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| 100k tokens, special, no norm | ~410 ms | 248 ms | −40% |
| 100k tokens, non-special, no norm | ~7.1 s | 273 ms | −96% |
| 100k tokens, special, NFKC | ~395 ms | 235 ms | −40% |
| 100k tokens, non-special, NFKC | ~7.4 s | 290 ms | −96% |
| 400k tokens, special, no norm | ~15 s | 980 ms | −94% |

Real-world impact: loading a Llama-3-style tokenizer with a large set of added tokens dropped from "noticeable pause" to "instant".

BPE encode
| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| BPE GPT2 encode batch, no cache | 530 ms | 446 ms | −16% |
| BPE GPT2 encode batch (cached) | 690 ms | 685 ms | noise |
| BPE GPT2 encode (single) | 1.95 s | 1.94 s | noise |
| BPE Train (small) | 32.6 ms | 31.5 ms | −3% |
| BPE Train (big) | 1.01 s | 988 ms | −2% |

The BPE per-thread cache PR (#​2028) shows much larger wins on highly-parallel workloads (+47–62% at 88+ threads on a server box, per the PR's own measurements on Vera). Single-thread batch numbers above are flat or slightly improved because cache-hit overhead was already low without contention.

Llama-3 encode
| benchmark | v0.22.2 | v0.23.1 | change |
|---|---|---|---|
| llama3-encode (single) | 2.10 s | 2.02 s | −4% |
| llama3-batch | 438 ms | 408 ms | −7% |
| llama3-offsets | 410 ms | 395 ms | −4% |

Truncation early exit (#​1990)

Right-direction truncation no longer pre-tokenizes past max_length. The new truncation_benchmark doesn't exist on v0.22.2 so there's no apples-to-apples here, but the PR's own measurements on the same machine showed −20–28% across a range of max_length values for right-truncation; left-truncation unchanged.

Other perf improvements (no direct comparable bench)
  • BPE::Builder::build no longer formats strings in a hot loop (#​2010) — ~45% faster Tokenizer::from_file on Llama-3 in the PR's profile.
  • BPE per-thread cache (#​2028) — see Vera numbers in PR description for parallel scale-out.

🔄 Serialization / deserialization

The tokenizer.json format is forward-compatible: existing files load on 0.23 unchanged. Two things to know if you re-save:

  • added_tokens entries created via add_tokens(..., normalized=True) will have their content normalized at save time — see breaking-change note above.
  • tokenizer.train(...) no longer keeps a redundant added_tokens/special_tokens Vec separate from the added_tokens_map_r. Public API surface unchanged; only the internal struct shape moved.

bench: improve added_vocab_deserialize to reflect real-world workloads (#​2000) lands a more realistic micro-benchmark for this surface; if you're tracking deserialize perf in your own CI, the new bench is the one to compare against.


🐍 Python: free-threaded 3.14t support

Dedicated wheels for python3.14t (the free-threaded build introduced in PEP 703). The wheel:

  • Declares Py_MOD_GIL_NOT_USED, so importing tokenizers does not force the GIL back on.
  • Builds without the abi3 cargo feature (free-threaded Python doesn't expose the limited API).
  • Goes through Arc<RwLock<Tokenizer>> for the inner state so concurrent setters and encoders don't race PyO3's per-pyclass borrow check.

A new stress-test module tests/test_freethreaded.py exercises N-encoder × M-setter races on a single Tokenizer and asserts no RuntimeError: Already borrowed, no RwLock poisoning, and that sys._is_gil_enabled() is False post-import.

For the regular CPython wheel everything is unchanged.
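The shape of that stress test can be sketched in pure Python with a stand-in class (the lock here plays the role of the Arc<RwLock<Tokenizer>> guard; ToyTokenizer and its methods are illustrative, not the tokenizers API):

```python
import threading

class ToyTokenizer:
    """Stand-in with an internal lock, mimicking the RwLock-guarded state."""

    def __init__(self):
        self._lock = threading.Lock()
        self._padding = None

    def encode(self, text):
        with self._lock:            # readers and writers serialize on one lock
            return text.split()

    def enable_padding(self, length):
        with self._lock:            # setter mutates shared state safely
            self._padding = length

tok = ToyTokenizer()
errors = []

def encoder():
    for _ in range(1000):
        try:
            tok.encode("a b c")
        except Exception as exc:
            errors.append(exc)

def setter():
    for i in range(1000):
        try:
            tok.enable_padding(i)
        except Exception as exc:
            errors.append(exc)

# N encoders racing M setters on one shared object, as in the real test.
threads = [threading.Thread(target=f) for f in [encoder] * 4 + [setter] * 2]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert not errors  # no "Already borrowed"-style failures under contention
```

On a GIL build the threads interleave; on a free-threaded build they can run truly concurrently, which is exactly the situation the lock-guarded state is there to survive.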


📦 Node.js bindings: first proper multi-platform release since 2023

The npm package now ships 13 platforms (macOS x64/arm64/universal, Windows x64/i686/arm64, Linux x64/arm64/armv7 in both glibc and musl, Android arm64/armv7) — previous workflows only built 3 of those, leaving Apple Silicon / Linux ARM / Alpine users with package-not-found errors since 2023 (#​1365, #​1703, #​1922). Fixed via #​1970 + #​2034, which also bumps @napi-rs/cli to v3 and switches cross-builds to cargo-zigbuild.


🧷 Type hints & typing for all classes (#​1928, #​1997)

Every class in the python bindings now ships proper .pyi stubs — Tokenizer, AddedToken, Encoding, every decoder / model / normalizer / pre-tokenizer / processor / trainer. Editors and type checkers (mypy, pyright, ty) see real signatures with types and docstrings instead of falling back to Any.

The stubs are generated automatically from the compiled extension via tools/stub-gen (Rust binary using pyo3-introspection). Re-running make style regenerates them; CI guards against regenerated-vs-checked-in drift. If the generator ever returns 0 docstrings (e.g. because the [patch.crates-io] pin in .cargo/config.toml falls out of sync with the pyo3 dep version), it now hard-aborts with a precise diagnostic instead of silently emitting bare-bones stubs.

>>> from tokenizers import Tokenizer
>>> # IDEs now resolve every method, every kwarg, every return type
>>> Tokenizer.from_pretrained("bert-base-cased")

⚠️ As called out in breaking changes: stricter type info means previously-hidden type errors in user code may now surface under mypy --strict.


✨ Other features

  • Unigram sampling: models.Unigram now exposes alpha and nbest_size for subword regularization (parity with Google's implementation, #​1994). Closes long-standing requests #​730 and #​849.
  • Weakref support on Tokenizer (#​1958) — useful for long-lived caches that don't want to keep tokenizers alive.
  • CI benchmark regression detection on PRs (#​2013) — every PR runs ci_benchmark against the stored baseline and posts a comparison chart to the PR.
  • Longer-context Llama-3 benchmarks (#​1971) for tracking head-room on multi-thousand-token inputs.
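The weakref use case called out above can be sketched in pure Python — a cache that never keeps its entries alive on its own (the Tokenizer class here is a stand-in, not the real tokenizers.Tokenizer):

```python
import weakref

class Tokenizer:
    """Stand-in object; the point is that it supports weak references."""

    def __init__(self, name):
        self.name = name

# A long-lived cache whose entries vanish once no one else holds them.
cache = weakref.WeakValueDictionary()

tok = Tokenizer("bert-base-cased")
cache["bert-base-cased"] = tok
print("bert-base-cased" in cache)  # True while a strong reference exists

del tok  # drop the only strong reference
print("bert-base-cased" in cache)  # entry disappears automatically (CPython)
```

Before #​1958, putting a real Tokenizer into a WeakValueDictionary like this would raise TypeError because the class did not support weak references.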

🛠 Other fixes

  • EncodingVisualizer: unclosed annotation span fixed (#​1911), HTML escape applied to output (#​1937).
  • DecodeStream: __copy__ / __deepcopy__ (#​1930).
  • Pre-tokenize: removed an unnecessary to_vec() from slice (#​1964).
  • Replace wget / norvig URL with HF Hub downloads in test data fetch (#​2018).
  • uv support in the Python Makefile (#​1977).
  • Several security-pin bumps on workflow SHAs (#​2004, #​2005, #​2006, #​2016, #​2017).

👥 Contributors

Thanks to everyone who shipped commits between v0.22.2 and v0.23.1:

@​ArthurZucker, @​finnagin, @​gordonmessmer, @​jberg5, @​kennethsible, @​llukito, @​MayCXC, @​McPatate, @​michaelfeil, @​mrkm4ntr, @​musicinmybrain, @​ngoldbaum, @​OhashiReon, @​paulinebm, @​podarok, @​rtrompier, @​sebpop, @​Shivam-Bhardwaj, @​threexc, @​wheynelau, @​xanderlent — plus @​dependabot and @​hf-security-analysis for keeping pins fresh.


Full Changelog: huggingface/tokenizers@v0.22.2...v0.23.1

Kludex/uvicorn (uvicorn)

v0.46.0: Version 0.46.0

Compare Source

What's Changed

Full Changelog: Kludex/uvicorn@0.45.0...0.46.0

v0.45.0: Version 0.45.0

Compare Source

What's Changed

New Contributors

Full Changelog: Kludex/uvicorn@0.44.0...0.45.0


Configuration

📅 Schedule: (UTC)

  • Branch creation
    • "before 9am on tuesday"
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate Bot requested a review from mertalev as a code owner May 5, 2026 02:56
@renovate renovate Bot added changelog:skip dependencies Pull requests that update a dependency file renovate labels May 5, 2026
@renovate renovate Bot force-pushed the renovate/machine-learning branch 3 times, most recently from e4f21d1 to d621a26 Compare May 6, 2026 18:04


renovate Bot commented May 6, 2026

⚠️ Artifact update problem

Renovate failed to update artifacts related to this branch. You probably do not want to merge this PR as-is.

♻ Renovate will retry this branch, including artifacts, only when one of the following happens:

  • any of the package files in this branch needs updating, or
  • the branch becomes conflicted, or
  • you click the rebase/retry checkbox if found above, or
  • you rename this PR's title to start with "rebase!" to trigger it manually

The artifact failure details are included below:

File name: machine-learning/pyproject.toml
Artifact update for pydantic-settings resolved to version 2.14.1, which is a pending version that has not yet passed the Minimum Release Age threshold.
Renovate was attempting to update to 2.14.0
This is (likely) not a bug in Renovate, but due to the way your project pins dependencies, _and_ how Renovate calls your package manager to update them.
Until Renovate supports specifying an exact update to your package manager (https://github.com/renovatebot/renovate/issues/41624), it is recommended to directly pin your dependencies (with `rangeStrategy=pin` for apps, or `rangeStrategy=widen` for libraries)
See also: https://docs.renovatebot.com/dependency-pinning/
File name: machine-learning/pyproject.toml
Artifact update for types-requests resolved to version 2.33.0.20260508, which is a pending version that has not yet passed the Minimum Release Age threshold.
Renovate was attempting to update to 2.33.0.20260503
This is (likely) not a bug in Renovate, but due to the way your project pins dependencies, _and_ how Renovate calls your package manager to update them.
Until Renovate supports specifying an exact update to your package manager (https://github.com/renovatebot/renovate/issues/41624), it is recommended to directly pin your dependencies (with `rangeStrategy=pin` for apps, or `rangeStrategy=widen` for libraries)
See also: https://docs.renovatebot.com/dependency-pinning/
File name: machine-learning/pyproject.toml
Artifact update for onnxruntime resolved to version 1.26.0, which is a pending version that has not yet passed the Minimum Release Age threshold.
Renovate was attempting to update to 1.25.1
This is (likely) not a bug in Renovate, but due to the way your project pins dependencies, _and_ how Renovate calls your package manager to update them.
Until Renovate supports specifying an exact update to your package manager (https://github.com/renovatebot/renovate/issues/41624), it is recommended to directly pin your dependencies (with `rangeStrategy=pin` for apps, or `rangeStrategy=widen` for libraries)
See also: https://docs.renovatebot.com/dependency-pinning/
File name: machine-learning/pyproject.toml
Artifact update for onnxruntime-gpu resolved to version 1.26.0, which is a pending version that has not yet passed the Minimum Release Age threshold.
Renovate was attempting to update to 1.25.1
This is (likely) not a bug in Renovate, but due to the way your project pins dependencies, _and_ how Renovate calls your package manager to update them.
Until Renovate supports specifying an exact update to your package manager (https://github.com/renovatebot/renovate/issues/41624), it is recommended to directly pin your dependencies (with `rangeStrategy=pin` for apps, or `rangeStrategy=widen` for libraries)
See also: https://docs.renovatebot.com/dependency-pinning/
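The pinning recommendation in those messages maps to a one-line setting in renovate.json; a minimal sketch (rangeStrategy is a documented Renovate option, the schema reference is optional):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "rangeStrategy": "pin"
}
```

For libraries, the same key would be set to "widen" instead, per the recommendation above.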

@renovate renovate Bot force-pushed the renovate/machine-learning branch 3 times, most recently from f3514a9 to 3276dda Compare May 9, 2026 00:41
@renovate renovate Bot force-pushed the renovate/machine-learning branch 8 times, most recently from cd7d8a2 to 6bb2af0 Compare May 11, 2026 19:52
@renovate renovate Bot force-pushed the renovate/machine-learning branch from 6bb2af0 to 8bb73d8 Compare May 11, 2026 20:21