feat: lazy TF session init - defer GPU probe until model load#1501

Open
xiaden wants to merge 1 commit into MTG:master from xiaden:lazy-tf-init

Conversation


@xiaden xiaden commented Feb 19, 2026

Defer TensorFlow graph/session creation from the constructor to configure().

Description

Previously, TensorFlow session creation occurred in the constructor, causing GPU device probing (and associated log output) when essentia.standard was imported — even if no TensorFlow-based algorithms were used.

This change defers TF object creation until TensorflowPredict::configure() is called with a valid model path.

Behavior Change

  • Before: GPU initialization triggered at import essentia.standard
  • After: GPU initialization occurs only when a TensorFlow model is actually configured

The session is still created before compute() runs, so runtime behavior for valid usage remains unchanged.
The only observable difference is that GPU probing logs now appear at model load time instead of import time.

Notes

  • Adds guards in the destructor and reset() to handle uninitialized TF objects safely.
  • Prevents unnecessary GPU probing when TensorFlow algorithms are registered but never used.

The changes were implemented with AI assistance, and I’m not deeply familiar with all invariants in this codebase, so I’d appreciate careful review.

Contributor

palonso commented Mar 4, 2026

Hi @xiaden,
I've built and tested the PR locally, and it looks good to me.

@dbogdanov, we can merge it.
