
[rls-v3.12] build: exclude Xe3p from default DNNL_ENABLE_PRIMITIVE_GPU_ISA#5105

Open
echeresh wants to merge 1 commit into rls-v3.12 from echeresh/xe3p-opt-in

Conversation

@echeresh (Contributor)

For background see MFDNN-15003.

Opening a PR to discuss and to map out potential side effects of making Xe3p support opt-in.

@github-actions bot added labels documentation (A request to change/fix/improve the documentation. Codeowner: @oneapi-src/onednn-doc), platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel), backport, component:build, component:common on Apr 30, 2026

#if BUILD_PRIMITIVE_GPU_ISA_ALL || BUILD_XE3P
// XE3P is excluded from BUILD_PRIMITIVE_GPU_ISA_ALL, opt in explicitly.
#if BUILD_XE3P
@rjoursler (Contributor), May 1, 2026

Do we need any gating on OpenCL implementations? For the most part, I would expect those to just work, but there are some workarounds for driver issues.

Contributor

We don't have any control on driver version at the user end, so I'd say no.

# Use oneDNN names for ALL to ensure string replacement functions correctly
set(GPUS ${DNNL_ENABLE_PRIMITIVE_GPU_ISA})
string(REPLACE "ALL" "XELP;XEHP;XEHPG;XEHPC;XE2;XE3;XE3P" GPUS "${GPUS}")
string(REPLACE "ALL" "XELP;XEHP;XEHPG;XEHPC;XE2;XE3" GPUS "${GPUS}")
Contributor
Would it be reasonable to use a runtime guard (via an environment variable) rather than a compile-time one? That way we could still use the same build for testing purposes.

Contributor

From my perspective, runtime opt-in is the best choice, but it would be challenging to design and implement in time for the v3.12 RTM.

@echeresh (Contributor Author)

What we can do before the RTM is opt in via the ONEDNN_ENABLE_MAX_GPU_ISA environment variable (similar to the CPU side). To limit the scope:

  1. No API, just the environment variable
  2. Opt-in only via the environment variable, with no changes to DNNL_ENABLE_PRIMITIVE_GPU_ISA behavior

@vpirogov @rjoursler Do you see issues with any of the bullets? Are frameworks fine with environment variable control for opt-in (if they still want to enable Xe3p platforms)?

@vpirogov (Contributor), May 1, 2026

I would rather focus on developing a proper guard, consistent with the way we handle environment variables, for v3.13. Build-time opt-in looks adequate for v3.12.

@echeresh (Contributor Author) commented May 1, 2026

make test

@echeresh echeresh marked this pull request as ready for review May 6, 2026 00:35
@echeresh echeresh requested review from a team as code owners May 6, 2026 00:35
@echeresh (Contributor Author) commented May 6, 2026

make test

@dzarukin (Contributor) left a comment

LGTM.

4 participants