
RHAIENG-2852: Hermetic build for Jupyter tensorflow CUDA#3337

Merged
ysok merged 1 commit into opendatahub-io:main from ysok-opendatahub-io:odh-RHAIENG-2852-jupyter-tensorflow-cuda
Apr 16, 2026

Conversation

@ysok
Contributor

@ysok ysok commented Apr 10, 2026

RHAIENG-2852: Hermetic build for Jupyter tensorflow CUDA

Description

  • Hermetic Dockerfile.cuda / Dockerfile.konflux.cuda: Cachi2 gomod prefetch for mongocli, prefetched RPMs, openshift-clients installed via DNF, and uv pip install --no-index from requirements.cuda.txt.
  • Symlink jupyter/tensorflow/ubi9-python-3.12/prefetch-input → the repo-level prefetch-input directory.
  • Tekton push/PR pipelines: hermetic: true, prefetch-input params, single amd64 build on m4xlarge, and resourcing updates for the build, clair-scan, and ecosystem-cert-preflight-checks tasks.
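The symlink step above can be sketched as follows (a minimal illustration of the layout, using a scratch directory in place of the real repo root):

```shell
# recreate the layout: a repo-level prefetch-input directory shared by the image directory
mkdir -p repo/jupyter/tensorflow/ubi9-python-3.12 repo/prefetch-input
cd repo/jupyter/tensorflow/ubi9-python-3.12
# relative symlink back up to the repo-level prefetch-input
ln -sfn ../../../prefetch-input prefetch-input
readlink prefetch-input
```

A relative target keeps the link valid wherever the repository is checked out.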

How Has This Been Tested?

Self checklist (all need to be checked):

  • Ensure that you have run make test (gmake on macOS) before asking for review
  • Changes to everything except Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Chores

    • Switched builds to hermetic/offline mode and limited builds to a single amd64 target.
    • Added prefetched inputs for Go modules, RPMs and pip artifacts to enable offline builds.
    • Reduced compute requests/limits for scan/preflight tasks and set resource limits for build steps.
    • Added image expiry configuration for build outputs.
  • New Features

    • Updated Python tooling and refreshed pinned packages (micropipenv, uv, pip) with hermetic installs and updated lockfiles.

@openshift-ci openshift-ci Bot requested review from ayush17 and daniellutz on April 10, 2026 04:46
@github-actions github-actions Bot added the review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel label Apr 10, 2026
@github-actions
Contributor

@ysok — This PR is from a fork.
The build-rhoai CI job was skipped because subscription
builds (RHEL, AIPCC) need secrets unavailable to forks.
ODH builds and code quality checks still ran.

Recommended: Push your branch to the main repo for full CI:

git remote add upstream https://github.com/opendatahub-io/notebooks.git
git push upstream HEAD:ysok/your-branch-name

Then open a new PR from that branch.

No push access? A maintainer will cherry-pick and test your changes.

See CONTRIBUTING.md for details.

@openshift-ci openshift-ci Bot added the size/l label Apr 10, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Apr 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


PipelineRun manifests and multiple TensorFlow UBI9 Python 3.12 Dockerfiles were converted to hermetic/offline builds: build targets narrowed to a single amd64 platform; added hermetic: 'true', prefetch-input entries, and adjusted taskRunSpecs (resource limits and new/modified task/step specs). Dockerfiles now use prefetched Go modules, RPMs, and pip wheels from local caches under /cachi2/output/deps, remove external downloads, change USER switching, and add rpms.lock.yaml cache-busting COPY layers. Python dependency lists and uv/pylock outputs were refreshed to include uv, micropipenv, and pinned package updates.
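The rpms.lock.yaml cache-busting COPY layers mentioned above rely on the builder keying its layer cache on the copied file's content digest, so any lockfile change forces the dependent install layer to rerun. A toy sketch of that mechanism (the lockfile contents here are hypothetical):

```shell
# a COPY layer is cached by content digest; changing the lockfile changes the digest
printf 'packages:\n  - example-pkg-1.0\n' > rpms.lock.yaml
before=$(sha256sum rpms.lock.yaml | cut -d' ' -f1)
printf 'packages:\n  - example-pkg-1.1\n' > rpms.lock.yaml
after=$(sha256sum rpms.lock.yaml | cut -d' ' -f1)
if [ "$before" != "$after" ]; then
  # in a real build, the RUN dnf install layer after the COPY would rebuild here
  echo "layer cache invalidated"
fi
```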

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Security findings

  • CWE-347 (Improper Verification of Cryptographic Signature): pip installs switched to --no-verify-hashes and removed --require-hashes. Action: enforce artifact integrity by restoring hash verification or implementing signed provenance verification for prefetched wheels; validate pylock/requirements files before install.

  • CWE-347 / CWE-295 (Insufficient/Optional Key Validation): RPM GPG imports include || true, allowing silent continuation if key import fails. Action: fail the build on missing/invalid GPG keys for security-critical RPMs or explicitly document and mitigate any optional key use; validate imported keys’ fingerprints.

  • CWE-426 (Untrusted Search Path): GOPROXY and pip --find-links point to local caches without explicit integrity checks. Action: mount cache directories read-only in CI, verify checksums/signatures of cached artifacts prior to use, and record provenance metadata.

  • CWE-250 (Execution with Unnecessary Privileges): Multiple stages switch to USER root during installs. Action: minimize root scope, perform privileged steps in isolated build stages and ensure final images run non-root and have corrected ownership/permissions.

  • CWE-1104 (Use of Unmaintained Third-Party Components): New/updated packages (uv, micropipenv, newer pip) added. Action: run SCA/vulnerability scans on these exact versions, track CVEs, and approve or pin to vetted versions.

  • Operational risk (reduced scan resources): clair-scan and ecosystem-cert-preflight-checks resource limits reduced to 4 CPU / 8Gi. Action: validate scan completion under reduced resources, add retries/timeouts and alerting, and adjust limits if scans fail or time out.

Only actionable issues above are flagged; address integrity, key validation, least privilege, and dependency vulnerability verification.
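For the CWE-347 finding, one lightweight pre-install guard, independent of the pip/uv flags and assuming the pip-style `--hash=sha256:` pinning format, is to fail fast when any requirement line lacks a hash:

```shell
# toy requirements file: one entry pinned with a hash, one without (hypothetical names)
cat > requirements.check.txt <<'EOF'
numpy==2.1.0 --hash=sha256:aaaa
somepkg==1.0.0
EOF
# count requirement lines that are neither comments/blank nor carry a sha256 hash
missing=$(grep -cvE '(^[[:space:]]*(#|$)|--hash=sha256:)' requirements.check.txt || true)
echo "lines without hashes: $missing"
# a CI guard would exit non-zero here whenever $missing is greater than 0
```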

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Description check: ⚠️ Warning. PR description includes the ticket reference, a concise summary of changes (hermetic Dockerfiles, symlink, Tekton updates), and addresses testing via the provided template; however, all checklist items remain unchecked and testing details are missing. Resolution: check the self-checklist and merge criteria boxes, and provide concrete testing steps and environment details to confirm hermetic builds function correctly.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. Title uses imperative mood, includes the required JIRA ticket reference (RHAIENG-2852), and accurately describes the main change: implementing hermetic builds for Jupyter TensorFlow CUDA.
  • Branch Prefix Policy: ✅ Passed. PR targets the main branch with title 'RHAIENG-2852: Hermetic build for Jupyter tensorflow CUDA (PR #3337)'. The title contains a JIRA reference without a branch prefix, complying with main branch requirements.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot added size/l and removed size/l labels Apr 10, 2026
@codecov-commenter

codecov-commenter commented Apr 10, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 3.59%. Comparing base (dbc5451) to head (c68d38a).
✅ All tests successful. No failed tests found.

Additional details and impacted files

Impacted file tree graph

@@          Coverage Diff          @@
##            main   #3337   +/-   ##
=====================================
  Coverage   3.59%   3.59%           
=====================================
  Files         29      29           
  Lines       3310    3310           
  Branches     527     527           
=====================================
  Hits         119     119           
  Misses      3189    3189           
  Partials       2       2           
Flag Coverage Δ
python 3.59% <ø> (ø)

Flags with carried forward coverage won't be shown.


Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update dbc5451...c68d38a. Read the comment docs.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda (1)

79-83: Redundant GPG key imports in cuda-jupyter-minimal.

cuda-base (parent stage) already imports RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release at lines 49-50. Only the EPEL-9 key (line 82) is new here. The duplication is harmless but adds layer bloat.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 79
- 83, Remove the redundant GPG imports for RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release from the cuda-jupyter-minimal stage in
Dockerfile.konflux.cuda because the parent stage (cuda-base) already imports
them; keep only the EPEL key import (the RUN line importing RPM-GPG-KEY-EPEL-9).
Locate the three RUN lines that call "rpm --import" for
RPM-GPG-KEY-CentOS-Official, RPM-GPG-KEY-EPEL-9, and RPM-GPG-KEY-redhat-release
and delete the two lines referencing RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release so the stage no longer duplicates parent imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml:
- Around line 30-31: The pipeline step sets image-expires-after: 5d which will
expire push-built images after 5 days; update the Tekton task/step that contains
the image-expires-after key to either remove that key for push/main builds or
set a longer TTL (or make it conditional based on the trigger) so release
artifacts from push builds are not automatically deleted (edit the entry with
the image-expires-after field in the YAML and implement removal/conditional
logic or change the value to an appropriate duration).

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 170-175: The install command disables pip's hash checking with
--no-verify-hashes, weakening supply-chain integrity; remove that flag and
enable pip's hash verification by ensuring ./requirements.txt contains pinned
package hashes and passing the --require-hashes option (instead of
--no-verify-hashes) to the uv pip install invocation (the command starting with
"UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install
...") so that cached/prefetched packages are verified.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 168-175: The uv pip install invocation currently uses the flag
--no-verify-hashes which disables pip's hash checking; change this to
--require-hashes so pip enforces package hashes at install time
(defense-in-depth even with Cachi2 prefetch). Update the UV_NO_CACHE... uv pip
install command (the line with --no-verify-hashes) to use --require-hashes and
ensure requirements.cuda.txt contains the matching hashes for all entries
referenced by the --requirements=./requirements.txt argument.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 2512b24e-e91d-49d1-9997-1d86f587f744

📥 Commits

Reviewing files that changed from the base of the PR and between 99e07e7 and d96fa1c.

📒 Files selected for processing (5)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input

Comment on lines +30 to +31
- name: image-expires-after
value: 5d
Contributor


⚠️ Potential issue | 🟡 Minor

image-expires-after: 5d on push builds may cause unintended image deletion.

Push builds to main typically produce release artifacts. A 5-day expiration could delete images before downstream consumers pull them. Verify this is intentional or remove the expiration for push builds.
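A hedged sketch of the conditional-TTL idea (EVENT_TYPE and its values are stand-ins for illustration; in a real pipeline the trigger type would come from PipelineRun parameters, not an environment variable):

```shell
# gate the image TTL on the build trigger: short-lived for PRs, none for main pushes
EVENT_TYPE=push
case "$EVENT_TYPE" in
  pull_request) expires_after="5d" ;;  # scratch PR images may expire quickly
  *)            expires_after=""   ;;  # push/main release artifacts are kept
esac
echo "image-expires-after: ${expires_after:-<unset>}"
```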


Comment on lines +168 to 175
 UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
     --strict --no-deps --no-config --no-progress \
-    --require-hashes --compile-bytecode --index-strategy=unsafe-best-match \
-    --requirements=./pylock.toml
-### END Install software and packages
+    --compile-bytecode --index-strategy=unsafe-best-match \
+    --no-verify-hashes \
+    --find-links /cachi2/output/deps/pip \
+    --requirements=./requirements.txt

 # setup path for runtime configuration
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Does uv pip install support --require-hashes with requirements.txt files?

💡 Result:

Yes. uv pip install supports pip-style hash-checking for requirements.txt via --require-hashes (and it also verifies any hashes present even without --require-hashes, by default). When --require-hashes is enabled, every requirement must have hashes, and requirements must be pinned to exact versions (==...) or be direct-URL requirements; git/editable installs aren’t supported in this mode. [1]

Source:
[1] uv CLI reference (uv pip install / uv pip sync options: --require-hashes, hash verification behavior, and constraints) (docs.astral.sh)



🏁 Script executed:

#!/bin/bash
# Find the Dockerfile and check its directory structure
find . -name "*Dockerfile.konflux.cuda" -type f

Repository: opendatahub-io/notebooks

Length of output: 569


🏁 Script executed:

#!/bin/bash
# Search for requirements.txt and pylock files near the Dockerfile
find . -path "*tensorflow/ubi9-python-3.12*" -type f | head -20

Repository: opendatahub-io/notebooks

Length of output: 1360


🏁 Script executed:

#!/bin/bash
# Check if requirements.txt contains hashes
find . -name "requirements.txt" -path "*tensorflow/ubi9-python-3.12*" -exec cat {} \; | head -50

Repository: opendatahub-io/notebooks

Length of output: 50


🏁 Script executed:

#!/bin/bash
# Read the Dockerfile to see which requirements file is used
cat -n ./jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda | grep -A5 -B5 "requirements"

Repository: opendatahub-io/notebooks

Length of output: 1527


🏁 Script executed:

#!/bin/bash
# Check for pylock.toml in tensorflow directory
ls -la ./jupyter/tensorflow/ubi9-python-3.12/ | grep -E "(pylock|requirements)"

Repository: opendatahub-io/notebooks

Length of output: 139


🏁 Script executed:

#!/bin/bash
# Check if requirements.cuda.txt exists and read first 30 lines
if [ -f ./jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt ]; then
  echo "=== requirements.cuda.txt ===" 
  head -30 ./jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
fi

Repository: opendatahub-io/notebooks

Length of output: 3100


🏁 Script executed:

#!/bin/bash
# Check for pylock.toml
if [ -f ./jupyter/tensorflow/ubi9-python-3.12/pylock.toml ]; then
  echo "pylock.toml exists"
  head -20 ./jupyter/tensorflow/ubi9-python-3.12/pylock.toml
else
  echo "pylock.toml NOT found in tensorflow/ubi9-python-3.12"
fi

Repository: opendatahub-io/notebooks

Length of output: 120


--no-verify-hashes weakens supply chain integrity (CWE-494: Download of Code Without Integrity Check).

The requirements.cuda.txt file contains hashes for all packages and all requirements are pinned to exact versions. Replace --no-verify-hashes with --require-hashes to enforce hash verification at install time, even though Cachi2 performs verification at prefetch. This provides defense-in-depth against cache mount tampering.

Proposed fix
 UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
     --strict --no-deps --no-config --no-progress \
     --compile-bytecode --index-strategy=unsafe-best-match \
-    --no-verify-hashes \
+    --require-hashes \
     --find-links /cachi2/output/deps/pip \
     --requirements=./requirements.txt
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
     --strict --no-deps --no-config --no-progress \
     --compile-bytecode --index-strategy=unsafe-best-match \
-    --no-verify-hashes \
+    --require-hashes \
     --find-links /cachi2/output/deps/pip \
     --requirements=./requirements.txt

 # setup path for runtime configuration

@ysok ysok force-pushed the odh-RHAIENG-2852-jupyter-tensorflow-cuda branch from d96fa1c to 429c34e on April 10, 2026 05:06
@openshift-ci openshift-ci Bot added size/l and removed size/l labels Apr 10, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml (1)

90-105: Task-level vs step-level computeResources: intentional difference in approach.

clair-scan and ecosystem-cert-preflight-checks use task-level computeResources (applies to all steps), while build-images uses stepSpecs with step-specific resources. Both are valid Tekton v1 patterns, but the inconsistency may complicate future maintenance.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
around lines 90 - 105, The pipeline uses mixed resource declarations which is
inconsistent: pipeline tasks "clair-scan" and "ecosystem-cert-preflight-checks"
declare computeResources at the task level while "build-images" uses stepSpecs
with per-step resources; pick one consistent pattern and update the other tasks
to match—either convert "clair-scan" and "ecosystem-cert-preflight-checks" to
use stepSpecs with per-step computeResources (mirroring build-images) or move
build-images step-specific resources to a task-level computeResources
block—ensure you update the pipelineTaskName entries ("clair-scan",
"ecosystem-cert-preflight-checks", "build-images") accordingly and keep resource
requests/limits equivalent during the migration.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: bb1ab961-c640-42b2-b788-839d4541f321

📥 Commits

Reviewing files that changed from the base of the PR and between d96fa1c and 429c34e.

📒 Files selected for processing (8)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (3)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda

@ysok ysok force-pushed the odh-RHAIENG-2852-jupyter-tensorflow-cuda branch from 429c34e to cc6b650 on April 10, 2026 13:53
@openshift-ci openshift-ci Bot added size/l and removed size/l labels Apr 10, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2)

160-175: pylock.toml is dead input in this stage.

The install path consumes ./requirements.txt; nothing here reads ./pylock.toml. Keeping the copied lockfile and “from lockfile” comments makes the build look stricter than it is.

Proposed fix
-# Install Python packages and Jupyterlab extensions from lockfile (requirements.cuda.txt used only for Cachi2 prefetch)
-COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
+# Install Python packages and JupyterLab extensions from the fully pinned requirements file
 COPY ${TENSORFLOW_SOURCE_CODE}/requirements.${PYLOCK_FLAVOR}.txt ./requirements.txt
@@
-# Install Python packages from lockfile (hermetic: use Cachi2 prefetched pip deps)
-# All dependencies are explicitly listed in pylock.toml (--no-deps)
+# Install Python packages from the fully pinned requirements file (hermetic: use Cachi2 prefetched pip deps)
+# All dependencies are explicitly listed in requirements.txt (--no-deps)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 160 - 175,
The Dockerfile stage copies pylock.toml but the RUN uses only
./requirements.txt, so remove the dead input and misleading comment: delete the
COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml line and update the preceding comment text in the RUN block to
reflect that pip installs come only from requirements.${PYLOCK_FLAVOR}.txt (or,
alternatively, wire pylock.toml into the UV pip command if you intend to use the
lockfile); check the COPY of requirements.${PYLOCK_FLAVOR}.txt and the RUN block
(UV pip install --requirements=./requirements.txt) when making the change to
keep artifacts consistent.

61-61: Pin the bootstrap toolchain versions to match the lockfile.

The unpinned micropipenv and uv packages should reference the versions already tracked in uv.lock.d/pylock.cuda.toml. While --no-index --find-links typically resolves to a single prefetched wheel in hermetic builds, explicitly pinning these versions in the Dockerfile aligns with the documented principle that "every package is pinned by URL + SHA-256 checksum in committed lockfiles" (per the hermetic-guide.md). This ensures the bootstrap step is reproducible and auditable.

Proposed fix
-RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip "micropipenv[toml]" "uv"
+RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip \
+    "micropipenv[toml]==1.10.0" \
+    "uv==0.11.3"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` at line 61, The RUN pip
install line in Dockerfile.cuda installs unpinned bootstrap tools
"micropipenv[toml]" and "uv"; update that command to pin both packages to the
exact wheel URLs+hashes (or explicit versions) recorded in the lockfile
uv.lock.d/pylock.cuda.toml so the bootstrap step is reproducible and
auditable—replace the loose package specs in the RUN pip install command with
the matching entries from uv.lock.d/pylock.cuda.toml (use the exact URL and
SHA-256 or the exact version spec that appears in the lockfile for micropipenv
and uv).
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda (2)

158-173: pylock.toml is copied but never used in this stage.

This stage installs from ./requirements.txt, not from ./pylock.toml. Keeping the unused lockfile copy and the “from lockfile” comments here is misleading when debugging provenance or reproducibility.

Proposed fix
-# Install Python packages and Jupyterlab extensions from lockfile (requirements.cuda.txt used only for Cachi2 prefetch)
-COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
+# Install Python packages and JupyterLab extensions from the fully pinned requirements file
 COPY ${TENSORFLOW_SOURCE_CODE}/requirements.${PYLOCK_FLAVOR}.txt ./requirements.txt
@@
-# Install Python packages from lockfile (hermetic: use Cachi2 prefetched pip deps)
-# All dependencies are explicitly listed in pylock.toml (--no-deps)
+# Install Python packages from the fully pinned requirements file (hermetic: use Cachi2 prefetched pip deps)
+# All dependencies are explicitly listed in requirements.txt (--no-deps)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 158
- 173, The Dockerfile copies pylock.toml but never uses it (the RUN uses
./requirements.txt and the comment "from lockfile" is misleading); either remove
the COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml line and update the surrounding comment to reflect that
installation uses requirements.txt, or change the UV pip invocation (the UV pip
install command) to consume the lockfile (e.g., pass the appropriate
pylock/lockfile option) so pylock.toml is actually used; update or remove the
"from lockfile" text accordingly and ensure only the relevant file (pylock.toml
or requirements.txt) is copied and referenced.

61-61: Pin the bootstrap toolchain versions for reproducible hermetic builds.

Line 61 installs micropipenv and uv by name only, even though exact versions (1.10.0 and 0.11.3) are recorded in jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml. With --no-index --find-links, pip will select from available wheels in /cachi2/output/deps/pip. If the cache contains multiple versions, rebuilds can silently switch the installer toolchain, compromising build reproducibility.

Proposed fix
-RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip "micropipenv[toml]" "uv"
+RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip \
+    "micropipenv[toml]==1.10.0" \
+    "uv==0.11.3"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` at line 61, The
pip install in the Dockerfile.konflux.cuda currently installs "micropipenv" and
"uv" without versions which can lead to nondeterministic selection from
/cachi2/output/deps/pip; update the RUN pip install line to pin
micropipenv==1.10.0 and uv==0.11.3 (the versions recorded in
uv.lock.d/pylock.cuda.toml) so the builder selects the exact wheel; ensure the
pinned versions match the lockfile entries and keep the existing --no-index
--find-links flags.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml:
- Around line 50-62: The pipeline prefetch inputs omit the RHDS RPM source
referenced by the Dockerfiles, so add a hermetic input entry for
prefetch-input/rhds to the prefetch-input list to ensure the RHDS rpms.lock.yaml
is included; specifically, update the prefetch-input value to include an entry
with path: prefetch-input/rhds and type: rpm (matching how other RPM inputs are
declared) so jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda and
Dockerfile.konflux.cuda cache-busting on prefetch-input/rhds/rpms.lock.yaml is
covered.

---

Nitpick comments:
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 160-175: The Dockerfile stage copies pylock.toml but the RUN uses
only ./requirements.txt, so remove the dead input and misleading comment: delete
the COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml line and update the preceding comment text in the RUN block to
reflect that pip installs come only from requirements.${PYLOCK_FLAVOR}.txt (or,
alternatively, wire pylock.toml into the UV pip command if you intend to use the
lockfile); check the COPY of requirements.${PYLOCK_FLAVOR}.txt and the RUN block
(UV pip install --requirements=./requirements.txt) when making the change to
keep artifacts consistent.
- Line 61: The RUN pip install line in Dockerfile.cuda installs unpinned
bootstrap tools "micropipenv[toml]" and "uv"; update that command to pin both
packages to the exact wheel URLs+hashes (or explicit versions) recorded in the
lockfile uv.lock.d/pylock.cuda.toml so the bootstrap step is reproducible and
auditable—replace the loose package specs in the RUN pip install command with
the matching entries from uv.lock.d/pylock.cuda.toml (use the exact URL and
SHA-256 or the exact version spec that appears in the lockfile for micropipenv
and uv).

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 158-173: The Dockerfile copies pylock.toml but never uses it (the
RUN uses ./requirements.txt and the comment "from lockfile" is misleading);
either remove the COPY
${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
line and update the surrounding comment to reflect that installation uses
requirements.txt, or change the UV pip invocation (the UV pip install command)
to consume the lockfile (e.g., pass the appropriate pylock/lockfile option) so
pylock.toml is actually used; update or remove the "from lockfile" text
accordingly and ensure only the relevant file (pylock.toml or requirements.txt)
is copied and referenced.
- Line 61: The pip install in the Dockerfile.konflux.cuda currently installs
"micropipenv" and "uv" without versions which can lead to nondeterministic
selection from /cachi2/output/deps/pip; update the RUN pip install line to pin
micropipenv==1.10.0 and uv==0.11.3 (the versions recorded in
uv.lock.d/pylock.cuda.toml) so the builder selects the exact wheel; ensure the
pinned versions match the lockfile entries and keep the existing --no-index
--find-links flags.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: c8d8f097-19b1-4e7b-9f90-f988c93dfb61

📥 Commits

Reviewing files that changed from the base of the PR and between 429c34e and cc6b650.

📒 Files selected for processing (8)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
🚧 Files skipped from review as they are similar to previous changes (1)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml

@openshift-ci openshift-ci Bot added size/l and removed size/l labels Apr 13, 2026

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml (1)

50-62: ⚠️ Potential issue | 🟠 Major

Add the missing prefetch-input/rhds RPM source.

This PipelineRun still does not declare prefetch-input/rhds, even though both CUDA Dockerfiles bust cache on prefetch-input/rhds/rpms.lock.yaml. That leaves the RHDS RPM set outside the hermetic input contract and can still force a network fetch or reuse stale data.
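This class of drift — a Dockerfile cache-busting on a prefetch path the PipelineRun never declares — can be caught mechanically. A minimal sketch, with both path sets hard-coded as assumptions mirroring the situation above rather than parsed from the real files:

```python
# Prefetch paths the CUDA Dockerfiles reference (via cache-busting COPYs)
# versus paths declared under the PipelineRun's prefetch-input parameter.
dockerfile_refs = {"prefetch-input/mongocli", "prefetch-input/odh", "prefetch-input/rhds"}
declared = {"prefetch-input/mongocli", "prefetch-input/odh"}

# Anything referenced but undeclared falls outside the hermetic input contract.
missing = sorted(dockerfile_refs - declared)
if missing:
    print(f"undeclared hermetic inputs: {missing}")
```

Run against these sets it flags `prefetch-input/rhds`, the entry this comment asks to add.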

Proposed fix
   - name: prefetch-input
     value:
     - path: prefetch-input/mongocli
       type: gomod
     - path: prefetch-input/odh
       type: rpm
     - path: prefetch-input/odh
       type: generic
+    - path: prefetch-input/rhds
+      type: rpm
     - path: jupyter/tensorflow/ubi9-python-3.12
       type: pip
       binary:
         arch: x86_64
       requirements_files: [requirements.cuda.txt]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
around lines 50 - 62, The prefetch-input list is missing the rhds RPM source
causing RHDS rpms.lock.yaml cache busts to be unaccounted for; add a new entry
matching the other RPM entries by inserting an item with path:
prefetch-input/rhds and type: rpm into the prefetch-input value array (alongside
prefetch-input/mongocli and prefetch-input/odh) so the RHDS RPM set is declared
in the PipelineRun inputs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml:
- Around line 50-62: The prefetch-input list is missing the rhds RPM source
causing RHDS rpms.lock.yaml cache busts to be unaccounted for; add a new entry
matching the other RPM entries by inserting an item with path:
prefetch-input/rhds and type: rpm into the prefetch-input value array (alongside
prefetch-input/mongocli and prefetch-input/odh) so the RHDS RPM set is declared
in the PipelineRun inputs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: fe419d87-6842-46ae-9423-658e00806ba5

📥 Commits

Reviewing files that changed from the base of the PR and between cc6b650 and 78118a8.

📒 Files selected for processing (8)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml

@ysok ysok force-pushed the odh-RHAIENG-2852-jupyter-tensorflow-cuda branch from 78118a8 to f58feae on April 16, 2026 at 12:40

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 47-50: The RPM key import line using "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true" is silently
masking import failures; remove the "|| true" and replace this behavior with an
explicit existence check and failing import handling: ensure the Dockerfile's
rpm import step (the "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official" and the "rpm --import
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" commands) only skips when the key
file is truly absent (e.g., test for file presence) but fails the build if the
file exists and rpm returns an error, so import errors surface and stop the
build instead of being swallowed.
- Around line 170-175: The uv pip install invocation (the command using
UV_NO_CACHE... uv pip install) must explicitly target the system Python because
setting VIRTUAL_ENV alone is not sufficient; update the uv invocation (the uv
pip install command) to include the flags --python /usr/bin/python3.12 --system
so uv uses the base interpreter rather than expecting an active venv, keeping
the rest of the flags (--no-index, --strict, --no-deps, --no-config,
--no-progress, --compile-bytecode, --index-strategy=unsafe-best-match,
--no-verify-hashes, --find-links, --requirements) unchanged.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 47-50: The Dockerfile currently silences rpm import failures with
"|| true" (affecting the rpm --import of RPM-GPG-KEY-CentOS-Official), which can
hide corrupted or unreadable keys; replace the suppression with an existence
check for the optional CentOS key and only skip import if the file truly doesn't
exist (e.g., test -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official
then run rpm --import), and remove "|| true" so that the mandatory import of
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release runs without suppression and fails
the build on error; also add a clear echo/log message before failing imports to
provide actionable failure context for RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)

Review profile: CHILL

Plan: Pro Plus

Run ID: 1ccf47ef-07ab-4952-a824-f61e128fc4fc

📥 Commits

Reviewing files that changed from the base of the PR and between 78118a8 and f58feae.

📒 Files selected for processing (8)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
  • jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
  • jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
  • jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (2)
  • jupyter/tensorflow/ubi9-python-3.12/prefetch-input
  • jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1)
  • .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml

Comment on lines +47 to +50
# [HERMETIC] Import GPG keys for prefetched RPM verification.
# CentOS key imported only if prefetched (present in Dockerfile.cpu; may be absent in Konflux).
RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

⚠️ Potential issue | 🟠 Major

Stop swallowing RPM key import failures (CWE-345).

|| true turns a broken trust-root import into a silent success. If one of these prefetched key files is present but truncated, unreadable, or otherwise invalid, the build keeps going with an incomplete RPM verification setup.

Proposed fix
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+    fi
 RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+    fi
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9; \
+    fi
 RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
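The skip-if-absent behavior in the proposed fix can be exercised in isolation; the key path here is a deliberately nonexistent placeholder, not the real Cachi2 location:

```shell
KEY=/tmp/nonexistent-example-gpg-key   # placeholder; real keys live under /cachi2/output/deps/generic
if [ -f "$KEY" ]; then
    # Import runs unguarded, so a present-but-corrupt key fails the build.
    rpm --import "$KEY"
else
    echo "skip: $KEY not prefetched"
fi
```

A missing file prints the skip message and exits 0; a present file makes the build's success depend on `rpm --import` itself, which is the point of dropping `|| true`.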

As per coding guidelines: "REVIEW PRIORITIES: 1. Security vulnerabilities (provide severity, exploit scenario, and remediation code)."

Also applies to: 79-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 47 - 50,
The RPM key import line using "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true" is silently
masking import failures; remove the "|| true" and replace this behavior with an
explicit existence check and failing import handling: ensure the Dockerfile's
rpm import step (the "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official" and the "rpm --import
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" commands) only skips when the key
file is truly absent (e.g., test for file presence) but fails the build if the
file exists and rpm returns an error, so import errors surface and stop the
build instead of being swallowed.

Comment on lines +170 to +175
 UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
     --strict --no-deps --no-config --no-progress \
-    --require-hashes --compile-bytecode --index-strategy=unsafe-best-match \
-    --requirements=./pylock.toml
-### END Install software and packages
+    --compile-bytecode --index-strategy=unsafe-best-match \
+    --no-verify-hashes \
+    --find-links /cachi2/output/deps/pip \
+    --requirements=./requirements.txt

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the specific Dockerfile
fd -t f "Dockerfile.cuda" | grep -i "ubi9-python-3.12"

Repository: opendatahub-io/notebooks

Length of output: 612


🏁 Script executed:

#!/bin/bash
# Examine the file at the specified lines
if [ -f "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" ]; then
    echo "=== File content around lines 170-175 ==="
    sed -n '165,180p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"
    echo ""
    echo "=== Check base image and full context ==="
    head -20 "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"
fi

Repository: opendatahub-io/notebooks

Length of output: 1533


🏁 Script executed:

#!/bin/bash
# Search for other uv pip install patterns in the codebase to understand usage
rg "uv pip install" --type dockerfile -B 2 -A 2

Repository: opendatahub-io/notebooks

Length of output: 102


🏁 Script executed:

#!/bin/bash
# Search for other uv pip install patterns without type restriction
rg "uv pip install" -B 2 -A 2 | head -100

Repository: opendatahub-io/notebooks

Length of output: 10425


🏁 Script executed:

#!/bin/bash
# Check more context around the uv pip install command and see what comes after
sed -n '170,190p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"

Repository: opendatahub-io/notebooks

Length of output: 1145


🏁 Script executed:

#!/bin/bash
# Check if there's any explicit FROM statement and WORKDIR to understand the Python environment
grep -n "^FROM\|^WORKDIR\|^ENV.*PYTHON\|^RUN.*python" "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" | head -30

Repository: opendatahub-io/notebooks

Length of output: 589


🏁 Script executed:

#!/bin/bash
# Look for where BASE_IMAGE is set/defined
grep -r "BASE_IMAGE" jupyter/tensorflow/ubi9-python-3.12/ | head -20

Repository: opendatahub-io/notebooks

Length of output: 644


🏁 Script executed:

#!/bin/bash
# Check if there's a build context or script that defines BASE_IMAGE
find . -name "*.py" -o -name "*.sh" | xargs grep -l "BASE_IMAGE.*ubi9.*3.12" | head -5

Repository: opendatahub-io/notebooks

Length of output: 50


🏁 Script executed (no clone):

# Search for uv pip documentation about --system vs venv behavior
curl -s "https://docs.astral.sh/uv/pip/environments/" | grep -i "system\|venv\|virtual" | head -20

Length of output: 2234


🏁 Script executed:

#!/bin/bash
# Search for documentation or comments about uv behavior in the repo
rg "uv pip|--system|--python" -B 3 -A 3 | grep -i "virtual\|venv\|system\|default" | head -30

Repository: opendatahub-io/notebooks

Length of output: 18839


🏁 Script executed:

#!/bin/bash
# Check the Dockerfile more completely to see the Python setup and where uv is installed
sed -n '1,100p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" | grep -n "python\|uv\|WORKDIR\|RUN\|FROM"

Repository: opendatahub-io/notebooks

Length of output: 1015


🏁 Script executed:

#!/bin/bash
# Check if any other uv pip install commands in the codebase use --system or --python
rg "uv pip install.*--system|uv pip install.*--python" -A 2 -B 2

Repository: opendatahub-io/notebooks

Length of output: 50


🏁 Script executed:

#!/bin/bash
# Check the base image Dockerfile to see if VIRTUAL_ENV or venv is set up there
find . -path "*/base-images/cuda/*/ubi9-python-3.12/Dockerfile*" | head -1 | xargs head -50

Repository: opendatahub-io/notebooks

Length of output: 1204


🏁 Script executed:

#!/bin/bash
# Also check if there's any documentation about the build process or environment setup
grep -r "VIRTUAL_ENV\|venv\|--system" --include="*.md" --include="*.txt" | grep -i "uv\|python" | head -10

Repository: opendatahub-io/notebooks

Length of output: 165


Explicitly target system Python for uv pip install.

uv requires an active virtual environment or explicit Python targeting by default. Setting VIRTUAL_ENV environment variable alone is insufficient—uv will not use it without activation or explicit flags. This command will fail or behave unpredictably if the base image doesn't implicitly provide a detected environment. Make the Python target explicit with --python /usr/bin/python3.12 --system.

Proposed fix
-UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
+UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --python /usr/bin/python3.12 --system --no-index \
     --strict --no-deps --no-config --no-progress \
     --compile-bytecode --index-strategy=unsafe-best-match \
     --no-verify-hashes \
     --find-links /cachi2/output/deps/pip \
     --requirements=./requirements.txt
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 170 - 175,
The uv pip install invocation (the command using UV_NO_CACHE... uv pip install)
must explicitly target the system Python because setting VIRTUAL_ENV alone is
not sufficient; update the uv invocation (the uv pip install command) to include
the flags --python /usr/bin/python3.12 --system so uv uses the base interpreter
rather than expecting an active venv, keeping the rest of the flags (--no-index,
--strict, --no-deps, --no-config, --no-progress, --compile-bytecode,
--index-strategy=unsafe-best-match, --no-verify-hashes, --find-links,
--requirements) unchanged.

Comment on lines +47 to +50
# [HERMETIC] Import GPG keys for prefetched RPM verification.
# CentOS key imported only if prefetched (present in Dockerfile.cpu; may be absent in Konflux).
RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

⚠️ Potential issue | 🟠 Major

Stop swallowing RPM key import failures (CWE-345).

|| true hides both the expected “key not present” case and the bad cases: corrupted key material, unreadable files, or rpm import errors. That lets the build continue with a silently incomplete trust bootstrap before hermetic RPM installs.

Proposed fix
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+    fi
 RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+    fi
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 ]; then \
+      rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9; \
+    fi
 RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

As per coding guidelines: "REVIEW PRIORITIES: 1. Security vulnerabilities (provide severity, exploit scenario, and remediation code)."

Also applies to: 79-83

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 47
- 50, The Dockerfile currently silences rpm import failures with "|| true"
(affecting the rpm --import of RPM-GPG-KEY-CentOS-Official), which can hide
corrupted or unreadable keys; replace the suppression with an existence check
for the optional CentOS key and only skip import if the file truly doesn't exist
(e.g., test -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official then run
rpm --import), and remove "|| true" so that the mandatory import of
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release runs without suppression and fails
the build on error; also add a clear echo/log message before failing imports to
provide actionable failure context for RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release.

@openshift-ci

openshift-ci Bot commented Apr 16, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jiridanek

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ysok ysok force-pushed the odh-RHAIENG-2852-jupyter-tensorflow-cuda branch from f58feae to a55db07 on April 16, 2026 at 19:30
@openshift-ci openshift-ci Bot removed the lgtm label Apr 16, 2026
@openshift-ci

openshift-ci Bot commented Apr 16, 2026

New changes are detected. LGTM label has been removed.

- Hermetic Dockerfile.cuda / Dockerfile.konflux.cuda: Cachi2 gomod mongocli, prefetched RPMs, DNF openshift-clients, uv pip install --no-index from requirements.cuda.txt.
- Symlink jupyter/tensorflow/ubi9-python-3.12/prefetch-input → repo prefetch-input.
- Tekton push/PR: hermetic: true, prefetch-input params, amd64 m4xlarge, build/clair/ecosystem resourcing updates.
@ysok ysok force-pushed the odh-RHAIENG-2852-jupyter-tensorflow-cuda branch from a55db07 to c68d38a on April 16, 2026 at 19:48
@openshift-ci openshift-ci Bot added size/xl and removed size/xl labels Apr 16, 2026
@ysok

ysok commented Apr 16, 2026

@ysok ysok merged commit 253ec7d into opendatahub-io:main Apr 16, 2026
20 of 23 checks passed
@ysok

ysok commented Apr 16, 2026

Merged too soon, which caused the build pipeline to be cancelled, but the actual build-image step was successful:

[5/5] COMMIT quay.io/opendatahub/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9:on-pr-c68d38a45e7f40329b496af0b68902656ffb4ac0-linux-d160-m4xlarge-amd64 --> 0100dea536d3
[Warning] one or more build args were not consumed: [INDEX_URL]
Successfully tagged quay.io/opendatahub/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9:on-pr-c68d38a45e7f40329b496af0b68902656ffb4ac0-linux-d160-m4xlarge-amd64
0100dea536d354c68bf84f33a14a95f53673d513ea575fdaa8eccf5329f87449
[2026-04-16T20:18:30,816952676+00:00] Unsetting proxy
[2026-04-16T20:18:30,818710958+00:00] Add metadata
Making copy of sbom-prefetch.json
Recording base image digests used


Labels

approved review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel size/xl


3 participants