RHAIENG-2852: Hermetic build for Jupyter tensorflow CUDA#3337
Conversation
@ysok — This PR is from a fork. Recommended: push your branch to the main repo for full CI, then open a new PR from that branch. No push access? A maintainer will cherry-pick and test your changes. See CONTRIBUTING.md for details.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
Use the following commands to manage reviews, or the checkboxes below for quick actions:
📝 Walkthrough
PipelineRun manifests and multiple TensorFlow UBI9 Python 3.12 Dockerfiles were converted to hermetic/offline builds: build targets narrowed to a single amd64 platform.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Security findings
Only actionable issues above are flagged; address integrity, key validation, least privilege, and dependency vulnerability verification.
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Codecov Report — ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
@@ Coverage Diff @@
## main #3337 +/- ##
=====================================
Coverage 3.59% 3.59%
=====================================
Files 29 29
Lines 3310 3310
Branches 527 527
=====================================
Hits 119 119
Misses 3189 3189
Partials 2 2
Flags with carried forward coverage won't be shown. Continue to review the full report in Codecov by Sentry.
Actionable comments posted: 3
🧹 Nitpick comments (1)
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda (1)
79-83: Redundant GPG key imports in cuda-jupyter-minimal.
The cuda-base (parent stage) already imports RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release at lines 49-50. Only the EPEL-9 key (line 82) is new here. The duplication is harmless but adds layer bloat.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 79 - 83, Remove the redundant GPG imports for RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release from the cuda-jupyter-minimal stage in Dockerfile.konflux.cuda because the parent stage (cuda-base) already imports them; keep only the EPEL key import (the RUN line importing RPM-GPG-KEY-EPEL-9). Locate the three RUN lines that call "rpm --import" for RPM-GPG-KEY-CentOS-Official, RPM-GPG-KEY-EPEL-9, and RPM-GPG-KEY-redhat-release and delete the two lines referencing RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release so the stage no longer duplicates parent imports.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml:
- Around line 30-31: The pipeline step sets image-expires-after: 5d which will
expire push-built images after 5 days; update the Tekton task/step that contains
the image-expires-after key to either remove that key for push/main builds or
set a longer TTL (or make it conditional based on the trigger) so release
artifacts from push builds are not automatically deleted (edit the entry with
the image-expires-after field in the YAML and implement removal/conditional
logic or change the value to an appropriate duration).
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 170-175: The install command disables pip's hash checking with
--no-verify-hashes, weakening supply-chain integrity; remove that flag and
enable pip's hash verification by ensuring ./requirements.txt contains pinned
package hashes and passing the --require-hashes option (instead of
--no-verify-hashes) to the uv pip install invocation (the command starting with
"UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install
...") so that cached/prefetched packages are verified.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 168-175: The uv pip install invocation currently uses the flag
--no-verify-hashes which disables pip's hash checking; change this to
--require-hashes so pip enforces package hashes at install time
(defense-in-depth even with Cachi2 prefetch). Update the UV_NO_CACHE... uv pip
install command (the line with --no-verify-hashes) to use --require-hashes and
ensure requirements.cuda.txt contains the matching hashes for all entries
referenced by the --requirements=./requirements.txt argument.
---
Nitpick comments:
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 79-83: Remove the redundant GPG imports for
RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release from the
cuda-jupyter-minimal stage in Dockerfile.konflux.cuda because the parent stage
(cuda-base) already imports them; keep only the EPEL key import (the RUN line
importing RPM-GPG-KEY-EPEL-9). Locate the three RUN lines that call "rpm
--import" for RPM-GPG-KEY-CentOS-Official, RPM-GPG-KEY-EPEL-9, and
RPM-GPG-KEY-redhat-release and delete the two lines referencing
RPM-GPG-KEY-CentOS-Official and RPM-GPG-KEY-redhat-release so the stage no
longer duplicates parent imports.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 2512b24e-e91d-49d1-9997-1d86f587f744
📒 Files selected for processing (5)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
    - name: image-expires-after
      value: 5d
image-expires-after: 5d on push builds may cause unintended image deletion.
Push builds to main typically produce release artifacts. A 5-day expiration could delete images before downstream consumers pull them. Verify this is intentional or remove the expiration for push builds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
around lines 30 - 31, The pipeline step sets image-expires-after: 5d which will
expire push-built images after 5 days; update the Tekton task/step that contains
the image-expires-after key to either remove that key for push/main builds or
set a longer TTL (or make it conditional based on the trigger) so release
artifacts from push builds are not automatically deleted (edit the entry with
the image-expires-after field in the YAML and implement removal/conditional
logic or change the value to an appropriate duration).
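One way to realize the conditional TTL the comment suggests is a sketch like the following, where the short expiration lives only in the pull-request PipelineRun and the push variant omits the parameter entirely (the layout and values here are illustrative; only the image-expires-after key and the 5d value come from the PR):

```yaml
# Pull-request PipelineRun: short-lived CI images may expire quickly.
params:
  - name: image-expires-after
    value: 5d

# Push PipelineRun: simply omit the image-expires-after parameter
# (or set a long TTL) so images built from main are retained.
params: []
```

This avoids trigger-conditional logic inside a single file by keeping the two PipelineRun variants self-describing.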
UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
    --strict --no-deps --no-config --no-progress \
-   --require-hashes --compile-bytecode --index-strategy=unsafe-best-match \
-   --requirements=./pylock.toml
-### END Install software and packages
+   --compile-bytecode --index-strategy=unsafe-best-match \
+   --no-verify-hashes \
+   --find-links /cachi2/output/deps/pip \
+   --requirements=./requirements.txt

# setup path for runtime configuration
🧩 Analysis chain
🌐 Web query:
Does uv pip install support --require-hashes with requirements.txt files?
💡 Result:
Yes. uv pip install supports pip-style hash-checking for requirements.txt via --require-hashes (and it also verifies any hashes present even without --require-hashes, by default). When --require-hashes is enabled, every requirement must have hashes, and requirements must be pinned to exact versions (==...) or be direct-URL requirements; git/editable installs aren’t supported in this mode. [1]
Source:
[1] uv CLI reference (uv pip install / uv pip sync options: --require-hashes, hash verification behavior, and constraints) (docs.astral.sh)
🏁 Script executed:
#!/bin/bash
# Find the Dockerfile and check its directory structure
find . -name "*Dockerfile.konflux.cuda" -type f
Repository: opendatahub-io/notebooks
Length of output: 569
🏁 Script executed:
#!/bin/bash
# Search for requirements.txt and pylock files near the Dockerfile
find . -path "*tensorflow/ubi9-python-3.12*" -type f | head -20
Repository: opendatahub-io/notebooks
Length of output: 1360
🏁 Script executed:
#!/bin/bash
# Check if requirements.txt contains hashes
find . -name "requirements.txt" -path "*tensorflow/ubi9-python-3.12*" -exec cat {} \; | head -50
Repository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Read the Dockerfile to see which requirements file is used
cat -n ./jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda | grep -A5 -B5 "requirements"
Repository: opendatahub-io/notebooks
Length of output: 1527
🏁 Script executed:
#!/bin/bash
# Check for pylock.toml in tensorflow directory
ls -la ./jupyter/tensorflow/ubi9-python-3.12/ | grep -E "(pylock|requirements)"
Repository: opendatahub-io/notebooks
Length of output: 139
🏁 Script executed:
#!/bin/bash
# Check if requirements.cuda.txt exists and read first 30 lines
if [ -f ./jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt ]; then
echo "=== requirements.cuda.txt ==="
head -30 ./jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
fi
Repository: opendatahub-io/notebooks
Length of output: 3100
🏁 Script executed:
#!/bin/bash
# Check for pylock.toml
if [ -f ./jupyter/tensorflow/ubi9-python-3.12/pylock.toml ]; then
echo "pylock.toml exists"
head -20 ./jupyter/tensorflow/ubi9-python-3.12/pylock.toml
else
echo "pylock.toml NOT found in tensorflow/ubi9-python-3.12"
fi
Repository: opendatahub-io/notebooks
Length of output: 120
--no-verify-hashes weakens supply chain integrity (CWE-494: Download of Code Without Integrity Check).
The requirements.cuda.txt file contains hashes for all packages and all requirements are pinned to exact versions. Replace --no-verify-hashes with --require-hashes to enforce hash verification at install time, even though Cachi2 performs verification at prefetch. This provides defense-in-depth against cache mount tampering.
Proposed fix
UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
--strict --no-deps --no-config --no-progress \
--compile-bytecode --index-strategy=unsafe-best-match \
- --no-verify-hashes \
+ --require-hashes \
--find-links /cachi2/output/deps/pip \
   --requirements=./requirements.txt
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current (diff context):
UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
    --strict --no-deps --no-config --no-progress \
    --require-hashes --compile-bytecode --index-strategy=unsafe-best-match \
    --requirements=./pylock.toml
### END Install software and packages
    --compile-bytecode --index-strategy=unsafe-best-match \
    --no-verify-hashes \
    --find-links /cachi2/output/deps/pip \
    --requirements=./requirements.txt
# setup path for runtime configuration

Suggested:
UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
    --strict --no-deps --no-config --no-progress \
    --compile-bytecode --index-strategy=unsafe-best-match \
    --require-hashes \
    --find-links /cachi2/output/deps/pip \
    --requirements=./requirements.txt
# setup path for runtime configuration
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 168
- 175, The uv pip install invocation currently uses the flag --no-verify-hashes
which disables pip's hash checking; change this to --require-hashes so pip
enforces package hashes at install time (defense-in-depth even with Cachi2
prefetch). Update the UV_NO_CACHE... uv pip install command (the line with
--no-verify-hashes) to use --require-hashes and ensure requirements.cuda.txt
contains the matching hashes for all entries referenced by the
--requirements=./requirements.txt argument.
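Before switching the flag, it can help to confirm that every logical entry in the requirements file actually carries a hash, since --require-hashes aborts the install on the first entry without one. A minimal sketch (the check_hashes helper is hypothetical and not part of this repo; it assumes pip's --hash=sha256: format):

```shell
# Hypothetical helper: print how many pinned entries in a pip-style
# requirements file lack a --hash= option. A result of 0 means
# --require-hashes should be satisfiable for every entry.
check_hashes() {
  # 1. Join backslash-continued lines into one logical line each.
  # 2. Keep only "name==version" requirement lines.
  # 3. Count the logical lines that carry no sha256 hash.
  sed -e ':a' -e '/\\$/N' -e 's/\\\n//' -e 'ta' "$1" |
    grep -E '^[A-Za-z0-9][A-Za-z0-9._-]*(\[[^]]*\])?==' |
    grep -vc -- '--hash=sha256:' || true   # grep exits 1 when the count is 0
}
```

For example, running check_hashes against requirements.cuda.txt and seeing 0 would indicate the file is ready for --require-hashes.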
Force-pushed from d96fa1c to 429c34e.
🧹 Nitpick comments (1)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml (1)
90-105: Task-level vs step-level computeResources: intentional difference in approach.
clair-scanandecosystem-cert-preflight-checksuse task-levelcomputeResources(applies to all steps), whilebuild-imagesusesstepSpecswith step-specific resources. Both are valid Tekton v1 patterns, but the inconsistency may complicate future maintenance.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml around lines 90 - 105, The pipeline uses mixed resource declarations which is inconsistent: pipeline tasks "clair-scan" and "ecosystem-cert-preflight-checks" declare computeResources at the task level while "build-images" uses stepSpecs with per-step resources; pick one consistent pattern and update the other tasks to match—either convert "clair-scan" and "ecosystem-cert-preflight-checks" to use stepSpecs with per-step computeResources (mirroring build-images) or move build-images step-specific resources to a task-level computeResources block—ensure you update the pipelineTaskName entries ("clair-scan", "ecosystem-cert-preflight-checks", "build-images") accordingly and keep resource requests/limits equivalent during the migration.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml:
- Around line 90-105: The pipeline uses mixed resource declarations which is
inconsistent: pipeline tasks "clair-scan" and "ecosystem-cert-preflight-checks"
declare computeResources at the task level while "build-images" uses stepSpecs
with per-step resources; pick one consistent pattern and update the other tasks
to match—either convert "clair-scan" and "ecosystem-cert-preflight-checks" to
use stepSpecs with per-step computeResources (mirroring build-images) or move
build-images step-specific resources to a task-level computeResources
block—ensure you update the pipelineTaskName entries ("clair-scan",
"ecosystem-cert-preflight-checks", "build-images") accordingly and keep resource
requests/limits equivalent during the migration.
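For reference, the two Tekton v1 patterns the comment contrasts look roughly like this inside a PipelineRun's taskRunSpecs (the resource values and the step name "build" are illustrative, not taken from the PR):

```yaml
spec:
  taskRunSpecs:
    # Task-level: one computeResources block applies to every step of the task.
    - pipelineTaskName: clair-scan
      computeResources:
        requests:
          cpu: 500m
          memory: 1Gi
    # Step-level: stepSpecs scopes resources to individual named steps.
    - pipelineTaskName: build-images
      stepSpecs:
        - name: build
          computeResources:
            requests:
              cpu: "4"
              memory: 8Gi
```

Either form works; picking one consistently just makes future resource tuning a single-pattern change.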
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: bb1ab961-c640-42b2-b788-839d4541f321
📒 Files selected for processing (8)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (3)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
Force-pushed from 429c34e to cc6b650.
Actionable comments posted: 1
🧹 Nitpick comments (4)
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda (2)
160-175: pylock.toml is dead input in this stage.
The install path consumes ./requirements.txt; nothing here reads ./pylock.toml. Keeping the copied lockfile and "from lockfile" comments makes the build look stricter than it is.
Proposed fix
-# Install Python packages and Jupyterlab extensions from lockfile (requirements.cuda.txt used only for Cachi2 prefetch)
-COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
+# Install Python packages and JupyterLab extensions from the fully pinned requirements file
 COPY ${TENSORFLOW_SOURCE_CODE}/requirements.${PYLOCK_FLAVOR}.txt ./requirements.txt
@@
-# Install Python packages from lockfile (hermetic: use Cachi2 prefetched pip deps)
-# All dependencies are explicitly listed in pylock.toml (--no-deps)
+# Install Python packages from the fully pinned requirements file (hermetic: use Cachi2 prefetched pip deps)
+# All dependencies are explicitly listed in requirements.txt (--no-deps)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 160 - 175, The Dockerfile stage copies pylock.toml but the RUN uses only ./requirements.txt, so remove the dead input and misleading comment: delete the COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml line and update the preceding comment text in the RUN block to reflect that pip installs come only from requirements.${PYLOCK_FLAVOR}.txt (or, alternatively, wire pylock.toml into the UV pip command if you intend to use the lockfile); check the COPY of requirements.${PYLOCK_FLAVOR}.txt and the RUN block (UV pip install --requirements=./requirements.txt) when making the change to keep artifacts consistent.
61: Pin the bootstrap toolchain versions to match the lockfile.
The unpinned micropipenv and uv packages should reference the versions already tracked in uv.lock.d/pylock.cuda.toml. While --no-index --find-links typically resolves to a single prefetched wheel in hermetic builds, explicitly pinning these versions in the Dockerfile aligns with the documented principle that "every package is pinned by URL + SHA-256 checksum in committed lockfiles" (per the hermetic-guide.md). This ensures the bootstrap step is reproducible and auditable.
Proposed fix
-RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip "micropipenv[toml]" "uv"
+RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip \
+    "micropipenv[toml]==1.10.0" \
+    "uv==0.11.3"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` at line 61, The RUN pip install line in Dockerfile.cuda installs unpinned bootstrap tools "micropipenv[toml]" and "uv"; update that command to pin both packages to the exact wheel URLs+hashes (or explicit versions) recorded in the lockfile uv.lock.d/pylock.cuda.toml so the bootstrap step is reproducible and auditable; replace the loose package specs in the RUN pip install command with the matching entries from uv.lock.d/pylock.cuda.toml (use the exact URL and SHA-256 or the exact version spec that appears in the lockfile for micropipenv and uv).
jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda (2)
158-173: pylock.toml is copied but never used in this stage.
This stage installs from ./requirements.txt, not from ./pylock.toml. Keeping the unused lockfile copy and the "from lockfile" comments here is misleading when debugging provenance or reproducibility.
Proposed fix
-# Install Python packages and Jupyterlab extensions from lockfile (requirements.cuda.txt used only for Cachi2 prefetch)
-COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
+# Install Python packages and JupyterLab extensions from the fully pinned requirements file
 COPY ${TENSORFLOW_SOURCE_CODE}/requirements.${PYLOCK_FLAVOR}.txt ./requirements.txt
@@
-# Install Python packages from lockfile (hermetic: use Cachi2 prefetched pip deps)
-# All dependencies are explicitly listed in pylock.toml (--no-deps)
+# Install Python packages from the fully pinned requirements file (hermetic: use Cachi2 prefetched pip deps)
+# All dependencies are explicitly listed in requirements.txt (--no-deps)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 158 - 173, The Dockerfile copies pylock.toml but never uses it (the RUN uses ./requirements.txt and the comment "from lockfile" is misleading); either remove the COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml line and update the surrounding comment to reflect that installation uses requirements.txt, or change the UV pip invocation (the UV pip install command) to consume the lockfile (e.g., pass the appropriate pylock/lockfile option) so pylock.toml is actually used; update or remove the "from lockfile" text accordingly and ensure only the relevant file (pylock.toml or requirements.txt) is copied and referenced.
61: Pin the bootstrap toolchain versions for reproducible hermetic builds.
Line 61 installs micropipenv and uv by name only, even though exact versions (1.10.0 and 0.11.3) are recorded in jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml. With --no-index --find-links, pip will select from available wheels in /cachi2/output/deps/pip. If the cache contains multiple versions, rebuilds can silently switch the installer toolchain, compromising build reproducibility.
Proposed fix
-RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip "micropipenv[toml]" "uv"
+RUN pip install --no-cache-dir --no-index --find-links /cachi2/output/deps/pip \
+    "micropipenv[toml]==1.10.0" \
+    "uv==0.11.3"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` at line 61, The pip install in the Dockerfile.konflux.cuda currently installs "micropipenv" and "uv" without versions which can lead to nondeterministic selection from /cachi2/output/deps/pip; update the RUN pip install line to pin micropipenv==1.10.0 and uv==0.11.3 (the versions recorded in uv.lock.d/pylock.cuda.toml) so the builder selects the exact wheel; ensure the pinned versions match the lockfile entries and keep the existing --no-index --find-links flags.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml:
- Around line 50-62: The pipeline prefetch inputs omit the RHDS RPM source
referenced by the Dockerfiles, so add a hermetic input entry for
prefetch-input/rhds to the prefetch-input list to ensure the RHDS rpms.lock.yaml
is included; specifically, update the prefetch-input value to include an entry
with path: prefetch-input/rhds and type: rpm (matching how other RPM inputs are
declared) so jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda and
Dockerfile.konflux.cuda cache-busting on prefetch-input/rhds/rpms.lock.yaml is
covered.
---
Nitpick comments:
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 160-175: The Dockerfile stage copies pylock.toml but the RUN uses
only ./requirements.txt, so remove the dead input and misleading comment: delete
the COPY ${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml
./pylock.toml line and update the preceding comment text in the RUN block to
reflect that pip installs come only from requirements.${PYLOCK_FLAVOR}.txt (or,
alternatively, wire pylock.toml into the UV pip command if you intend to use the
lockfile); check the COPY of requirements.${PYLOCK_FLAVOR}.txt and the RUN block
(UV pip install --requirements=./requirements.txt) when making the change to
keep artifacts consistent.
- Line 61: The RUN pip install line in Dockerfile.cuda installs unpinned
bootstrap tools "micropipenv[toml]" and "uv"; update that command to pin both
packages to the exact wheel URLs+hashes (or explicit versions) recorded in the
lockfile uv.lock.d/pylock.cuda.toml so the bootstrap step is reproducible and
auditable—replace the loose package specs in the RUN pip install command with
the matching entries from uv.lock.d/pylock.cuda.toml (use the exact URL and
SHA-256 or the exact version spec that appears in the lockfile for micropipenv
and uv).
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 158-173: The Dockerfile copies pylock.toml but never uses it (the
RUN uses ./requirements.txt and the comment "from lockfile" is misleading);
either remove the COPY
${TENSORFLOW_SOURCE_CODE}/uv.lock.d/pylock.${PYLOCK_FLAVOR}.toml ./pylock.toml
line and update the surrounding comment to reflect that installation uses
requirements.txt, or change the UV pip invocation (the UV pip install command)
to consume the lockfile (e.g., pass the appropriate pylock/lockfile option) so
pylock.toml is actually used; update or remove the "from lockfile" text
accordingly and ensure only the relevant file (pylock.toml or requirements.txt)
is copied and referenced.
- Line 61: The pip install in the Dockerfile.konflux.cuda currently installs
"micropipenv" and "uv" without versions which can lead to nondeterministic
selection from /cachi2/output/deps/pip; update the RUN pip install line to pin
micropipenv==1.10.0 and uv==0.11.3 (the versions recorded in
uv.lock.d/pylock.cuda.toml) so the builder selects the exact wheel; ensure the
pinned versions match the lockfile entries and keep the existing --no-index
--find-links flags.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: c8d8f097-19b1-4e7b-9f90-f988c93dfb61
📒 Files selected for processing (8)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
🚧 Files skipped from review as they are similar to previous changes (1)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
Force-pushed from cc6b650 to 78118a8.
♻️ Duplicate comments (1)
.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml (1)
50-62: ⚠️ Potential issue | 🟠 Major
Add the missing prefetch-input/rhds RPM source.
This PipelineRun still does not declare prefetch-input/rhds, even though both CUDA Dockerfiles bust cache on prefetch-input/rhds/rpms.lock.yaml. That leaves the RHDS RPM set outside the hermetic input contract and can still force a network fetch or reuse stale data.
Proposed fix
  - name: prefetch-input
    value:
      - path: prefetch-input/mongocli
        type: gomod
      - path: prefetch-input/odh
        type: rpm
      - path: prefetch-input/odh
        type: generic
+     - path: prefetch-input/rhds
+       type: rpm
      - path: jupyter/tensorflow/ubi9-python-3.12
        type: pip
        binary:
          arch: x86_64
        requirements_files: [requirements.cuda.txt]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml around lines 50 - 62, The prefetch-input list is missing the rhds RPM source causing RHDS rpms.lock.yaml cache busts to be unaccounted for; add a new entry matching the other RPM entries by inserting an item with path: prefetch-input/rhds and type: rpm into the prefetch-input value array (alongside prefetch-input/mongocli and prefetch-input/odh) so the RHDS RPM set is declared in the PipelineRun inputs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In
@.tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml:
- Around line 50-62: The prefetch-input list is missing the rhds RPM source
causing RHDS rpms.lock.yaml cache busts to be unaccounted for; add a new entry
matching the other RPM entries by inserting an item with path:
prefetch-input/rhds and type: rpm into the prefetch-input value array (alongside
prefetch-input/mongocli and prefetch-input/odh) so the RHDS RPM set is declared
in the PipelineRun inputs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: fe419d87-6842-46ae-9423-658e00806ba5
📒 Files selected for processing (8)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (3)
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
Force-pushed from 78118a8 to f58feae.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda`:
- Around line 47-50: The RPM key import line using "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true" is silently
masking import failures; remove the "|| true" and replace this behavior with an
explicit existence check and failing import handling: ensure the Dockerfile's
rpm import step (the "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official" and the "rpm --import
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" commands) only skips when the key
file is truly absent (e.g., test for file presence) but fails the build if the
file exists and rpm returns an error, so import errors surface and stop the
build instead of being swallowed.
- Around line 170-175: The uv pip install invocation (the command using
UV_NO_CACHE... uv pip install) must explicitly target the system Python because
setting VIRTUAL_ENV alone is not sufficient; update the uv invocation (the uv
pip install command) to include the flags --python /usr/bin/python3.12 --system
so uv uses the base interpreter rather than expecting an active venv, keeping
the rest of the flags (--no-index, --strict, --no-deps, --no-config,
--no-progress, --compile-bytecode, --index-strategy=unsafe-best-match,
--no-verify-hashes, --find-links, --requirements) unchanged.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda`:
- Around line 47-50: The Dockerfile currently silences rpm import failures with
"|| true" (affecting the rpm --import of RPM-GPG-KEY-CentOS-Official), which can
hide corrupted or unreadable keys; replace the suppression with an existence
check for the optional CentOS key and only skip import if the file truly doesn't
exist (e.g., test -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official
then run rpm --import), and remove "|| true" so that the mandatory import of
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release runs without suppression and fails
the build on error; also add a clear echo/log message before failing imports to
provide actionable failure context for RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited), Repository UI (inherited)
Review profile: CHILL
Plan: Pro Plus
Run ID: 1ccf47ef-07ab-4952-a824-f61e128fc4fc
📒 Files selected for processing (8)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-pull-request.yaml
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda
- jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
- jupyter/tensorflow/ubi9-python-3.12/requirements.cuda.txt
- jupyter/tensorflow/ubi9-python-3.12/uv.lock.d/pylock.cuda.toml
✅ Files skipped from review due to trivial changes (2)
- jupyter/tensorflow/ubi9-python-3.12/prefetch-input
- jupyter/tensorflow/ubi9-python-3.12/pyproject.toml
🚧 Files skipped from review as they are similar to previous changes (1)
- .tekton/odh-workbench-jupyter-tensorflow-cuda-py312-ubi9-odh-main-push.yaml
# [HERMETIC] Import GPG keys for prefetched RPM verification.
# CentOS key imported only if prefetched (present in Dockerfile.cpu; may be absent in Konflux).
RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Stop swallowing RPM key import failures (CWE-345).
|| true turns a broken trust-root import into a silent success. If one of these prefetched key files is present but truncated, unreadable, or otherwise invalid, the build keeps going with an incomplete RPM verification setup.
Proposed fix
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+ fi
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+ fi
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9; \
+ fi
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

As per coding guidelines: REVIEW PRIORITIES: 1. Security vulnerabilities (provide severity, exploit scenario, and remediation code).
Also applies to: 79-83
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 47 - 50,
The RPM key import line using "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true" is silently
masking import failures; remove the "|| true" and replace this behavior with an
explicit existence check and failing import handling: ensure the Dockerfile's
rpm import step (the "rpm --import
/cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official" and the "rpm --import
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" commands) only skips when the key
file is truly absent (e.g., test for file presence) but fails the build if the
file exists and rpm returns an error, so import errors surface and stop the
build instead of being swallowed.
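The guarded-import pattern this comment asks for can be sketched as a small shell helper. The function name and log messages here are illustrative, not taken from the actual Dockerfile:

```shell
# Hypothetical helper: skip only when the key file is genuinely absent,
# but let rpm's exit status fail the build when an existing key is bad.
import_key_if_present() {
  key="$1"
  if [ ! -f "$key" ]; then
    echo "skipping absent key: $key"
    return 0
  fi
  echo "importing key: $key"
  rpm --import "$key"   # no '|| true': a corrupt or unreadable key fails here
}
```

In a Dockerfile, each call would sit in its own RUN instruction, so a non-zero exit from `rpm --import` stops the build instead of being swallowed.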
 UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
     --strict --no-deps --no-config --no-progress \
-    --require-hashes --compile-bytecode --index-strategy=unsafe-best-match \
-    --requirements=./pylock.toml
+    --compile-bytecode --index-strategy=unsafe-best-match \
+    --no-verify-hashes \
+    --find-links /cachi2/output/deps/pip \
+    --requirements=./requirements.txt
 ### END Install software and packages
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the specific Dockerfile
fd -t f "Dockerfile.cuda" | grep -i "ubi9-python-3.12"

Repository: opendatahub-io/notebooks
Length of output: 612
🏁 Script executed:
#!/bin/bash
# Examine the file at the specified lines
if [ -f "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" ]; then
echo "=== File content around lines 170-175 ==="
sed -n '165,180p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"
echo ""
echo "=== Check base image and full context ==="
head -20 "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"
fi

Repository: opendatahub-io/notebooks
Length of output: 1533
🏁 Script executed:
#!/bin/bash
# Search for other uv pip install patterns in the codebase to understand usage
rg "uv pip install" --type dockerfile -B 2 -A 2

Repository: opendatahub-io/notebooks
Length of output: 102
🏁 Script executed:
#!/bin/bash
# Search for other uv pip install patterns without type restriction
rg "uv pip install" -B 2 -A 2 | head -100

Repository: opendatahub-io/notebooks
Length of output: 10425
🏁 Script executed:
#!/bin/bash
# Check more context around the uv pip install command and see what comes after
sed -n '170,190p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda"

Repository: opendatahub-io/notebooks
Length of output: 1145
🏁 Script executed:
#!/bin/bash
# Check if there's any explicit FROM statement and WORKDIR to understand the Python environment
grep -n "^FROM\|^WORKDIR\|^ENV.*PYTHON\|^RUN.*python" "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" | head -30

Repository: opendatahub-io/notebooks
Length of output: 589
🏁 Script executed:
#!/bin/bash
# Look for where BASE_IMAGE is set/defined
grep -r "BASE_IMAGE" jupyter/tensorflow/ubi9-python-3.12/ | head -20

Repository: opendatahub-io/notebooks
Length of output: 644
🏁 Script executed:
#!/bin/bash
# Check if there's a build context or script that defines BASE_IMAGE
find . -name "*.py" -o -name "*.sh" | xargs grep -l "BASE_IMAGE.*ubi9.*3.12" | head -5

Repository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed (no clone):
# Search for uv pip documentation about --system vs venv behavior
curl -s "https://docs.astral.sh/uv/pip/environments/" | grep -i "system\|venv\|virtual" | head -20
Length of output: 2234
🏁 Script executed:
#!/bin/bash
# Search for documentation or comments about uv behavior in the repo
rg "uv pip|--system|--python" -B 3 -A 3 | grep -i "virtual\|venv\|system\|default" | head -30

Repository: opendatahub-io/notebooks
Length of output: 18839
🏁 Script executed:
#!/bin/bash
# Check the Dockerfile more completely to see the Python setup and where uv is installed
sed -n '1,100p' "jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda" | grep -n "python\|uv\|WORKDIR\|RUN\|FROM"

Repository: opendatahub-io/notebooks
Length of output: 1015
🏁 Script executed:
#!/bin/bash
# Check if any other uv pip install commands in the codebase use --system or --python
rg "uv pip install.*--system|uv pip install.*--python" -A 2 -B 2

Repository: opendatahub-io/notebooks
Length of output: 50
🏁 Script executed:
#!/bin/bash
# Check the base image Dockerfile to see if VIRTUAL_ENV or venv is set up there
find . -path "*/base-images/cuda/*/ubi9-python-3.12/Dockerfile*" | head -1 | xargs head -50

Repository: opendatahub-io/notebooks
Length of output: 1204
🏁 Script executed:
#!/bin/bash
# Also check if there's any documentation about the build process or environment setup
grep -r "VIRTUAL_ENV\|venv\|--system" --include="*.md" --include="*.txt" | grep -i "uv\|python" | head -10

Repository: opendatahub-io/notebooks
Length of output: 165
Explicitly target system Python for uv pip install.
uv requires an active virtual environment or explicit Python targeting by default. Setting VIRTUAL_ENV environment variable alone is insufficient—uv will not use it without activation or explicit flags. This command will fail or behave unpredictably if the base image doesn't implicitly provide a detected environment. Make the Python target explicit with --python /usr/bin/python3.12 --system.
Proposed fix
-UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --no-index \
+UV_NO_CACHE=true UV_LINK_MODE=copy UV_PREVIEW_FEATURES=pylock uv pip install --python /usr/bin/python3.12 --system --no-index \
--strict --no-deps --no-config --no-progress \
--compile-bytecode --index-strategy=unsafe-best-match \
--no-verify-hashes \
--find-links /cachi2/output/deps/pip \
    --requirements=./requirements.txt

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.cuda` around lines 170 - 175,
The uv pip install invocation (the command using UV_NO_CACHE... uv pip install)
must explicitly target the system Python because setting VIRTUAL_ENV alone is
not sufficient; update the uv invocation (the uv pip install command) to include
the flags --python /usr/bin/python3.12 --system so uv uses the base interpreter
rather than expecting an active venv, keeping the rest of the flags (--no-index,
--strict, --no-deps, --no-config, --no-progress, --compile-bytecode,
--index-strategy=unsafe-best-match, --no-verify-hashes, --find-links,
--requirements) unchanged.
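Under the assumptions in the comment (base interpreter at /usr/bin/python3.12, wheels prefetched under /cachi2/output/deps/pip), the corrected step might look like the following Dockerfile sketch; this is not a verified drop-in, just the suggested flags assembled in one place:

```dockerfile
# Sketch only: target the base interpreter explicitly instead of relying
# on the VIRTUAL_ENV variable being honored without activation.
RUN UV_NO_CACHE=true UV_LINK_MODE=copy uv pip install \
      --python /usr/bin/python3.12 --system \
      --no-index --strict --no-deps --no-config --no-progress \
      --compile-bytecode --index-strategy=unsafe-best-match \
      --no-verify-hashes \
      --find-links /cachi2/output/deps/pip \
      --requirements=./requirements.txt
```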
# [HERMETIC] Import GPG keys for prefetched RPM verification.
# CentOS key imported only if prefetched (present in Dockerfile.cpu; may be absent in Konflux).
RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Stop swallowing RPM key import failures (CWE-345).
|| true hides both the expected “key not present” case and the bad cases: corrupted key material, unreadable files, or rpm import errors. That lets the build continue with a silently incomplete trust bootstrap before hermetic RPM installs.
Proposed fix
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+ fi
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
@@
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official || true
-RUN rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 || true
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official; \
+ fi
+RUN if [ -f /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9 ]; then \
+ rpm --import /cachi2/output/deps/generic/RPM-GPG-KEY-EPEL-9; \
+ fi
RUN rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

As per coding guidelines: REVIEW PRIORITIES: 1. Security vulnerabilities (provide severity, exploit scenario, and remediation code).
Also applies to: 79-83
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@jupyter/tensorflow/ubi9-python-3.12/Dockerfile.konflux.cuda` around lines 47
- 50, The Dockerfile currently silences rpm import failures with "|| true"
(affecting the rpm --import of RPM-GPG-KEY-CentOS-Official), which can hide
corrupted or unreadable keys; replace the suppression with an existence check
for the optional CentOS key and only skip import if the file truly doesn't exist
(e.g., test -f /cachi2/output/deps/generic/RPM-GPG-KEY-CentOS-Official then run
rpm --import), and remove "|| true" so that the mandatory import of
/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release runs without suppression and fails
the build on error; also add a clear echo/log message before failing imports to
provide actionable failure context for RPM-GPG-KEY-CentOS-Official and
RPM-GPG-KEY-redhat-release.
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jiridanek

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed f58feae to a55db07
New changes are detected. LGTM label has been removed.
- Hermetic Dockerfile.cuda / Dockerfile.konflux.cuda: Cachi2 gomod mongocli, prefetched RPMs, DNF openshift-clients, uv pip install --no-index from requirements.cuda.txt.
- Symlink jupyter/tensorflow/ubi9-python-3.12/prefetch-input → repo prefetch-input.
- Tekton push/PR: hermetic: true, prefetch-input params, amd64 m4xlarge, build/clair/ecosystem resourcing updates.
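As one concrete illustration of the prefetched-RPM step described in this commit, a hermetic DNF install might look like the sketch below. The repository path, repo id, and setopt names are assumptions about the cachi2 output layout, not copied from the actual Dockerfile:

```dockerfile
# Hypothetical hermetic install: disable all network repos and install
# only from a cachi2-prefetched local repository, with GPG checks on.
RUN dnf install -y \
      --disablerepo='*' \
      --repofrompath=prefetched,/cachi2/output/deps/rpm \
      --setopt=prefetched.gpgcheck=1 \
      openshift-clients && \
    dnf clean all
```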
Force-pushed a55db07 to c68d38a
Merged too soon, which caused the build pipeline to be cancelled, but the actual build-image task was a success:
RHAIENG-2852: Hermetic build for Jupyter tensorflow CUDA
Description
How Has This Been Tested?
Self checklist (all need to be checked):
- Run make test (gmake on macOS) before asking for review.
- Changes to Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks, as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit
Chores
New Features