from-307-dockerize-the-best-run-from-HPO-study #309
Open
Labels
audience/technical — Issue primarily for technical review and service.
kind/cicd — CICD, dev ops, platform ops, etc.
kind/performance
kind/production-and-commercialization — Tasks for commercial distribution PaaS / SaaS and scale.
kind/text-generative-ai — Development of generative AI capabilities.
status/ready-pending-tests — Ready to make pull request once tests pass.
triage/high-priority
TLDR: Package the settings from the best run in Docker.
Issue
We have found the optimal model in the HPO study done on branch 307. We need to package this run in a suitable GPU-enabled container to scale it up and control the dependencies.
Task
Make this run in the tensorflow/tensorflow:2.20.0-gpu image.
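A minimal Dockerfile sketch for this, assuming the repo layout (a src/ directory, a requirements.txt, a train.py entry point, and an exported best_run_params.json with the winning HPO settings) — all of these file names are hypothetical placeholders, not confirmed paths:

```dockerfile
# Base image requested in the task: TensorFlow 2.20.0 with GPU support
FROM tensorflow/tensorflow:2.20.0-gpu

WORKDIR /app

# Pin any extra dependencies beyond what the base image ships (hypothetical file)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the code and the frozen best-run hyperparameters (hypothetical paths)
COPY src/ ./src/
COPY best_run_params.json .

# Run training/serving with the HPO-selected settings (hypothetical script/flag)
ENTRYPOINT ["python", "src/train.py", "--params", "best_run_params.json"]
```

Building and running would then look like `docker build -t best-run .` followed by `docker run --gpus all best-run`; note that `--gpus all` requires the NVIDIA Container Toolkit on the host.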
To do: