feat: separate all and default workspace scopes #2019

Open
yottahmd wants to merge 32 commits into main from workspace-scoped-access

Conversation


@yottahmd yottahmd commented Apr 20, 2026

Summary

  • add explicit workspace scopes for all data, default data, and named workspaces in the API
  • update the global workspace selector and workspace-aware pages to use explicit scope queries instead of overloaded empty workspace values
  • keep creation and document mutations out of the all scope, while preserving legacy workspace query compatibility
  • make E2E helpers seed the persisted default workspace scope and create users with explicit all-workspace access
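
For reference, the scope/param validation described above can be sketched as follows. This is illustrative Go, not the actual handler code; the function and constant names are made up, but the accessible/none/workspace values match the new WorkspaceScope enum in the schema:

```go
package main

import (
	"errors"
	"fmt"
)

// WorkspaceScope mirrors the enum added in this PR: "accessible" (all data
// the caller can see), "none" (default/no-workspace data), and "workspace"
// (one named workspace).
type WorkspaceScope string

const (
	ScopeAccessible WorkspaceScope = "accessible"
	ScopeNone       WorkspaceScope = "none"
	ScopeWorkspace  WorkspaceScope = "workspace"
)

// parseWorkspaceScope validates the (workspaceScope, workspace) query pair:
// a named workspace is required with scope=workspace and rejected otherwise.
func parseWorkspaceScope(scope, workspace string) (WorkspaceScope, string, error) {
	switch WorkspaceScope(scope) {
	case ScopeAccessible, ScopeNone:
		if workspace != "" {
			return "", "", errors.New("workspace param is only valid with workspaceScope=workspace")
		}
		return WorkspaceScope(scope), "", nil
	case ScopeWorkspace:
		if workspace == "" {
			return "", "", errors.New("workspaceScope=workspace requires a workspace name")
		}
		return ScopeWorkspace, workspace, nil
	default:
		return "", "", fmt.Errorf("unknown workspaceScope %q", scope)
	}
}

func main() {
	s, w, err := parseWorkspaceScope("workspace", "data-eng")
	fmt.Println(s, w, err) // workspace data-eng <nil>
}
```

The point of the explicit pair is that an empty workspace value is no longer overloaded to mean "everything": each of the three meanings has its own scope value.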

Testing

  • make fmt
  • go test ./internal/service/frontend/api/v1
  • go test ./internal/persis/filedag ./internal/persis/filedagrun ./internal/core/exec
  • pnpm typecheck
  • pnpm test
  • pnpm build
  • pnpm exec vitest run src/lib/tests/workspace.test.ts src/features/search/components/tests/SearchResult.test.tsx
  • pnpm test:e2e dag-crud.spec.ts

Summary by CodeRabbit

  • New Features

    • Workspace scoping: new workspaceScope/workspace query parameters, per-item Workspace shown for DAGs, DAG-runs and docs, workspace-scoped user & API key access, and a Workspace Access editor in the UI.
  • Behavior Changes

    • Listings, searches, feeds, SSE, queues, links, doc flows and execution/write operations now respect workspace scope and enforce per-workspace permissions; mutation APIs accept explicit mutation scope.
  • Tests

    • Expanded unit, integration and E2E tests for workspace parsing, validation, filtering, authorization and persistence.


coderabbitai bot commented Apr 20, 2026

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a7df7555-584a-49f9-a587-7b794bf1cb75

📥 Commits

Reviewing files that changed from the base of the PR and between f072546 and 2406276.

📒 Files selected for processing (26)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/docs_test.go
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/workspace_access.go
  • internal/service/frontend/api/v1/workspace_access_test.go
  • ui/src/api/v1/schema.ts
  • ui/src/contexts/AuthContext.tsx
  • ui/src/features/agent/components/DAGPicker.tsx
  • ui/src/features/agent/components/DocPicker.tsx
  • ui/src/features/agent/components/__tests__/WorkspaceScopedPickers.test.tsx
  • ui/src/features/dags/components/dag-details/DAGDetailsContent.tsx
  • ui/src/features/dags/components/dag-details/DAGHeader.tsx
  • ui/src/features/dags/components/dag-editor/DAGEditButtons.tsx
  • ui/src/features/dags/components/dag-editor/DAGSpec.tsx
  • ui/src/features/search/components/SearchResult.tsx
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • ui/src/hooks/useDocSSE.ts
  • ui/src/lib/__tests__/workspace.test.ts
  • ui/src/lib/__tests__/workspaceAccess.test.ts
  • ui/src/lib/workspace.ts
  • ui/src/lib/workspaceAccess.ts
  • ui/src/pages/dags/dag/index.tsx
  • ui/src/pages/docs/components/DocEditor.tsx
  • ui/src/pages/docs/index.tsx

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Add workspace-scoped access controls, request parameters, filtering, response metadata, validation, and UI workspace selection/editing; propagate workspace enforcement into stores, services, auth models, and tests.

Changes

Cohort / File(s) Summary
API schema & frontend types
api/v1/api.yaml, ui/src/api/v1/schema.ts
Add WorkspaceName/WorkspaceGrant/WorkspaceAccess schemas, WorkspaceScope & WorkspaceMutationScope enums, workspaceScope/workspace parameters, and optional workspace fields on many response types; require workspaceAccess on User/APIKey payloads.
Auth model & persistence
internal/auth/workspace_access.go, internal/auth/workspace_access_test.go, internal/auth/user.go, internal/auth/apikey.go
Introduce WorkspaceAccess/WorkspaceGrant types and utilities (Normalize/Clone/Validate/EffectiveRole/HasWorkspaceAccess); add WorkspaceAccess to User and APIKey and persist via cloned copies; unit tests added.
Service auth wiring
internal/service/auth/service.go, internal/service/frontend/auth/middleware.go, internal/service/oidcprovision/service.go
Create/update inputs accept WorkspaceAccess; validate via auth.ValidateWorkspaceAccess; clone before persisting; middleware/OIDC provisioning populate WorkspaceAccess on synthetic users.
Frontend API workspace helpers & tests
internal/service/frontend/api/v1/workspace_access.go, internal/service/frontend/api/v1/workspace_access_test.go
Parse/validate workspace scope params, convert API↔auth models, build exec.WorkspaceFilter and require-* helpers; tests for parsing/param handling.
Endpoint enforcement & transforms
internal/service/frontend/api/v1/*.go, internal/service/frontend/api/v1/transformer.go
Replace global permission checks with per-workspace checks, enforce workspace visibility on reads and mutations, validate/persist workspaceAccess in user/API key flows, and include derived workspace in API responses.
Core exec workspace utilities
internal/core/exec/workspace.go, internal/core/exec/dag.go, internal/core/exec/dagrun.go, internal/core/exec/context.go, internal/core/exec/workspace_test.go
Add label-to-workspace extraction, WorkspaceLabelState, WorkspaceFilter with MatchesLabels, new options fields for workspace filtering, docs-dir derivation by workspace, and tests.
Persistence: DAG listing/search & tests
internal/persis/filedag/store.go, internal/persis/filedag/store_test.go
Apply WorkspaceFilter in index and scan paths, incorporate workspace into cursor scope key, conditionally load metadata to populate result.Workspace; tests updated/added.
Persistence: DAG-run statuses & pagination
internal/persis/filedagrun/store.go, internal/persis/filedagrun/pagination.go, internal/persis/filedagrun/store_test.go
resolveStatus now takes WorkspaceFilter and enforces workspace label matching on fast and standard paths; iterator/pagination wiring updated and tests adjusted.
Docs store & mutation scoping
internal/agent/doc.go, internal/persis/filedoc/store.go, internal/persis/filedoc/store_test.go, internal/service/frontend/api/v1/docs_test.go
Add ExcludePathRoots to agent List options, forward exclusions before pagination, implement read vs mutation workspace scoping and validation; tests updated.
Queues & run summaries
internal/service/frontend/api/v1/queues.go
Queue collection/counting enforces workspace visibility, skips inaccessible runs, counts only visible queued items, and treats inaccessible summaries as skippable.
Coordinator stale-lease repair & tests
internal/service/coordinator/handler.go, internal/service/coordinator/handler_test.go
Add heartbeat-triggered stale-lease repair path and eligibility/restore logic; tests for repair and non-repair cases added.
Fileservice registry & file utils
internal/persis/fileserviceregistry/registry.go, internal/persis/fileserviceregistry/types.go
Delay instance-file removal until heartbeats stop; use fileutil atomic read/write/remove helpers for robustness.
Runtime manager & tests
internal/runtime/manager.go, internal/runtime/manager_test.go
Add ManagerOption clock injection (WithManagerClock), centralize nowFunc usage, and add run-level heartbeat check before stale-run repair; tests adapted.
Frontend workspace model & helpers
ui/src/lib/workspace.ts, ui/src/lib/workspaceAccess.ts, ui/src/contexts/*
Add WorkspaceSelection model, storage migration, key/query builders, visible-document helpers, normalize/role helpers, and context updates to carry selection.
Frontend components, hooks & pages
ui/src/components/*, ui/src/features/*, ui/src/pages/*, ui/src/hooks/*
Add WorkspaceAccessEditor; thread workspaceSelection/workspaceQuery through UI (selectors, lists, SSE, editors, search, tabs); replace label-based queries with structured workspace queries; update many UI tests and e2e flows.
Admin UI forms
ui/src/pages/users/*, ui/src/pages/api-keys/*
Add workspace-access editor/state, client validation requiring grants when scoped, and enforce role→viewer when workspace-scoped.
Docs/tab UX & editors
ui/src/contexts/DocTabContext.tsx, ui/src/pages/docs/*, ui/src/pages/docs/components/*
Doc tabs and editors now carry optional workspace; open/update tab APIs accept workspace; editors use workspace-scoped draft keys and mutation queries; mutation gating enforces mutability.
Tests & misc fixes
various internal/*_test.go, ui/src/**/__tests__/*, ui/e2e/*, internal/intg/*
Numerous unit/integration/e2e tests added/updated for parsing, filtering, authorization, storage migration, and small robustness changes (fileservice, runtime/agent).
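
The label→workspace extraction and WorkspaceFilter matching described in the exec cohort above can be sketched roughly as follows. The "workspace=" label key and the struct shapes are assumptions for illustration; the real types live in internal/core/exec/workspace.go:

```go
package main

import (
	"fmt"
	"strings"
)

// workspaceNameFromLabels extracts a workspace name from an item's labels.
// The "workspace=" key is assumed here; an empty result means the item
// belongs to the default, no-workspace scope.
func workspaceNameFromLabels(labels []string) string {
	for _, l := range labels {
		if name, ok := strings.CutPrefix(l, "workspace="); ok {
			return name
		}
	}
	return ""
}

// WorkspaceFilter approximates exec.WorkspaceFilter: a nil filter means no
// filtering; otherwise an item matches only if its workspace is in the
// allow set (the empty string stands for the default scope).
type WorkspaceFilter struct {
	Allowed map[string]bool
}

func (f *WorkspaceFilter) MatchesLabels(labels []string) bool {
	if f == nil {
		return true
	}
	return f.Allowed[workspaceNameFromLabels(labels)]
}

func main() {
	f := &WorkspaceFilter{Allowed: map[string]bool{"data-eng": true}}
	fmt.Println(f.MatchesLabels([]string{"workspace=data-eng", "team=core"})) // true
	fmt.Println(f.MatchesLabels([]string{"team=core"}))                       // false
}
```

Threading one filter value through stores (filedag, filedagrun, filedoc) keeps list, search, and pagination paths consistent about which items exist at all for a scoped caller.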

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant API
  participant Auth
  participant Workspace
  participant Store

  Client->>API: Request (workspaceScope?, workspace?, action)
  API->>Auth: authenticate user, parse/validate workspace access
  Auth-->>API: user + WorkspaceAccess
  alt workspace validation required
    API->>Workspace: list/verify referenced workspaces
    Workspace-->>API: list / not-found
  end
  API->>Store: query with WorkspaceFilter derived from params + user access
  Store-->>API: filtered results (items include workspace metadata)
  API->>Client: 200 + payload or 400/403/404 on validation/authorization failure
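
The Auth step in the diagram resolves per-workspace permissions roughly like this. Method names follow the walkthrough's EffectiveRole/HasWorkspaceAccess, but the field shapes are guesses, not the actual internal/auth types:

```go
package main

import "fmt"

type Role string

const (
	RoleViewer Role = "viewer"
	RoleEditor Role = "editor"
)

// WorkspaceGrant ties one workspace name to a role.
type WorkspaceGrant struct {
	Workspace string
	Role      Role
}

// WorkspaceAccess approximates the auth model: either explicit access to
// all workspaces, or a list of per-workspace grants.
type WorkspaceAccess struct {
	AllWorkspaces bool
	Grants        []WorkspaceGrant
}

// HasWorkspaceAccess reports whether the caller can see the given
// workspace ("" stands for the default, no-workspace scope).
func (a WorkspaceAccess) HasWorkspaceAccess(workspace string) bool {
	if a.AllWorkspaces {
		return true
	}
	for _, g := range a.Grants {
		if g.Workspace == workspace {
			return true
		}
	}
	return false
}

// EffectiveRole resolves the role that applies inside one workspace; an
// empty result means no access at all.
func (a WorkspaceAccess) EffectiveRole(base Role, workspace string) Role {
	if a.AllWorkspaces {
		return base
	}
	for _, g := range a.Grants {
		if g.Workspace == workspace {
			return g.Role
		}
	}
	return ""
}

func main() {
	acc := WorkspaceAccess{Grants: []WorkspaceGrant{{Workspace: "data-eng", Role: RoleViewer}}}
	fmt.Println(acc.HasWorkspaceAccess("data-eng"), acc.EffectiveRole(RoleEditor, "data-eng"))
}
```

This is why the admin forms enforce role→viewer when workspace-scoped: the per-grant role, not the global one, is what the API consults for scoped users.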

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • PR #2015: Broad workspace-scoping implementation touching schema, store/query scope, label→workspace mapping, and UI workspace selection — substantial overlap.
  • PR #2017: Modifies DAG-run/DAG response mapping and summary logic; intersects with transformer and UI response fields.
  • PR #2010: Changes around DAG-run edit/retry authorization/control-flow that overlap with workspace-scoped execution checks.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (16)
ui/src/api/v1/schema.ts (1)

12946-12954: ⚠️ Potential issue | 🟠 Major

Don’t expose aggregate scope on document mutations.

These write endpoints now accept WorkspaceScope, so TypeScript allows workspaceScope=accessible even though mutations should remain outside aggregate scope. Use a narrower write-scope parameter such as none | workspace, or keep only the legacy workspace selector for mutations.

Update the OpenAPI source and regenerate this file instead of editing it directly. As per coding guidelines, “Frontend API types must be generated from OpenAPI spec via pnpm gen:api to maintain type safety with the backend.”

Illustrative generated shape after narrowing mutation scopes
-                workspaceScope?: components["parameters"]["WorkspaceScope"];
+                workspaceScope?: components["parameters"]["WorkspaceMutationScope"];
                 workspace?: components["parameters"]["Workspace"];
         WorkspaceScope: components["schemas"]["WorkspaceScope"];
+        /** `@description` Explicit workspace scope for write APIs: no-workspace data or one named workspace */
+        WorkspaceMutationScope: components["schemas"]["WorkspaceMutationScope"];
 export enum WorkspaceScope {
     accessible = "accessible",
     none = "none",
     workspace = "workspace"
 }
+
+export enum WorkspaceMutationScope {
+    none = "none",
+    workspace = "workspace"
+}

Also applies to: 13100-13108, 13145-13153, 13198-13206, 13260-13268

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/api/v1/schema.ts` around lines 12946 - 12954, The generated API type
for createDoc (and other mutation endpoints around the same areas) incorrectly
exposes the aggregate WorkspaceScope parameter; update the OpenAPI spec so
mutation endpoints use a narrower write-only scope (e.g., an enum limited to
"none" | "workspace" or remove WorkspaceScope and keep only the legacy workspace
parameter) instead of components["parameters"]["WorkspaceScope"], then
regenerate the TS types via the generator (run pnpm gen:api) so
createDoc.parameters.query and the similar mutation parameter groups reflect the
narrowed write-scope; do not edit ui/src/api/v1/schema.ts directly.
internal/persis/filedag/store.go (1)

841-867: ⚠️ Potential issue | 🟠 Major

Check workspace access before reading the DAG body.

SearchMatches currently reads the YAML contents before applying WorkspaceFilter. Move the metadata/workspace check above os.ReadFile to avoid unnecessary reads and out-of-scope existence/error side channels.

Proposed ordering fix
 	filePath, err := store.locateDAG(fileName)
 	if err != nil {
 		return nil, exec.ErrDAGNotFound
 	}
 
-	dat, err := os.ReadFile(filePath) //nolint:gosec
-	if err != nil {
-		if errors.Is(err, os.ErrNotExist) {
-			return nil, exec.ErrDAGNotFound
-		}
-		return nil, err
-	}
 	if len(opts.Labels) > 0 || opts.WorkspaceFilter != nil {
 		dag, err := spec.Load(ctx, filePath, store.defaultLoadOptions(
 			spec.OnlyMetadata(),
 			spec.WithoutEval(),
 			spec.SkipSchemaValidation(),
@@
 		if !opts.WorkspaceFilter.MatchesLabels(dag.Labels) {
 			return &exec.CursorResult[*exec.Match]{Items: []*exec.Match{}}, nil
 		}
 	}
+
+	dat, err := os.ReadFile(filePath) //nolint:gosec
+	if err != nil {
+		if errors.Is(err, os.ErrNotExist) {
+			return nil, exec.ErrDAGNotFound
+		}
+		return nil, err
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/persis/filedag/store.go` around lines 841 - 867, In SearchMatches,
avoid reading the DAG body before checking workspace/label access: after
locating the file with store.locateDAG (filePath), perform the metadata-only
load via spec.Load (using store.defaultLoadOptions with spec.OnlyMetadata(),
spec.WithoutEval(), spec.SkipSchemaValidation(), spec.WithAllowBuildErrors())
and run containsAllLabels(dag.Labels, opts.Labels) and
opts.WorkspaceFilter.MatchesLabels(dag.Labels) first; only if those checks pass
should you proceed to read the full file with os.ReadFile and continue normal
processing. Ensure you keep the same error handling (return exec.ErrDAGNotFound
for missing files) and preserve existing behavior for nil WorkspaceFilter or
empty Labels.
ui/src/menu.tsx (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Add the GPL header to this touched TSX source file.

Proposed header
+// Copyright (C) 2026 Yota Hamada
+// SPDX-License-Identifier: GPL-3.0-or-later
+
 import {

As per coding guidelines, "**/*.{go,ts,tsx,js}: Apply GPL v3 license headers on source files, managed via make addlicense."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/menu.tsx` at line 1, Add the GPL v3 license header to the top of the
modified TSX file (menu.tsx) so it matches the project's expected format
produced by make addlicense; insert the standard GPL header comment block before
the first import statement (the initial import { ... } line) so the header sits
at the very top of the file and ensure spelling/format and year/owner
placeholders match the project's license template.
internal/service/frontend/api/v1/queues.go (1)

287-296: ⚠️ Potential issue | 🟠 Major

Avoid exposing queues that have no visible items.

Line 295 creates a queue resource for every queue store name even when count == 0. For DAG-based queues, that can reveal inaccessible DAG/queue names to users who have no visible running or queued items in that queue.

Proposed fix
 		for _, queueName := range queueNames {
 			count, err := a.countVisibleQueuedItems(ctx, queueName)
 			if err != nil {
 				logger.Warn(ctx, "Failed to get queue length",
 					tag.Queue(queueName),
 					tag.Error(err))
 				continue
 			}
+			if count == 0 {
+				if _, exists := queueMap[queueName]; !exists {
+					continue
+				}
+			}
 			queue := getOrCreateQueue(queueMap, queueName, a.config)
 			queue.queuedCount = count
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/queues.go` around lines 287 - 296, The code
currently calls getOrCreateQueue and sets queue.queuedCount even when count ==
0, which can expose empty/inaccessible DAG-based queues; change the loop in the
handler that iterates queueNames so that after calling
a.countVisibleQueuedItems(ctx, queueName) you skip creating or adding the queue
to queueMap when count == 0 (only call getOrCreateQueue and set queuedCount when
count > 0), preserving the existing error handling for non-nil err from
countVisibleQueuedItems and using the same queueMap, getOrCreateQueue, and
queuedCount fields.
ui/src/pages/dags/dag/index.tsx (1)

138-150: ⚠️ Potential issue | 🟠 Major

Don’t treat unloaded or empty-label DAG metadata as no-workspace.

Line 138 falls back to tags only when labels is nullish, not when it is an empty array. Combined with Line 143, WorkspaceScope.none also matches before dagData has loaded, which can enable DAG-run requests for a workspace DAG under the no-workspace selection.

Proposed fix
-  const dagWorkspaceName = workspaceNameFromLabels(
-    dagData?.dag?.labels ?? dagData?.dag?.tags ?? []
-  );
+  const dagLabels = [
+    ...(dagData?.dag?.labels ?? []),
+    ...(dagData?.dag?.tags ?? []),
+  ];
+  const dagWorkspaceName = workspaceNameFromLabels(dagLabels);
+  const dagLoaded = Boolean(dagData?.dag);
   const dagMatchesWorkspace =
     workspaceSelection.scope === WorkspaceScope.accessible ||
-    (workspaceSelection.scope === WorkspaceScope.none && !dagWorkspaceName) ||
-    (workspaceSelection.scope === WorkspaceScope.workspace &&
-      dagWorkspaceName === workspaceSelection.workspace);
+    (dagLoaded &&
+      ((workspaceSelection.scope === WorkspaceScope.none && !dagWorkspaceName) ||
+        (workspaceSelection.scope === WorkspaceScope.workspace &&
+          dagWorkspaceName === workspaceSelection.workspace)));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/dags/dag/index.tsx` around lines 138 - 150, dagWorkspaceName is
computed with dagData?.dag?.labels ?? dagData?.dag?.tags which treats an empty
labels array as present and hides tags, and dagMatchesWorkspace allows
WorkspaceScope.none to match before dagData is loaded; update the logic so
dagWorkspaceName prefers labels only when labels exists and has length (e.g.,
use labels.length > 0 ? labels : tags) and make dagMatchesWorkspace/
dagRunQueryEnabled require dagData?.dag to be loaded before treating the "none"
scope as a match (ensure you reference and gate with dagData?.dag or
dagWorkspaceName defined); adjust the expressions around dagWorkspaceName,
dagMatchesWorkspace, and dagRunQueryEnabled accordingly to prevent enabling
DAG-run requests for unloaded or mis-labeled DAGs.
ui/src/pages/docs/index.tsx (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Add the GPL license header.

Line 1 starts directly with imports, so this TSX source file is missing the required GPL v3 header. As per coding guidelines, **/*.{go,ts,tsx,js}: Apply GPL v3 license headers on source files, managed via make addlicense.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/docs/index.tsx` at line 1, This TSX file (the one that currently
begins with "import { ChevronLeft } from 'lucide-react';") is missing the GPL v3
license header; add the standard GPL v3 header comment block at the very top of
the file (above all imports) consistent with the project's header format used by
make addlicense so the file containing the ChevronLeft import complies with the
repository's licensing policy.
internal/service/frontend/api/v1/docs.go (1)

282-316: ⚠️ Potential issue | 🟠 Major

Apply workspace visibility before pagination.

These paths paginate first, then filter hidden docs. Restricted users can see pagination totals/pages for inaccessible docs, and valid visible docs may be pushed onto later pages or omitted from the current page. Filter at the store/query layer, or build pagination metadata from the filtered result set.

Also applies to: 649-669

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/docs.go` around lines 282 - 316, The current
branches call a.docStore.ListFlat / a.docStore.List with opts and paginate the
raw result, then apply filterDocMetadataByWorkspace or filterDocTreeByWorkspace
which causes pagination/counts to include hidden items; change the flow to apply
workspace visibility before computing/returning pagination: either (preferred)
pass visibility into the store query via opts so ListFlat/List return
already-filtered results, or if changing the store API is not possible, filter
result.Items immediately after receiving the store response and then rebuild the
pagination metadata from that filtered slice (use the filtered length/items when
creating the Pagination passed to api.ListDocs200JSONResponse); apply this fix
in both the flat branch (affecting a.docStore.ListFlat, toDocMetadataResponse,
filterDocMetadataByWorkspace, toPagination) and the tree branch (affecting
a.docStore.List, toDocTreeResponse, filterDocTreeByWorkspace, toPagination) so
counts/pages reflect only visible docs.
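
A minimal sketch of the filter-then-paginate ordering this comment asks for (a generic helper for illustration only; the preferred fix pushes visibility into the store query itself):

```go
package main

import "fmt"

// paginateVisible filters first, then paginates, so page counts and totals
// never reflect hidden items and visible docs are not pushed onto later
// pages. Pages are 1-indexed.
func paginateVisible[T any](items []T, visible func(T) bool, page, perPage int) (pageItems []T, total int) {
	var kept []T
	for _, it := range items {
		if visible(it) {
			kept = append(kept, it)
		}
	}
	start := (page - 1) * perPage
	if start >= len(kept) {
		return nil, len(kept)
	}
	end := start + perPage
	if end > len(kept) {
		end = len(kept)
	}
	return kept[start:end], len(kept)
}

func main() {
	docs := []string{"a", "hidden", "b", "c"}
	vis := func(d string) bool { return d != "hidden" }
	pageItems, total := paginateVisible(docs, vis, 1, 2)
	fmt.Println(pageItems, total) // [a b] 3
}
```

Paginating the raw store result and filtering afterward does the two steps in the opposite order, which is exactly what leaks totals and drops visible items from the current page.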
internal/service/auth/service.go (1)

313-326: ⚠️ Potential issue | 🟠 Major

Validate candidate role and workspace access before mutating the cached user/API key object.

Lines 313-326 and 564-577 mutate the objects loaded from the store before validation. Since the stores use file-based caching that returns live pointers (not clones), validation failures leave the in-memory cached objects in a corrupted state. Subsequent calls to GetByID return the invalid state rather than reloading from disk.

Compute candidate role and workspace access in local variables, validate them together, then assign to the object only after validation succeeds.

Safer pattern for UpdateUser (apply same pattern to UpdateAPIKey)
-	if input.Role != nil {
+	nextRole := user.Role
+	if input.Role != nil {
 		if !input.Role.Valid() {
 			return nil, fmt.Errorf("invalid role: %s", *input.Role)
 		}
-		user.Role = *input.Role
+		nextRole = *input.Role
 	}
 
+	nextWorkspaceAccess := user.WorkspaceAccess
 	if input.WorkspaceAccess != nil {
-		user.WorkspaceAccess = auth.CloneWorkspaceAccess(input.WorkspaceAccess)
+		nextWorkspaceAccess = auth.CloneWorkspaceAccess(input.WorkspaceAccess)
 	}
 
-	if err := auth.ValidateWorkspaceAccess(user.Role, user.WorkspaceAccess, nil); err != nil {
+	if err := auth.ValidateWorkspaceAccess(nextRole, nextWorkspaceAccess, nil); err != nil {
 		return nil, err
 	}
+
+	user.Role = nextRole
+	user.WorkspaceAccess = nextWorkspaceAccess
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/auth/service.go` around lines 313 - 326, The code mutates
cached user/API key objects directly before validation; instead, in UpdateUser
(and symmetrically in UpdateAPIKey) compute candidateRole and
candidateWorkspaceAccess into local variables (e.g., candidateRole := user.Role
or *input.Role when provided, and candidateWorkspaceAccess :=
auth.CloneWorkspaceAccess(input.WorkspaceAccess) or user.WorkspaceAccess), call
auth.ValidateWorkspaceAccess(candidateRole, candidateWorkspaceAccess, nil) and
only after validation succeeds assign user.Role = candidateRole and
user.WorkspaceAccess = candidateWorkspaceAccess; apply the same
local-variable-then-validate-then-assign pattern wherever the code currently
writes to user.Role/user.WorkspaceAccess or apiKey.Role/apiKey.WorkspaceAccess
before validation.
internal/service/frontend/api/v1/dags.go (3)

895-954: ⚠️ Potential issue | 🔴 Critical

Authorize against the runtime workspace labels before starting or enqueueing.

These endpoints check dagWorkspaceName(dag) before parsing request labels, but startDAGRun/enqueueDAGRun later pass those labels into the run. A caller can supply a workspace label for a workspace they cannot execute and bypass the workspace check.

🔒 Proposed fix pattern
-	if err := a.requireExecuteForWorkspace(ctx, dagWorkspaceName(dag)); err != nil {
-		return nil, err
-	}
-
 	if err := buildErrorsToAPIError(dag.BuildErrors); err != nil {
 		return nil, err
 	}
@@
 	labels, err := extractLabelsParam(request.Body.Labels, request.Body.Tags)
 	if err != nil {
 		return nil, err
 	}
+	if err := a.requireExecuteForWorkspace(ctx, runtimeWorkspaceName(dag, labels)); err != nil {
+		return nil, err
+	}
 
 	if err := a.startDAGRun(ctx, dag, params, dagRunId, nameOverride, labels); err != nil {

Apply the same ordering to ExecuteDAGSync and EnqueueDAGDAGRun.

Also applies to: 999-1047, 1424-1483

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 895 - 954, The code
checks workspace execution permission with
requireExecuteForWorkspace(dagWorkspaceName(dag)) before parsing request labels,
which lets a caller supply a different workspace label in request.Body.Labels
and bypass authorization when those labels are later used by
startDAGRun/enqueueDAGRun; to fix, validate/authorize the effective workspace
labels before using them by extracting labels early (using extractLabelsParam or
equivalent) and calling requireExecuteForWorkspace for the workspace derived
from those labels (or ensure the workspace label cannot be overridden) in
ExecuteDAGSync and EnqueueDAGRun (and the other affected handlers), then proceed
with generate/validate dagRunId, singleton checks, ensureDAGRunIDUnique, and
finally call startDAGRun/enqueueDAGRun with the now-authorized labels.

1693-1708: ⚠️ Potential issue | 🔴 Critical

Add the missing workspace check to DAG history SSE.

GetDAGDAGRunHistory enforces requireWorkspaceVisible, but the SSE data method returns the same history without that check.

🔒 Proposed fix
 func (a *API) GetDAGHistoryData(ctx context.Context, fileName string) (any, error) {
+	dag, _ := a.dagStore.GetDetails(ctx, fileName, spec.WithAllowBuildErrors())
+	if err := a.requireWorkspaceVisible(ctx, dagWorkspaceName(dag)); err != nil {
+		return nil, err
+	}
 	dagName := a.resolveDAGName(ctx, fileName)
 	recentHistory := a.dagRunMgr.ListRecentStatus(ctx, dagName, defaultHistoryLimit)
@@
-	dag, _ := a.dagStore.GetDetails(ctx, fileName, spec.WithAllowBuildErrors())
-
 	gridData := a.readHistoryData(ctx, dag, recentHistory)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 1693 - 1708,
GetDAGHistoryData is missing the same workspace visibility guard used by
GetDAGDAGRunHistory; after fetching dag via a.dagStore.GetDetails (or
immediately after resolving dagName) call a.requireWorkspaceVisible(ctx, dag)
and return the appropriate error if it fails, before calling a.readHistoryData
and returning the DAG run payload; ensure you reference the existing
requireWorkspaceVisible method so the SSE endpoint enforces workspace access
identical to the non-SSE handler.

833-880: ⚠️ Potential issue | 🟠 Major

Authorize the actual DAG-run workspace before returning run details.

This endpoint checks the current DAG metadata workspace, but historical runs can carry their own workspace label. If a DAG was relabeled, users with access to the current workspace can read older runs from a different workspace.

🔒 Proposed fix pattern
 	attempt, err := a.dagRunStore.FindAttempt(ctx, exec.NewDAGRunRef(dag.Name, dagRunId))
 	if err != nil {
@@
 		return nil, fmt.Errorf("error getting DAG run attempt: %w", err)
 	}
+	runWorkspaceName, err := workspaceNameForAttempt(ctx, attempt)
+	if err != nil {
+		return nil, err
+	}
+	if err := a.requireWorkspaceVisible(ctx, runWorkspaceName); err != nil {
+		return nil, err
+	}
 
 	dagStatus, err := a.dagRunMgr.GetCurrentStatus(ctx, dag, dagRunId)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 833 - 880, The handler
returns DAG-run details using attempt (from a.dagRunStore.FindAttempt /
LatestAttempt) without verifying the run's own workspace label; ensure you
authorize access against the actual run workspace before returning results by
extracting the workspace label from the retrieved attempt (the DAG run object
returned by FindAttempt/LatestAttempt) and calling the same
workspace-authorization routine used for the DAG metadata (e.g., the existing
authorize / authorizeWorkspace helper) prior to calling
a.dagRunMgr.GetCurrentStatus / a.dagRunMgr.GetLatestStatus and before building
the response with a.toDAGRunDetailsWithSpecSource so historical runs in a
different workspace are denied to unauthorized users.
internal/service/frontend/api/v1/dagruns.go (5)

3761-3865: ⚠️ Potential issue | 🔴 Critical

Apply workspace filtering to the DAG-runs SSE list path.

The HTTP list endpoints add exec.WithWorkspaceFilter, but GetDAGRunsListData still lists from query params without deriving a workspace filter. Scoped users can receive DAG runs outside their workspace through the SSE data path.

🔒 Proposed fix
 func (a *API) GetDAGRunsListData(ctx context.Context, queryString string) (any, error) {
-	opts, err := dagRunListOptionsFromQueryString(ctx, queryString)
+	opts, err := a.dagRunListOptionsFromQueryString(ctx, queryString)
 	if err != nil {
 		return nil, err
 	}
@@
-func dagRunListOptionsFromQueryString(ctx context.Context, queryString string) (dagRunListOptions, error) {
+func (a *API) dagRunListOptionsFromQueryString(ctx context.Context, queryString string) (dagRunListOptions, error) {
 	params, err := url.ParseQuery(queryString)
 	if err != nil {
 		logger.Warn(ctx, "Failed to parse query string for DAG runs list",
@@
-	return buildDAGRunListOptions(dagRunListFilterInput{
+	opts := buildDAGRunListOptions(dagRunListFilterInput{
 		statuses: statusValues,
 		fromDate: fromDate,
 		toDate:   toDate,
 		name:     name,
 		dagRunID: dagRunID,
 		labels:   labels,
 		limit:    limit,
 		cursor:   cursor,
-	}), nil
+	})
+	scopeParam, workspaceParam := workspaceScopeParamsFromValues(params)
+	workspaceFilter, err := a.workspaceFilterForParams(ctx, scopeParam, workspaceParam)
+	if err != nil {
+		return dagRunListOptions{}, err
+	}
+	opts.query = append(opts.query, exec.WithWorkspaceFilter(workspaceFilter))
+	return opts, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 3761 - 3865,
GetDAGRunsListData/dagRunListOptionsFromQueryString currently build list options
from query params but never apply the workspace filter, allowing scoped users to
receive DAG runs outside their workspace; fix by deriving the workspace filter
from ctx (same way HTTP list endpoints do) and include exec.WithWorkspaceFilter
in the options passed to a.dagRunStore.ListStatusesPage — update
dagRunListOptionsFromQueryString (or the returned dagRunListOptions.query slice
built by buildDAGRunListOptions/dagRunListFilterInput) to append the workspace
filter obtained from the context before returning so GetDAGRunsListData uses it
when calling ListStatusesPage.

1074-1097: ⚠️ Potential issue | 🔴 Critical

Protect alternate step-log read paths with workspace visibility checks.

GetDAGRunStepLog checks workspace visibility, but DownloadDAGRunStepLog and the SSE GetStepLogData path read the same run logs without that check. A scoped user can bypass the protected JSON endpoint by using these alternate paths.

🔒 Proposed fix
 func (a *API) DownloadDAGRunStepLog(ctx context.Context, request api.DownloadDAGRunStepLogRequestObject) (api.DownloadDAGRunStepLogResponseObject, error) {
 	ref := exec.NewDAGRunRef(request.Name, request.DagRunId)
 	dagStatus, err := a.dagRunMgr.GetSavedStatus(ctx, ref)
 	if err != nil {
@@
 			Message: fmt.Sprintf("dag-run ID %s not found for DAG %s", request.DagRunId, request.Name),
 		}, nil
 	}
+	if err := a.requireWorkspaceVisible(ctx, statusWorkspaceName(dagStatus)); err != nil {
+		return nil, err
+	}
 
 	node, err := dagStatus.NodeByName(request.StepName)
@@
 func (a *API) GetStepLogData(ctx context.Context, identifier string) (any, error) {
@@
 	dagStatus, err := a.dagRunMgr.GetSavedStatus(ctx, ref)
 	if err != nil {
 		return nil, fmt.Errorf("dag-run ID %s not found for DAG %s", dagRunId, dagName)
 	}
+	if err := a.requireWorkspaceVisible(ctx, statusWorkspaceName(dagStatus)); err != nil {
+		return nil, err
+	}
 
 	node, err := dagStatus.NodeByName(stepName)

Also applies to: 3422-3439

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 1074 - 1097,
DownloadDAGRunStepLog reads run log files without enforcing the same workspace
visibility check used by GetDAGRunStepLog, allowing scoped users to bypass
protections; update DownloadDAGRunStepLog to call the same workspace
visibility/authorization check used by GetDAGRunStepLog (i.e., ensure the
workspace access check that runs before dagRunMgr.GetSavedStatus or immediately
after obtaining dagStatus is invoked) before attempting to read the file (before
os.ReadFile on logFile), and apply the identical visibility check to the SSE
GetStepLogData path so both alternate log-read routes enforce workspace
visibility.

1903-2200: ⚠️ Potential issue | 🔴 Critical

Gate sub-DAG read/download/artifact endpoints by workspace.

The sub-DAG details/spec/log/artifact endpoints return data without a workspace visibility check, while the root DAG-run equivalents are protected. Add requireDAGRunVisible or attempt-based workspace checks before reading sub-run contents.

🔒 Proposed fix pattern
 func (a *API) GetSubDAGRunDetails(ctx context.Context, request api.GetSubDAGRunDetailsRequestObject) (api.GetSubDAGRunDetailsResponseObject, error) {
 	root := exec.NewDAGRunRef(request.Name, request.DagRunId)
+	if err := a.requireDAGRunVisible(ctx, root); err != nil {
+		return nil, err
+	}
 	dagStatus, err := withDAGRunReadTimeout(ctx, dagRunReadRequestInfo{
@@
 func (a *API) GetSubDAGRunSpec(ctx context.Context, request api.GetSubDAGRunSpecRequestObject) (api.GetSubDAGRunSpecResponseObject, error) {
 	root := exec.NewDAGRunRef(request.Name, request.DagRunId)
+	if err := a.requireDAGRunVisible(ctx, root); err != nil {
+		return nil, err
+	}
 	attempt, err := a.dagRunStore.FindSubAttempt(ctx, root, request.SubDAGRunId)
@@
 func (a *API) GetSubDAGRunLog(ctx context.Context, request api.GetSubDAGRunLogRequestObject) (api.GetSubDAGRunLogResponseObject, error) {
 	root := exec.NewDAGRunRef(request.Name, request.DagRunId)
+	if err := a.requireDAGRunVisible(ctx, root); err != nil {
+		return nil, err
+	}
 	dagStatus, err := a.dagRunMgr.FindSubDAGRunStatus(ctx, root, request.SubDAGRunId)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 1903 - 2200, The
sub-DAG endpoints (GetSubDAGRunDetails, GetSubDAGRunSpec, GetSubDAGRunLog,
DownloadSubDAGRunLog, GetSubDAGRunArtifacts, GetSubDAGRunArtifactPreview,
DownloadSubDAGRunArtifact, GetSubDAGRunStepLog, DownloadSubDAGRunStepLog)
currently read sub-run data without enforcing workspace visibility; before any
call that reads sub-run status/spec/logs/artifacts (e.g. before
a.dagRunMgr.FindSubDAGRunStatus, a.dagRunStore.FindSubAttempt,
getSubDAGRunArtifactStatus, openArtifactFile, file reads, or attempt.ReadDAG)
call the existing workspace check helpers (requireDAGRunVisible for root-level
checks or an attempt-based visibility check like requireAttemptWorkspaceVisible)
and return the appropriate 403/NotFound response on failure; ensure you perform
this gate as soon as you construct the root exec.NewDAGRunRef and before any
disk or store access in each of the functions named above so sub-DAG data is
only returned when workspace visibility is satisfied.

2680-2822: ⚠️ Potential issue | 🟠 Major

Re-check execute permission for useCurrentDAGFile reschedules.

The endpoint authorizes the source run, then rescheduleDAGRun may load the current DAG file and enqueue that version. If the DAG file's workspace changed since the original run, a user with access only to the old workspace can reschedule into a workspace they are not allowed to execute in.

🔒 Proposed fix
 	if opts.useCurrentDAGFile {
 		dag, err = a.loadCurrentRescheduleDAG(ctx, storedSourceFile, nameOverride)
 		if err != nil {
 			return rescheduleDAGRunResult{}, err
 		}
+		if err := a.requireExecuteForWorkspace(ctx, dagWorkspaceName(dag)); err != nil {
+			return rescheduleDAGRunResult{}, err
+		}
 	} else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 2680 - 2822, The
current reschedule flow authorizes the original DAG run but does not re-check
execute permissions when opts.useCurrentDAGFile loads the current DAG file,
allowing privilege escalation into another workspace; after loading the current
DAG in rescheduleDAGRun (the branch that calls loadCurrentRescheduleDAG) you
must enforce execute permission on that DAG/workspace before proceeding (i.e.,
call the appropriate permission check similar to requireDAGRunExecute — e.g.,
a.requireDAGExecute or an equivalent that accepts the loaded dag or its
workspace/DAG reference) before calling ensureDAGRunIDUnique or enqueueDAGRun;
place this check right after the loadCurrentRescheduleDAG return and return an
authorization error if it fails.

1321-1390: ⚠️ Potential issue | 🔴 Critical

Add workspace visibility checks before returning step messages.

Both step-message endpoints read node status and persisted messages without checking whether the caller can see the root/sub DAG run workspace.

🔒 Proposed fix
 func (a *API) GetDAGRunStepMessages(ctx context.Context, request api.GetDAGRunStepMessagesRequestObject) (api.GetDAGRunStepMessagesResponseObject, error) {
 	ref := exec.NewDAGRunRef(request.Name, request.DagRunId)
 	dagStatus, err := a.dagRunMgr.GetSavedStatus(ctx, ref)
 	if err != nil {
@@
 			Message: fmt.Sprintf("dag-run ID %s not found for DAG %s", request.DagRunId, request.Name),
 		}, nil
 	}
+	if err := a.requireWorkspaceVisible(ctx, statusWorkspaceName(dagStatus)); err != nil {
+		return nil, err
+	}
 
 	node, err := dagStatus.NodeByName(request.StepName)
@@
 func (a *API) GetSubDAGRunStepMessages(ctx context.Context, request api.GetSubDAGRunStepMessagesRequestObject) (api.GetSubDAGRunStepMessagesResponseObject, error) {
 	rootRef := exec.NewDAGRunRef(request.Name, request.DagRunId)
+	if err := a.requireDAGRunVisible(ctx, rootRef); err != nil {
+		return nil, err
+	}
 	dagStatus, err := a.dagRunMgr.FindSubDAGRunStatus(ctx, rootRef, request.SubDAGRunId)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 1321 - 1390, Both
endpoints return step messages without verifying the caller can see the
DAG-run's workspace; add an explicit workspace visibility check after obtaining
dagStatus (in GetDAGRunStepMessages and GetSubDAGRunStepMessages) and before
accessing node/attempt/ReadStepMessages. Call your existing
workspace/authorization helper (e.g. a.workspaceMgr or a.authz method) to ensure
the caller can view dagStatus's workspace ID (use the DAG status' WorkspaceID or
equivalent field), and if the check fails return the appropriate
not-found/forbidden JSON response instead of proceeding to node.Lookup or
attempt.ReadStepMessages.
🧹 Nitpick comments (7)
internal/service/frontend/api/v1/dagruns_edit_retry.go (1)

66-83: Workspace authz ordering — minor consideration.

The new requireExecuteForWorkspace(dagWorkspaceName(plan.editedDAG)) check correctly re-authorizes against the edited spec's workspace (in case the submitted spec moves the DAG to a different workspace than the source run). Two minor notes:

  1. buildEditRetryPlan performs non-trivial work (reading DAG snapshot, resolving runtime params, loading the inline edited spec) before this workspace check. Since requireDAGRunExecute at lines 66/124 already gates the source DAG-run's workspace, this is acceptable, but be aware that error messages from loadEditedRetryDAG/spec validation can still surface to a caller who would ultimately be denied by the edited-workspace check. If that's considered an info-leak, hoist a cheap workspace check earlier once the edited spec's workspace is known.
  2. if plan != nil is defensive but unreachable-false here: buildEditRetryPlan only returns nil plan alongside a non-nil error, which is already handled. Consider dropping the guard for clarity, or keep it as defense-in-depth — either is fine.

Also applies to: 124-144

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns_edit_retry.go` around lines 66 - 83,
The code calls buildEditRetryPlan (which reads the DAG snapshot and loads the
edited spec via loadEditedRetryDAG) before calling
requireExecuteForWorkspace(dagWorkspaceName(plan.editedDAG)), which can leak
validation/errors from the edited spec for a caller who would later be denied;
either hoist a cheap workspace authorization earlier once you can determine the
edited spec's workspace (obtain edited workspace from the request/parsed spec
and call requireExecuteForWorkspace(...) before performing heavy work in
buildEditRetryPlan), or simplify by removing the unreachable defensive guard "if
plan != nil" around the requireExecuteForWorkspace call (since
buildEditRetryPlan returns nil plan only with an error), choosing one approach
and applying the same change in both occurrences that mirror lines 66–83 and
124–144; reference functions: requireDAGRunExecute, buildEditRetryPlan,
loadEditedRetryDAG, requireExecuteForWorkspace, dagWorkspaceName, and
plan.editedDAG.
ui/src/features/cockpit/components/__tests__/TemplateSelector.test.tsx (1)

23-23: Consider asserting workspaceScope on /dags as well.

The test now verifies workspaceScope: WorkspaceScope.accessible on the /dags/labels query (good). Since appBarValue.workspaceSelection is accessible, the /dags query on lines 95–101 should presumably also include workspaceScope. Tightening that assertion would guard against regressions where the /dags query silently stops propagating the scope.

Also applies to: 106-115

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/features/cockpit/components/__tests__/TemplateSelector.test.tsx` at
line 23, The test currently asserts workspaceScope on the `/dags/labels` request
but not on the `/dags` request(s); update the TemplateSelector.test.tsx
assertions so that any mocked request or expectation for the `/dags` endpoint
also includes workspaceScope: WorkspaceScope.accessible (matching
appBarValue.workspaceSelection). Locate the places where `/dags` is
stubbed/checked (the two occurrences near the existing `/dags/labels`
assertions) and add an expectation that the request payload/query contains
workspaceScope: WorkspaceScope.accessible to prevent regressions where the scope
stops being propagated.
internal/persis/filedagrun/store_test.go (1)

849-970: Add a non-nil workspace-filter regression case.

These updated call sites only verify the new parameter is optional. Please add fast-path and standard-path cases where WorkspaceFilter accepts workspace=ops and rejects workspace=prod so the new behavior is covered.
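
The accept/reject behavior the requested regression cases would pin down looks roughly like this. All names here are illustrative stand-ins for exec's workspace filter and the store's resolveStatus, not the real types:

```go
package main

import "fmt"

// workspaceFilter is a hypothetical stand-in for the new filter parameter:
// a nil filter (or nil allow-set) means "no workspace filtering".
type workspaceFilter struct {
	allowed map[string]bool
}

func (f *workspaceFilter) Matches(workspace string) bool {
	if f == nil || f.allowed == nil {
		return true
	}
	return f.allowed[workspace]
}

type status struct {
	dagName   string
	workspace string
}

// resolveStatus mimics the store paths under test: the status is returned
// only when the optional filter accepts its workspace, otherwise nil.
func resolveStatus(s status, filter *workspaceFilter) *status {
	if !filter.Matches(s.workspace) {
		return nil
	}
	return &s
}

func main() {
	opsOnly := &workspaceFilter{allowed: map[string]bool{"ops": true}}
	fmt.Println(resolveStatus(status{"etl", "ops"}, opsOnly) != nil)  // accepted
	fmt.Println(resolveStatus(status{"etl", "prod"}, opsOnly) != nil) // rejected
	fmt.Println(resolveStatus(status{"etl", "prod"}, nil) != nil)     // nil filter: accepted
}
```

A fast-path and a standard-path test case asserting exactly these three outcomes (ops accepted, prod rejected, nil filter permissive) would cover the new parameter beyond the nil-only call sites.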

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/persis/filedagrun/store_test.go` around lines 849 - 970, Add tests
covering non-nil WorkspaceFilter behavior: in the fast-path test
(TestResolveStatus_FastPath) add two assertions using a workspace filter that
accepts "workspace=ops" (should return non-nil status) and one that accepts
"workspace=prod" (should return nil); similarly, in the standard-path test
(TestResolveStatus_StandardPath) add matching cases where resolveStatus is
called with a WorkspaceFilter that accepts "workspace=ops" and another that
accepts "workspace=prod" and assert resolution and rejection respectively.
Locate calls to store.resolveStatus and the test functions
TestResolveStatus_FastPath and TestResolveStatus_StandardPath to insert these
new checks, constructing the WorkspaceFilter using the same parsing utility used
for other filters (e.g., core.ParseLabelFilter or the WorkspaceFilter type) so
the parameter position passed to resolveStatus mirrors existing calls. Ensure
both fast-path and standard-path variants exercise the new non-nil
workspace-filter parameter.
ui/src/features/cockpit/components/TemplateSelector.tsx (1)

33-54: Remove unused selectedWorkspace prop from the component signature and call sites.

selectedWorkspace is declared in the Props interface and destructured at line 42, but is never used in the component body. The component derives workspace context entirely from appBarContext.workspaceSelection. Remove the parameter from the interface and all three call sites (CockpitToolbar.tsx and the two test cases) to clean up the API surface.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/features/cockpit/components/TemplateSelector.tsx` around lines 33 -
54, The Props interface and TemplateSelector signature include an unused
selectedWorkspace prop; remove selectedWorkspace from the Props type and from
the TemplateSelector parameter destructuring (function TemplateSelector) and
update all call sites (e.g., usages in CockpitToolbar.tsx and the two test
files) to stop passing selectedWorkspace so the component API matches its actual
usage of appBarContext.workspaceSelection.
ui/src/pages/users/UserFormModal.tsx (1)

112-136: Use the generated API client for these user mutations.

The new workspaceAccess payload is sent through hand-written fetch calls, which bypasses generated request typing and shared API plumbing. Route these through the generated client with remoteNode in query params instead. As per coding guidelines, ui/src/**/*.{ts,tsx}: Frontend API types must be generated from OpenAPI spec via pnpm gen:api to maintain type safety with the backend; and ui/**/*.{ts,tsx}: Use client.GET and client.POST calls with remoteNode in query parameters for API interactions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/users/UserFormModal.tsx` around lines 112 - 136, In
UserFormModal replace the hand-written fetch calls inside the submit flow (the
isEditing branch that PATCHes
`${config.apiURL}/users/${user.id}?remoteNode=${remoteNode}` and the create
branch that POSTs to `/users?remoteNode=${remoteNode}`) with the generated API
client calls (use client.PATCH or client.POST as provided by the generated
client), pass remoteNode as a query param, and send the typed payload (including
workspaceAccess and password) directly rather than JSON.stringify and manual
headers; update the submit handler in UserFormModal to call the appropriate
generated method for updating (using user.id and payload) and creating (using
payload + password), so you leverage generated request/response types and shared
plumbing instead of raw fetch.
internal/service/frontend/api/v1/apikeys.go (1)

103-107: Include workspace access in API key create audit details.

Update operations log workspace_access, but create operations omit the initial grants. Add it here so the audit history captures the key's permission boundary from creation.

Audit detail addition
 	a.logAudit(ctx, audit.CategoryAPIKey, "api_key_create", map[string]any{
-		"key_id":   result.APIKey.ID,
-		"key_name": result.APIKey.Name,
-		"role":     string(result.APIKey.Role),
+		"key_id":           result.APIKey.ID,
+		"key_name":         result.APIKey.Name,
+		"role":             string(result.APIKey.Role),
+		"workspace_access": toAPIWorkspaceAccess(result.APIKey.WorkspaceAccess),
 	})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/apikeys.go` around lines 103 - 107, The
audit entry for API key creation is missing the initial workspace grants; update
the a.logAudit call in the API key create path (the call that uses
audit.CategoryAPIKey and "api_key_create") to include a "workspace_access" field
in the map with the initial grants (e.g. use result.WorkspaceAccess or the API
key's workspace access field such as result.APIKey.WorkspaceAccess) so the audit
record captures the permission boundary at creation.
internal/auth/workspace_access.go (1)

59-68: Redundant slice copy in CloneWorkspaceAccess.

NormalizeWorkspaceAccess already allocates a fresh Grants slice (line 46), so the additional make+copy here is unnecessary work and obscures intent. You can return the normalized value directly.

♻️ Proposed simplification
 func CloneWorkspaceAccess(access *WorkspaceAccess) *WorkspaceAccess {
 	normalized := NormalizeWorkspaceAccess(access)
-	grants := make([]WorkspaceGrant, len(normalized.Grants))
-	copy(grants, normalized.Grants)
-	return &WorkspaceAccess{
-		All:    normalized.All,
-		Grants: grants,
-	}
+	return &normalized
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/auth/workspace_access.go` around lines 59 - 68, The
CloneWorkspaceAccess function performs an unnecessary make+copy of
normalized.Grants because NormalizeWorkspaceAccess already returns a freshly
allocated Grants slice; simplify CloneWorkspaceAccess by returning the result of
NormalizeWorkspaceAccess directly (i.e., remove the intermediate
make([]WorkspaceGrant, ...) and copy and just return the normalized value from
CloneWorkspaceAccess), keeping the function signature and behavior identical.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d1122f52-9925-4d9f-8933-6bd9dcab9259

📥 Commits

Reviewing files that changed from the base of the PR and between eefbef9 and 445ad95.

📒 Files selected for processing (66)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/auth/apikey.go
  • internal/auth/user.go
  • internal/auth/workspace_access.go
  • internal/auth/workspace_access_test.go
  • internal/core/exec/context.go
  • internal/core/exec/dag.go
  • internal/core/exec/dagrun.go
  • internal/core/exec/workspace.go
  • internal/persis/filedag/store.go
  • internal/persis/filedag/store_test.go
  • internal/persis/filedagrun/pagination.go
  • internal/persis/filedagrun/store.go
  • internal/persis/filedagrun/store_test.go
  • internal/service/auth/service.go
  • internal/service/frontend/api/v1/api.go
  • internal/service/frontend/api/v1/apikeys.go
  • internal/service/frontend/api/v1/auth.go
  • internal/service/frontend/api/v1/dagruns.go
  • internal/service/frontend/api/v1/dagruns_edit_retry.go
  • internal/service/frontend/api/v1/dags.go
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/queues.go
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/users.go
  • internal/service/frontend/api/v1/workspace_access.go
  • internal/service/frontend/api/v1/workspace_access_test.go
  • internal/service/frontend/api/v1/workspaces.go
  • internal/service/frontend/auth/middleware.go
  • internal/service/oidcprovision/service.go
  • ui/src/App.tsx
  • ui/src/api/v1/schema.ts
  • ui/src/components/WorkspaceAccessEditor.tsx
  • ui/src/contexts/AppBarContext.ts
  • ui/src/contexts/AuthContext.tsx
  • ui/src/features/cockpit/components/DateKanbanList.tsx
  • ui/src/features/cockpit/components/TemplateSelector.tsx
  • ui/src/features/cockpit/components/WorkspaceSelector.tsx
  • ui/src/features/cockpit/components/__tests__/TemplateSelector.test.tsx
  • ui/src/features/cockpit/hooks/useCockpitState.ts
  • ui/src/features/cockpit/hooks/useDateKanbanData.ts
  • ui/src/features/dags/components/common/CreateDAGModal.tsx
  • ui/src/features/search/components/SearchResult.tsx
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • ui/src/hooks/useDAGRunsListSSE.ts
  • ui/src/hooks/useDAGsListSSE.ts
  • ui/src/hooks/useDocSSE.ts
  • ui/src/hooks/useDocTreeSSE.ts
  • ui/src/lib/workspace.ts
  • ui/src/lib/workspaceAccess.ts
  • ui/src/menu.tsx
  • ui/src/pages/__tests__/index.test.tsx
  • ui/src/pages/api-keys/APIKeyFormModal.tsx
  • ui/src/pages/cockpit/index.tsx
  • ui/src/pages/dag-runs/index.tsx
  • ui/src/pages/dags/dag/index.tsx
  • ui/src/pages/dags/index.tsx
  • ui/src/pages/design/buildWorkflowDesignPrompt.ts
  • ui/src/pages/design/index.tsx
  • ui/src/pages/docs/components/DocEditor.tsx
  • ui/src/pages/docs/components/DocTreeSidebar.tsx
  • ui/src/pages/docs/index.tsx
  • ui/src/pages/index.tsx
  • ui/src/pages/search/index.tsx
  • ui/src/pages/users/UserFormModal.tsx
💤 Files with no reviewable changes (1)
  • internal/service/frontend/api/v1/api.go

Comment thread api/v1/api.yaml
Comment thread internal/core/exec/workspace.go
Comment thread internal/service/frontend/api/v1/dags.go Outdated
Comment thread internal/service/frontend/api/v1/queues.go Outdated
Comment thread internal/service/frontend/api/v1/workspace_access.go
Comment thread ui/src/contexts/AuthContext.tsx
Comment thread ui/src/lib/workspace.ts
Comment thread ui/src/lib/workspaceAccess.ts Outdated
Comment thread ui/src/menu.tsx Outdated
Comment thread ui/src/pages/users/UserFormModal.tsx
@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
internal/persis/fileserviceregistry/registry.go (1)

177-188: ⚠️ Potential issue | 🟡 Minor

Only remove the instance file after a confirmed heartbeat stop.

Line 188 still runs after the timeout case on Line 180, when reg.wg.Wait() has not completed. That means the heartbeat may still be alive and can race with deletion/recreation, contradicting the new shutdown ordering guarantee.

🐛 Proposed fix
+		stopped := false
 		select {
 		case <-done:
 			// Clean shutdown
+			stopped = true
 		case <-time.After(5 * time.Second):
 			// Force shutdown after timeout
 			logger.Warn(ctx, "Timeout waiting for registry shutdown",
 				tag.Service(serviceName))
 		}
 
+		if !stopped {
+			continue
+		}
+
 		// Remove instance file after heartbeat stops so it cannot recreate the
 		// file during shutdown on Windows.
 		if err := removeInstanceFile(reg.fileName); err != nil {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/persis/fileserviceregistry/registry.go` around lines 177 - 188, The
instance file is being removed even if the heartbeat goroutine may still be
running when the select times out; ensure removal happens only after the
heartbeat goroutine has actually exited by waiting on the registry waitgroup.
After the select (which logs the timeout), call reg.wg.Wait() (or otherwise
ensure reg.wg has completed) before calling removeInstanceFile(reg.fileName), so
that removeInstanceFile only runs once the heartbeat goroutine has fully
stopped; reference the done channel, reg.wg.Wait, removeInstanceFile and
reg.fileName to locate and update the code.
ui/src/api/v1/schema.ts (1)

6082-6086: ⚠️ Potential issue | 🟠 Major

Add explicit mutation scope to DAG creation.

createNewDAG still only accepts remoteNode, so the new explicit workspace contract cannot represent “create in no workspace” vs “create in this named workspace.” Add workspaceScope?: WorkspaceMutationScope plus legacy-compatible workspace?: Workspace here, matching the document mutation endpoints.

Please update the OpenAPI source and regenerate this file instead of editing it directly. As per coding guidelines, Frontend API types must be generated from OpenAPI spec via pnpm gen:api to maintain type safety with the backend.

Suggested generated-shape change
                 /** `@description` name of the remote node */
                 remoteNode?: components["parameters"]["RemoteNode"];
+                /** `@description` Explicit mutable workspace scope: no-workspace data or one named workspace */
+                workspaceScope?: components["parameters"]["WorkspaceMutationScope"];
+                /** `@description` workspace name used when workspaceScope=workspace, or legacy selected workspace scope when workspaceScope is omitted */
+                workspace?: components["parameters"]["Workspace"];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/api/v1/schema.ts` around lines 6082 - 6086, The createNewDAG request
shape currently only accepts remoteNode; update the OpenAPI source so the
parameters.query for createNewDAG includes workspaceScope?:
WorkspaceMutationScope and legacy-compatible workspace?: Workspace (in addition
to remoteNode) to allow explicit "no workspace" vs "named workspace" semantics,
then regenerate the frontend types (do not hand-edit ui/src/api/v1/schema.ts) by
running the prescribed generator (pnpm gen:api) so the generated type for
parameters.query reflects WorkspaceMutationScope and Workspace alongside
remoteNode.
api/v1/api.yaml (1)

9696-9702: ⚠️ Potential issue | 🟠 Major

Require an explicit workspace access policy on identity creation.

workspaceAccess is optional on create requests but required on returned User/APIKey objects. That leaves generated clients free to create identities without an explicit workspace policy, turning the server-side default into an implicit, security-sensitive decision. Add it to required, or encode and document a safe backward-compatible default.

Contract fix if explicit policy is required
       required:
         - username
         - password
         - role
+        - workspaceAccess
       required:
         - name
         - role
+        - workspaceAccess

Also applies to: 9858-9863

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 9696 - 9702, The create-request schemas for
identities omit workspaceAccess from their required arrays while returned
User/APIKey objects include it; update the OpenAPI component schemas used for
identity creation (the create request schemas that contain the workspaceAccess
property) to add "workspaceAccess" to their required list and ensure the
property still references "#/components/schemas/WorkspaceAccess"; also update
any examples/docs for those schemas to include a valid WorkspaceAccess value (or
alternatively document/encode a safe backward-compatible default if you intend
not to require it).
internal/service/frontend/api/v1/dags.go (1)

376-394: ⚠️ Potential issue | 🔴 Critical

Don't ignore GetDetails error before gating history visibility.

dag, _ := a.dagStore.GetDetails(...) discards the error and then calls requireWorkspaceVisible(ctx, dagWorkspaceName(dag)). When loading fails (missing file, I/O error, or any path where dag == nil), dagWorkspaceName(nil) returns "", which:

  • lets callers with no-workspace / admin access sail past the check and receive a 200 with empty history for a non-existent DAG (instead of the 404 every other handler returns), and
  • on non-ENOENT failures, can let a user who has access to the "no-workspace" scope see history for a DAG that actually belongs to a workspace they cannot access (the fileName is still used by ListRecentStatus/readHistoryData downstream).

Other handlers in this file (GetDAGSpec, GetDAGDAGRunDetails, getDAGDetailsData) treat GetDetails errors explicitly — please do the same here so visibility gating always runs against a resolved DAG.

🔒 Proposed fix
 func (a *API) GetDAGDAGRunHistory(ctx context.Context, request api.GetDAGDAGRunHistoryRequestObject) (api.GetDAGDAGRunHistoryResponseObject, error) {
-	dag, _ := a.dagStore.GetDetails(ctx, request.FileName, spec.WithAllowBuildErrors())
-	if err := a.requireWorkspaceVisible(ctx, dagWorkspaceName(dag)); err != nil {
+	dag, err := a.dagStore.GetDetails(ctx, request.FileName, spec.WithAllowBuildErrors())
+	if err != nil || dag == nil {
+		return nil, &Error{
+			HTTPStatus: http.StatusNotFound,
+			Code:       api.ErrorCodeNotFound,
+			Message:    fmt.Sprintf("DAG %s not found", request.FileName),
+		}
+	}
+	if err := a.requireWorkspaceVisible(ctx, dagWorkspaceName(dag)); err != nil {
 		return nil, err
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 376 - 394,
GetDAGDAGRunHistory currently ignores the error returned from
a.dagStore.GetDetails which allows calling requireWorkspaceVisible with a nil
dag; change the logic to check the error from a.dagStore.GetDetails immediately
(same pattern as GetDAGSpec/GetDAGDAGRunDetails/getDAGDetailsData): if err !=
nil return the appropriate error response (propagate the error/translate to a
404 or other API error as the other handlers do) before computing
dagWorkspaceName(dag) or calling requireWorkspaceVisible, then continue to call
a.resolveDAGName, a.dagRunMgr.ListRecentStatus and readHistoryData only when dag
is non-nil and GetDetails succeeded.
internal/service/frontend/api/v1/search.go (1)

213-238: ⚠️ Potential issue | 🟡 Minor

Post-query visibility filter can return pages much smaller than the requested Limit.

docStore.SearchCursor is called with Limit and returns up to that many raw items; the in-place filter at lines 228-234 then drops all items the caller cannot see. When most matching docs live outside the caller's accessible set (e.g. the global search page on a multi-workspace instance where the caller only has access to a few), the response may contain only a handful of items even though HasMore and NextCursor are set. Clients will need many round-trips to fill a page, and users will see sparse results.

If the underlying store doesn't support workspace-aware filtering yet, consider at minimum over-fetching (e.g. request Limit * N and stop once Limit visible items are accumulated while still returning the store's next cursor) or looping the cursor server-side until the page is full or the cursor is exhausted. Same concern applies to SearchDagMatches style pagination if its underlying store doesn't push the workspace filter down.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/search.go` around lines 213 - 238, The
current post-query filtering after a.docStore.SearchCursor can yield far fewer
visible items than the requested Limit because result.Items is trimmed via
visibility.visible, so change SearchCursor handling in the search handler to
ensure the returned page contains up to the requested visible Limit: call
a.docStore.SearchCursor with an increased fetch window (e.g. Limit * N) or loop
repeatedly using the returned cursor to accumulate visible items into
result.Items until either you have populated the requested visible Limit or the
store cursor is exhausted, preserving the store's HasMore/NextCursor semantics
for the response; ensure the logic references docStore.SearchCursor, the
request.Params.Limit (normalized by normalizeSearchLimit/searchDefaultLimit),
visibility.visible, and returns via toDocSearchFeedResponse with adjusted
result.Items and the original/updated cursor/HasMore values.
♻️ Duplicate comments (1)
api/v1/api.yaml (1)

1659-1662: ⚠️ Potential issue | 🟠 Major

Disallow aggregate scope for single-resource lookups.

These endpoints still accept workspaceScope=accessible while targeting only one fileName/path. With duplicate resources across workspaces, the request is ambiguous and SearchMatchesResponse cannot identify which workspace each snippet came from. Use a non-aggregate read scope (none | workspace) here, or require workspaceScope=workspace plus workspace for follow-up calls.

Contract-level fix option
-        - $ref: "#/components/parameters/WorkspaceScope"
+        - $ref: "#/components/parameters/WorkspaceLookupScope"
         - $ref: "#/components/parameters/Workspace"
+    WorkspaceLookupScope:
+      name: workspaceScope
+      in: query
+      description: "Explicit target scope for single-resource reads: no-workspace data or one named workspace"
+      required: false
+      schema:
+        $ref: "#/components/schemas/WorkspaceLookupScope"
+
     WorkspaceMutationScope:
       name: workspaceScope
       in: query
       description: "Explicit mutable workspace scope: no-workspace data or one named workspace"
+    WorkspaceLookupScope:
+      type: string
+      description: "Workspace scope selector for single-resource read APIs"
+      enum:
+        - none
+        - workspace
+
     WorkspaceMutationScope:
       type: string
       description: "Workspace scope selector for mutation APIs"
       enum:
         - none
         - workspace

Also applies to: 1711-1719, 6451-6459

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 1659 - 1662, The endpoints referencing the
parameter set ($ref: "#/components/parameters/DAGFileName",
"#/components/parameters/RemoteNode", "#/components/parameters/WorkspaceScope",
"#/components/parameters/Workspace") are wrongly allowing an aggregate
workspaceScope for single-resource lookups (fileName/path) which makes results
ambiguous; update the API spec for those operations to disallow the aggregate
scope by replacing or constraining the WorkspaceScope parameter to a
non-aggregate read scope (e.g., only allow "none" | "workspace") or require
workspaceScope="workspace" together with the Workspace parameter for these
single-resource endpoints (apply the same change for the other occurrences
referenced at lines ~1711-1719 and ~6451-6459).
🧹 Nitpick comments (2)
internal/runtime/manager_test.go (1)

239-248: Assert the node-level error field here.

Line 243 checks latest.Error, but stale-local-run repair records the stale-process message on latest.Nodes[0].Error in this same test file’s stale-repair case. Add the node-level assertion so this test verifies the exact field that would expose an accidental repair.

Test assertion tightening
 	require.Equal(t, core.Running, latest.Status)
 	require.Equal(t, core.NodeRunning, latest.Nodes[0].Status)
 	require.Empty(t, latest.Error)
+	require.Empty(t, latest.Nodes[0].Error)
 
 	persisted, err := att.ReadStatus(ctx)
 	require.NoError(t, err)
 	require.Equal(t, core.Running, persisted.Status)
 	require.Equal(t, core.NodeRunning, persisted.Nodes[0].Status)
+	require.Empty(t, persisted.Nodes[0].Error)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/runtime/manager_test.go` around lines 239 - 248, The test asserts
that latest.Error is empty but the stale-local-run repair writes its message to
the node-level field; update the assertions after calling
th.DAGRunMgr.GetLatestStatus and att.ReadStatus to also check
latest.Nodes[0].Error and persisted.Nodes[0].Error (or the appropriate node
index) is empty (or equals the expected value) so the test verifies the
node-level error field that stale-repair would set; locate the checks around
latest, persisted and their Nodes slices in manager_test.go and add the
node-level assertions referencing latest.Nodes[0].Error and
persisted.Nodes[0].Error.
ui/src/pages/dags/index.tsx (1)

82-120: Tab-closing effect appears redundant with TabProvider key remount.

In the outer DAGs component (lines 447-456), dagTabStorageKey incorporates workspaceScopeKey and is passed as both key and storageKey to TabProvider. When the workspace selection changes, the key change causes TabProvider to unmount and remount with fresh state, so tabs is already empty by the time DAGsContent re-renders. The effect at lines 112-120 would then see a ref mismatch on the new mount, except that previousWorkspaceScopeKeyRef is (re)initialized to the current workspaceScopeKey on mount, so the early return fires and no tabs are iterated.

Net effect: the loop body never runs. If this is intentional defensive code, fine; otherwise remove the effect and ref to reduce noise, since the key-based remount is the mechanism that actually clears tabs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/dags/index.tsx` around lines 82 - 120, The effect using
previousWorkspaceScopeKeyRef to close tabs is redundant because DAGs uses
dagTabStorageKey (which includes workspaceScopeKey) as the key/storageKey for
TabProvider causing a remount that clears tabs; remove the redundant logic by
deleting the previousWorkspaceScopeKeyRef declaration and the React.useEffect
block that compares previousWorkspaceScopeKeyRef.current to workspaceScopeKey
and iterates over tabs calling closeTab (also remove any now-unused imports or
variables if flagged by the compiler).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ui/src/api/v1/schema.ts`:
- Around line 7038-7041: The workspaceScope field on the generated API types
(workspaceScope?: components["parameters"]["WorkspaceScope"] and workspace?:
components["parameters"]["Workspace"]) allows aggregate values like
"accessible", which is unsafe for path-addressed reads such as getDoc and
searchDocMatches; update the OpenAPI spec to restrict the WorkspaceScope used by
those endpoints to non-aggregate types (e.g., disallow "accessible" or use a
dedicated enum for single-document reads), then regenerate the TypeScript types
(run pnpm gen:api) so the generated schema.ts reflects the tighter type and
prevents aggregate scopes for getDoc and searchDocMatches.

In `@ui/src/contexts/AuthContext.tsx`:
- Around line 173-201: The guard in useCanWrite and useCanExecute incorrectly
treats an undefined appBarContext.workspaceSelection as permissive; change the
check so undefined selection is treated as aggregate (fail-closed) by returning
false when workspaceSelection is missing or its scope equals
WorkspaceScope.accessible (i.e., replace the single condition
appBarContext.workspaceSelection?.scope === WorkspaceScope.accessible with a
check like !appBarContext.workspaceSelection ||
appBarContext.workspaceSelection.scope === WorkspaceScope.accessible), keeping
the rest of the logic (useAuth user check, config.authMode handling, and calls
to effectiveWorkspaceRole and roleAtLeast) intact.

---

Outside diff comments:
In `@api/v1/api.yaml`:
- Around line 9696-9702: The create-request schemas for identities omit
workspaceAccess from their required arrays while returned User/APIKey objects
include it; update the OpenAPI component schemas used for identity creation (the
create request schemas that contain the workspaceAccess property) to add
"workspaceAccess" to their required list and ensure the property still
references "#/components/schemas/WorkspaceAccess"; also update any examples/docs
for those schemas to include a valid WorkspaceAccess value (or alternatively
document/encode a safe backward-compatible default if you intend not to require
it).

In `@internal/persis/fileserviceregistry/registry.go`:
- Around line 177-188: The instance file is being removed even if the heartbeat
goroutine may still be running when the select times out; ensure removal happens
only after the heartbeat goroutine has actually exited by waiting on the
registry waitgroup. After the select (which logs the timeout), call
reg.wg.Wait() (or otherwise ensure reg.wg has completed) before calling
removeInstanceFile(reg.fileName), so that removeInstanceFile only runs once the
heartbeat goroutine has fully stopped; reference the done channel, reg.wg.Wait,
removeInstanceFile and reg.fileName to locate and update the code.

In `@internal/service/frontend/api/v1/dags.go`:
- Around line 376-394: GetDAGDAGRunHistory currently ignores the error returned
from a.dagStore.GetDetails which allows calling requireWorkspaceVisible with a
nil dag; change the logic to check the error from a.dagStore.GetDetails
immediately (same pattern as GetDAGSpec/GetDAGDAGRunDetails/getDAGDetailsData):
if err != nil return the appropriate error response (propagate the
error/translate to a 404 or other API error as the other handlers do) before
computing dagWorkspaceName(dag) or calling requireWorkspaceVisible, then
continue to call a.resolveDAGName, a.dagRunMgr.ListRecentStatus and
readHistoryData only when dag is non-nil and GetDetails succeeded.

In `@internal/service/frontend/api/v1/search.go`:
- Around line 213-238: The current post-query filtering after
a.docStore.SearchCursor can yield far fewer visible items than the requested
Limit because result.Items is trimmed via visibility.visible, so change
SearchCursor handling in the search handler to ensure the returned page contains
up to the requested visible Limit: call a.docStore.SearchCursor with an
increased fetch window (e.g. Limit * N) or loop repeatedly using the returned
cursor to accumulate visible items into result.Items until either you have
populated the requested visible Limit or the store cursor is exhausted,
preserving the store's HasMore/NextCursor semantics for the response; ensure the
logic references docStore.SearchCursor, the request.Params.Limit (normalized by
normalizeSearchLimit/searchDefaultLimit), visibility.visible, and returns via
toDocSearchFeedResponse with adjusted result.Items and the original/updated
cursor/HasMore values.

In `@ui/src/api/v1/schema.ts`:
- Around line 6082-6086: The createNewDAG request shape currently only accepts
remoteNode; update the OpenAPI source so the parameters.query for createNewDAG
includes workspaceScope?: WorkspaceMutationScope and legacy-compatible
workspace?: Workspace (in addition to remoteNode) to allow explicit "no
workspace" vs "named workspace" semantics, then regenerate the frontend types
(do not hand-edit ui/src/api/v1/schema.ts) by running the prescribed generator
(pnpm gen:api) so the generated type for parameters.query reflects
WorkspaceMutationScope and Workspace alongside remoteNode.

---

Duplicate comments:
In `@api/v1/api.yaml`:
- Around line 1659-1662: The endpoints referencing the parameter set ($ref:
"#/components/parameters/DAGFileName", "#/components/parameters/RemoteNode",
"#/components/parameters/WorkspaceScope", "#/components/parameters/Workspace")
are wrongly allowing an aggregate workspaceScope for single-resource lookups
(fileName/path) which makes results ambiguous; update the API spec for those
operations to disallow the aggregate scope by replacing or constraining the
WorkspaceScope parameter to a non-aggregate read scope (e.g., only allow "none"
| "workspace") or require workspaceScope="workspace" together with the Workspace
parameter for these single-resource endpoints (apply the same change for the
other occurrences referenced at lines ~1711-1719 and ~6451-6459).

---

Nitpick comments:
In `@internal/runtime/manager_test.go`:
- Around line 239-248: The test asserts that latest.Error is empty but the
stale-local-run repair writes its message to the node-level field; update the
assertions after calling th.DAGRunMgr.GetLatestStatus and att.ReadStatus to also
check latest.Nodes[0].Error and persisted.Nodes[0].Error (or the appropriate
node index) is empty (or equals the expected value) so the test verifies the
node-level error field that stale-repair would set; locate the checks around
latest, persisted and their Nodes slices in manager_test.go and add the
node-level assertions referencing latest.Nodes[0].Error and
persisted.Nodes[0].Error.

In `@ui/src/pages/dags/index.tsx`:
- Around line 82-120: The effect using previousWorkspaceScopeKeyRef to close
tabs is redundant because DAGs uses dagTabStorageKey (which includes
workspaceScopeKey) as the key/storageKey for TabProvider causing a remount that
clears tabs; remove the redundant logic by deleting the
previousWorkspaceScopeKeyRef declaration and the React.useEffect block that
compares previousWorkspaceScopeKeyRef.current to workspaceScopeKey and iterates
over tabs calling closeTab (also remove any now-unused imports or variables if
flagged by the compiler).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bc9ad8ac-b903-4fcf-afbb-080cd7cd1c1a

📥 Commits

Reviewing files that changed from the base of the PR and between 445ad95 and 21c6267.

📒 Files selected for processing (30)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/core/exec/dag.go
  • internal/core/exec/workspace.go
  • internal/core/exec/workspace_test.go
  • internal/intg/distr/execution_test.go
  • internal/persis/filedag/store.go
  • internal/persis/filedag/store_test.go
  • internal/persis/fileserviceregistry/registry.go
  • internal/persis/fileserviceregistry/types.go
  • internal/runtime/manager.go
  • internal/runtime/manager_test.go
  • internal/service/frontend/api/v1/dags.go
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/queues.go
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/transformer.go
  • internal/service/frontend/api/v1/workspace_access.go
  • internal/service/frontend/api/v1/workspaces.go
  • ui/src/api/v1/schema.ts
  • ui/src/components/WorkspaceAccessEditor.tsx
  • ui/src/contexts/AppBarContext.ts
  • ui/src/contexts/AuthContext.tsx
  • ui/src/lib/workspace.ts
  • ui/src/lib/workspaceAccess.ts
  • ui/src/menu.tsx
  • ui/src/pages/dags/index.tsx
  • ui/src/pages/docs/components/DocEditor.tsx
  • ui/src/pages/docs/index.tsx
  • ui/src/pages/users/UserFormModal.tsx
✅ Files skipped from review due to trivial changes (2)
  • internal/intg/distr/execution_test.go
  • ui/src/pages/docs/index.tsx
🚧 Files skipped from review as they are similar to previous changes (6)
  • ui/src/menu.tsx
  • ui/src/pages/users/UserFormModal.tsx
  • internal/persis/filedag/store.go
  • ui/src/components/WorkspaceAccessEditor.tsx
  • internal/service/frontend/api/v1/queues.go
  • internal/service/frontend/api/v1/docs.go

Comment thread ui/src/api/v1/schema.ts Outdated
Comment thread ui/src/contexts/AuthContext.tsx
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
api/v1/api.yaml (1)

1661-1662: ⚠️ Potential issue | 🟠 Major

Constrain DAG match lookups to a single workspace.

/search/dags/{fileName}/matches is keyed only by fileName; allowing workspaceScope=accessible can make duplicate file names across accessible workspaces ambiguous. Mirror the docs match endpoint and use the narrower scope here.

Proposed contract fix
-        - $ref: "#/components/parameters/WorkspaceScope"
+        - $ref: "#/components/parameters/WorkspaceMutationScope"
         - $ref: "#/components/parameters/Workspace"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 1661 - 1662, The DAG matches endpoint
/search/dags/{fileName}/matches currently includes both parameters $ref:
"#/components/parameters/WorkspaceScope" and $ref:
"#/components/parameters/Workspace", which allows ambiguous lookups across
multiple workspaces; remove the WorkspaceScope parameter and keep only $ref:
"#/components/parameters/Workspace" so the endpoint is constrained to a single
workspace (mirror the docs match endpoint behavior) to avoid duplicate fileName
ambiguity.
ui/src/api/v1/schema.ts (1)

6973-6976: ⚠️ Potential issue | 🟠 Major

Use a non-aggregate scope for DAG match lookups.

searchDagMatches targets one fileName, but WorkspaceScope still allows accessible. If two workspaces contain the same DAG file name, aggregate scope can fetch or merge snippets from the wrong DAG. Use a non-aggregate scope here, or explicitly reject accessible in the OpenAPI contract and generated type.

Suggested generated-shape change
-                /** `@description` Explicit workspace scope: accessible data, no-workspace data, or one named workspace */
-                workspaceScope?: components["parameters"]["WorkspaceScope"];
+                /** `@description` Explicit DAG match scope: no-workspace data or one named workspace */
+                workspaceScope?: components["parameters"]["WorkspaceMutationScope"];

Please update the OpenAPI source and regenerate this file instead of editing schema.ts directly. As per coding guidelines, Frontend API types must be generated from OpenAPI spec via pnpm gen:api to maintain type safety with the backend.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/api/v1/schema.ts` around lines 6973 - 6976, The OpenAPI-generated
types for searchDagMatches currently allow an aggregate WorkspaceScope
(accessible), which can mix DAGs with the same fileName across workspaces;
update the OpenAPI spec to restrict the searchDagMatches operation's
workspaceScope parameter to a non-aggregate scope (e.g., exclude or disallow
"accessible") or explicitly require a single-workspace scope, leaving the
workspace parameter as the selected workspace, then re-run the generator (pnpm
gen:api) so the generated schema.ts (affecting workspaceScope and workspace)
reflects the non-aggregate constraint; do not edit schema.ts directly.
🧹 Nitpick comments (1)
internal/service/frontend/api/v1/docs.go (1)

776-791: Minor: avoid the redundant initial docPointReadScopeForParams call.

When hasQuery is true, the result of the unconditional call on line 777 is immediately discarded and replaced on line 787. This still performs a workspace list lookup via noWorkspaceDocVisibility for nothing. Consider structuring as an if/else to only compute visibility once.

♻️ Proposed refactor
 	path, queryString, hasQuery := strings.Cut(docID, "?")
-	workspaceName, visibility, err := a.docPointReadScopeForParams(ctx, nil, nil)
-	if err != nil {
-		return nil, err
-	}
+	var (
+		workspaceName string
+		visibility    docWorkspaceVisibility
+		err           error
+	)
 	if hasQuery {
 		params, err := url.ParseQuery(queryString)
 		if err != nil {
 			return nil, err
 		}
 		scopeParam, workspaceParam := workspaceMutationScopeParamsFromValues(params)
 		workspaceName, visibility, err = a.docPointReadScopeForParams(ctx, scopeParam, workspaceParam)
 		if err != nil {
 			return nil, err
 		}
+	} else {
+		workspaceName, visibility, err = a.docPointReadScopeForParams(ctx, nil, nil)
+		if err != nil {
+			return nil, err
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/docs.go` around lines 776 - 791, The current
code calls a.docPointReadScopeForParams(ctx, nil, nil) unconditionally and then
overwrites its result if hasQuery is true; remove the redundant initial call and
instead branch: if hasQuery, parse queryString, call
workspaceMutationScopeParamsFromValues(params) and then call
a.docPointReadScopeForParams(ctx, scopeParam, workspaceParam); else call
a.docPointReadScopeForParams(ctx, nil, nil). Use the existing path, queryString,
hasQuery variables and preserve error handling for url.ParseQuery and both
docPointReadScopeForParams calls.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/v1/api.yaml`:
- Around line 6999-7006: The API accepts different workspace-name formats across
endpoints: the query parameter "Workspace" and selector/grant paths use pattern
"^[A-Za-z0-9_-]+$", but create/update workspace request schemas allow any
string; unify them by introducing a single reusable workspace-name schema (e.g.,
components/schemas/WorkspaceName) that sets type: string and pattern:
"^[A-Za-z0-9_-]+$", then replace inline definitions by $ref to that schema for
the "Workspace" parameter, all createWorkspace/updateWorkspace request body
schemas, and any WorkspaceAccess/selector/grant path schemas so every place uses
the same validated pattern.

---

Duplicate comments:
In `@api/v1/api.yaml`:
- Around line 1661-1662: The DAG matches endpoint
/search/dags/{fileName}/matches currently includes both parameters $ref:
"#/components/parameters/WorkspaceScope" and $ref:
"#/components/parameters/Workspace", which allows ambiguous lookups across
multiple workspaces; remove the WorkspaceScope parameter and keep only $ref:
"#/components/parameters/Workspace" so the endpoint is constrained to a single
workspace (mirror the docs match endpoint behavior) to avoid duplicate fileName
ambiguity.

In `@ui/src/api/v1/schema.ts`:
- Around line 6973-6976: The OpenAPI-generated types for searchDagMatches
currently allow an aggregate WorkspaceScope (accessible), which can mix DAGs
with the same fileName across workspaces; update the OpenAPI spec to restrict
the searchDagMatches operation's workspaceScope parameter to a non-aggregate
scope (e.g., exclude or disallow "accessible") or explicitly require a
single-workspace scope, leaving the workspace parameter as the selected
workspace, then re-run the generator (pnpm gen:api) so the generated schema.ts
(affecting workspaceScope and workspace) reflects the non-aggregate constraint;
do not edit schema.ts directly.

---

Nitpick comments:
In `@internal/service/frontend/api/v1/docs.go`:
- Around line 776-791: The current code calls a.docPointReadScopeForParams(ctx,
nil, nil) unconditionally and then overwrites its result if hasQuery is true;
remove the redundant initial call and instead branch: if hasQuery, parse
queryString, call workspaceMutationScopeParamsFromValues(params) and then call
a.docPointReadScopeForParams(ctx, scopeParam, workspaceParam); else call
a.docPointReadScopeForParams(ctx, nil, nil). Use the existing path, queryString,
hasQuery variables and preserve error handling for url.ParseQuery and both
docPointReadScopeForParams calls.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2d43bd40-c996-459b-a44c-a5f83ae729e8

📥 Commits

Reviewing files that changed from the base of the PR and between 21c6267 and 72f8182.

📒 Files selected for processing (17)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/runtime/manager.go
  • internal/runtime/manager_test.go
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/workspace_access.go
  • ui/src/api/v1/schema.ts
  • ui/src/contexts/AuthContext.tsx
  • ui/src/contexts/DocTabContext.tsx
  • ui/src/features/search/components/SearchResult.tsx
  • ui/src/hooks/useDocSSE.ts
  • ui/src/lib/workspace.ts
  • ui/src/pages/docs/components/DocEditor.tsx
  • ui/src/pages/docs/components/DocTabEditorPanel.tsx
  • ui/src/pages/docs/components/DocTreeSidebar.tsx
  • ui/src/pages/docs/index.tsx
🚧 Files skipped from review as they are similar to previous changes (6)
  • internal/runtime/manager_test.go
  • ui/src/pages/docs/components/DocEditor.tsx
  • internal/runtime/manager.go
  • ui/src/hooks/useDocSSE.ts
  • ui/src/pages/docs/components/DocTreeSidebar.tsx
  • ui/src/lib/workspace.ts

Comment thread api/v1/api.yaml Outdated
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.


Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
internal/service/frontend/api/v1/dags.go (2)

376-394: ⚠️ Potential issue | 🟠 Major

Visibility check can pass on non-existent DAG when GetDetails fails.

At line 377, the error from a.dagStore.GetDetails is dropped with _. If the DAG doesn't exist (or the store fails), dag is nil, so dagWorkspaceName(dag) returns "" and requireWorkspaceVisible(ctx, "") will succeed for any caller with no-workspace visibility. The handler then continues with resolveDAGName(ctx, request.FileName) (which falls back to fileName) and returns history data, potentially leaking existence/metadata for arbitrary file names to users who only have no-workspace access.

For consistency with getDAGDetailsData (lines 421-427) which properly propagates the load error, consider:

🔒 Proposed fix
-	dag, _ := a.dagStore.GetDetails(ctx, request.FileName, spec.WithAllowBuildErrors())
-	if err := a.requireWorkspaceVisible(ctx, dagWorkspaceName(dag)); err != nil {
-		return nil, err
-	}
+	dag, err := a.dagStore.GetDetails(ctx, request.FileName, spec.WithAllowBuildErrors())
+	if err != nil {
+		return nil, &Error{
+			HTTPStatus: http.StatusNotFound,
+			Code:       api.ErrorCodeNotFound,
+			Message:    fmt.Sprintf("DAG %s not found", request.FileName),
+		}
+	}
+	if err := a.requireWorkspaceVisible(ctx, dagWorkspaceName(dag)); err != nil {
+		return nil, err
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 376 - 394,
GetDAGDAGRunHistory drops the error from a.dagStore.GetDetails causing nil dag
to bypass visibility checks; change the call to capture the error (dag, err :=
a.dagStore.GetDetails(...)) and if err != nil return nil, err (same behavior as
getDAGDetailsData), so you only call dagWorkspaceName(dag),
requireWorkspaceVisible, resolveDAGName and readHistoryData when dag was
successfully loaded; ensure you keep the subsequent uses of dag
(dagWorkspaceName, readHistoryData) and dagName resolution unchanged.

734-810: ⚠️ Potential issue | 🟠 Major

GetAllDAGTags duplicates GetAllDAGLabels with multiple issues.

  1. The bodies of GetAllDAGLabels (734-771) and GetAllDAGTags (773-810) are byte-for-byte identical except for the response field name (Labels vs Tags). Both iterate dag.Labels.Strings() in the filtered path and call a.dagStore.LabelList(ctx) in the fallback. Extract a shared helper (e.g., collectLabels(ctx, params)) to eliminate drift risk.

  2. Inside GetAllDAGTags, the local variable is named labels (lines 786, 802) and error messages read "error getting labels" (lines 783, 804), which is confusing in a tags endpoint.

  3. dag.Labels.Strings() and a.dagStore.LabelList(ctx) both include the internal workspace=<name> label without filtering. This leaks workspace names through the tags/labels endpoints and pollutes the tag picker. Add filtering to exclude labels with key WorkspaceLabelKey in both the scoped and fallback paths.

  4. Replace int(^uint(0)>>1) (lines 738, 777) with the named constant math.MaxInt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dags.go` around lines 734 - 810,
GetAllDAGTags is a copy of GetAllDAGLabels and leaks workspace labels and uses
confusing names/constants; extract a shared helper (e.g., collectLabels(ctx,
workspaceScope, workspace) used by GetAllDAGLabels and GetAllDAGTags) that does
the List path (using exec.NewPaginator(1, math.MaxInt)) and fallback to
a.dagStore.LabelList(ctx), deduplicate values, and filter out any label whose
key equals WorkspaceLabelKey before returning; update GetAllDAGTags to use the
helper, rename local variables/errors from "labels" to "tags" where appropriate,
and replace int(^uint(0)>>1) with math.MaxInt everywhere.
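A rough shape for the shared helper suggested for GetAllDAGLabels/GetAllDAGTags; `collectLabels` here operates on plain label strings and `workspaceLabelKey` is a stand-in for the real `WorkspaceLabelKey` constant, so this sketches only the dedupe-and-filter step, not the store wiring:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// workspaceLabelKey stands in for the internal WorkspaceLabelKey constant;
// labels with this key should never leak through the tags/labels endpoints.
const workspaceLabelKey = "workspace"

// collectLabels deduplicates raw label strings and drops internal
// workspace=<name> labels, so both endpoints can share one body and differ
// only in which response field they populate.
func collectLabels(raw []string) []string {
	seen := make(map[string]struct{})
	var out []string
	for _, l := range raw {
		key, _, _ := strings.Cut(l, "=")
		if key == workspaceLabelKey {
			continue // filter the internal workspace label in every path
		}
		if _, ok := seen[l]; ok {
			continue // dedupe repeats across DAGs
		}
		seen[l] = struct{}{}
		out = append(out, l)
	}
	sort.Strings(out)
	return out
}

func main() {
	labels := []string{"team=infra", "workspace=alpha", "env=prod", "team=infra"}
	fmt.Println(collectLabels(labels))
}
```

In the real handlers the same helper would be fed either the scoped `dag.Labels.Strings()` values or the `LabelList(ctx)` fallback, with `math.MaxInt` replacing `int(^uint(0)>>1)` in the paginator setup.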
api/v1/api.yaml (1)

1530-1538: ⚠️ Potential issue | 🟠 Major

Add workspace scoping to /dags/search.

This legacy DAG search endpoint still cannot request accessible, none, or a named workspace, so it remains inconsistent with the explicit scope model used by the other DAG/search endpoints.

Contract fix
       parameters:
         - $ref: "#/components/parameters/RemoteNode"
+        - $ref: "#/components/parameters/WorkspaceScope"
+        - $ref: "#/components/parameters/Workspace"
         - name: "q"
           in: "query"
           required: true

As per coding guidelines, Generate REST API server code from OpenAPI spec at api/v1/api.yaml using oapi-codegen via make api.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 1530 - 1538, The /dags/search operation is
missing workspace scoping like other DAG search endpoints; update its parameter
block to include the workspace scoping parameter (same name and schema used by
the other DAG/search endpoints — e.g., the "workspace" query parameter that
accepts "accessible", "none", or a workspace name), ensure it appears alongside
the existing RemoteNode and "q" parameters, and then regenerate server code with
oapi-codegen by running make api; locate the /dags/search operation in
api/v1/api.yaml and mirror the workspace parameter definition and allowed values
used by the other DAG/search endpoints.
♻️ Duplicate comments (1)
api/v1/api.yaml (1)

775-776: ⚠️ Potential issue | 🟠 Major

Make aggregate list results targetable by follow-up APIs.

These endpoints can now return resources across workspaces, but the path-based DAG/DAG-run follow-ups in this spec still generally cannot accept the returned workspace to disambiguate duplicates. Add WorkspaceMutationScope + Workspace to single-resource DAG/DAG-run detail/action endpoints, or constrain aggregate listing where follow-up targeting is unsupported.

Example target-scope pattern
       parameters:
         - $ref: "#/components/parameters/RemoteNode"
+        - $ref: "#/components/parameters/WorkspaceMutationScope"
+        - $ref: "#/components/parameters/Workspace"
         - $ref: "#/components/parameters/DAGFileName"
       parameters:
         - $ref: "#/components/parameters/RemoteNode"
+        - $ref: "#/components/parameters/WorkspaceMutationScope"
+        - $ref: "#/components/parameters/Workspace"
         - $ref: "#/components/parameters/DAGName"
         - $ref: "#/components/parameters/DAGRunId"

As per coding guidelines, Generate REST API server code from OpenAPI spec at api/v1/api.yaml using oapi-codegen via make api.

Also applies to: 1820-1821, 2031-2032

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 775 - 776, The aggregate list endpoints
currently return resources across workspaces but downstream DAG/DAG-run
single-resource detail/action endpoints lack a way to accept the returned
workspace; update the OpenAPI spec to add the WorkspaceMutationScope and
Workspace parameters to all single-resource DAG and DAG-run detail/action paths
(e.g., the operations referencing DAG, DAG-run, and their detail/action
operations) so callers can disambiguate targets, or alternatively restrict
aggregate listing endpoints to include Workspace/WorkspaceScope when follow-up
targeting is unsupported; ensure you add the $ref entries for
"#/components/parameters/WorkspaceMutationScope" and
"#/components/parameters/Workspace" to those single-resource operations (and
remove or adjust aggregate list params if you choose to constrain listings).
🧹 Nitpick comments (1)
ui/src/features/search/components/SearchResult.tsx (1)

175-254: Optional: extract the per-result loadMore closure into a helper.

The DAG and Doc branches both build a structurally identical item (key, kind, title, link, initialMatches, initialHasMoreMatches, initialNextCursor, loadMore) and their loadMore callbacks only differ in the client.GET endpoint, path params, and the extra workspace query spread. Extracting a small factory (e.g., buildDagItem(result, query, remoteNode, client) / buildDocItem(...)) — or a single generic helper parameterized by endpoint and query-builder — would remove the ~40-line duplication of the response-unwrapping block and keep the items expression compact.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/features/search/components/SearchResult.tsx` around lines 175 - 254,
The items map duplicates construction and response-unwrapping logic; extract the
per-result loadMore closure and surrounding item builder into a helper (e.g.,
buildDagItem and buildDocItem or a single buildItem factory) that returns the
object with key, kind, title, link, initialMatches, initialHasMoreMatches,
initialNextCursor and a loadMore implementation. Use existing helpers
workspaceMutationQueryForWorkspace, visibleDocumentPathForWorkspace, and
workspaceDocumentQueryForWorkspace to compute the per-result params and pass
endpoint, path params (if any), query fields (remoteNode, q, cursor, workspace
spreads) and client into the factory so loadMore only calls client.GET once and
returns the standardized { error, matches, hasMore, nextCursor } shape. Replace
the inline maps for the DAG and Doc branches to call the new factory (e.g.,
buildDagItem(result, query, remoteNode, client) / buildDocItem(result, query,
remoteNode, client)) to remove the duplicated response-unwrapping block.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/service/frontend/api/v1/dags.go`:
- Around line 292-303: Handle and surface fatal errors from a.dagStore.LoadSpec
instead of dropping them: capture the returned err from
a.dagStore.LoadSpec(nextDAG, err := a.dagStore.LoadSpec(...)), and if err != nil
and it's not a build-validation error (i.e., not a core.ErrorList), return that
err immediately (or at minimum log and return) rather than proceeding to
a.dagStore.UpdateSpec; if err != nil but nextDAG != nil (allowed build errors),
allow the workspace check via
requireDAGWriteForWorkspace(dagWorkspaceName(nextDAG)) and continue, and
consider avoiding re-parse by passing the already-loaded nextDAG to an
UpdateSpec variant or short-circuiting UpdateSpec parsing when nextDAG is
available.

In `@ui/src/features/search/components/SearchResult.tsx`:
- Around line 185-229: The DAG result link doesn't include the result's
workspace query like Doc links do, causing the design page to open in the
current workspace instead of the result's workspace; update the DAG result block
(where link: `/dags/${encodeURI(result.fileName)}/spec` and dagWorkspaceQuery is
used) to build a linkSearch similar to the Doc branch (e.g., const linkSearch =
result.workspace ? `?workspace=${encodeURIComponent(result.workspace)}` : '')
and append it to the DAG link so the link becomes
`/dags/${encodeURI(result.fileName)}/spec${linkSearch}`; keep the existing
loadMore usage of dagWorkspaceQuery unchanged.

In `@ui/src/lib/workspace.ts`:
- Around line 252-265: The code currently returns defaultWorkspaceSelection()
immediately when WORKSPACE_STORAGE_KEY exists but sanitizeWorkspaceName(legacy)
is falsy, preventing a possible migration from
LEGACY_COCKPIT_WORKSPACE_STORAGE_KEY; change the logic in the block that reads
WORKSPACE_STORAGE_KEY so that you still removeItem(WORKSPACE_STORAGE_KEY) and
only persist and return when sanitized is truthy (using sanitizeWorkspaceName
and persistWorkspaceSelection), but if sanitized is falsy do NOT return—let the
function fall through to the subsequent legacy cockpit branch (which checks
LEGACY_COCKPIT_WORKSPACE_STORAGE_KEY) and only call defaultWorkspaceSelection()
at the very end if no valid legacy values are found.

---

Outside diff comments:
In `@api/v1/api.yaml`:
- Around line 1530-1538: The /dags/search operation is missing workspace scoping
like other DAG search endpoints; update its parameter block to include the
workspace scoping parameter (same name and schema used by the other DAG/search
endpoints — e.g., the "workspace" query parameter that accepts "accessible",
"none", or a workspace name), ensure it appears alongside the existing
RemoteNode and "q" parameters, and then regenerate server code with oapi-codegen
by running make api; locate the /dags/search operation in api/v1/api.yaml and
mirror the workspace parameter definition and allowed values used by the other
DAG/search endpoints.

In `@internal/service/frontend/api/v1/dags.go`:
- Around line 376-394: GetDAGDAGRunHistory drops the error from
a.dagStore.GetDetails causing nil dag to bypass visibility checks; change the
call to capture the error (dag, err := a.dagStore.GetDetails(...)) and if err !=
nil return nil, err (same behavior as getDAGDetailsData), so you only call
dagWorkspaceName(dag), requireWorkspaceVisible, resolveDAGName and
readHistoryData when dag was successfully loaded; ensure you keep the subsequent
uses of dag (dagWorkspaceName, readHistoryData) and dagName resolution
unchanged.
- Around line 734-810: GetAllDAGTags is a copy of GetAllDAGLabels and leaks
workspace labels and uses confusing names/constants; extract a shared helper
(e.g., collectLabels(ctx, workspaceScope, workspace) used by GetAllDAGLabels and
GetAllDAGTags) that does the List path (using exec.NewPaginator(1, math.MaxInt))
and fallback to a.dagStore.LabelList(ctx), deduplicate values, and filter out
any label whose key equals WorkspaceLabelKey before returning; update
GetAllDAGTags to use the helper, rename local variables/errors from "labels" to
"tags" where appropriate, and replace int(^uint(0)>>1) with math.MaxInt
everywhere.

---

Duplicate comments:
In `@api/v1/api.yaml`:
- Around line 775-776: The aggregate list endpoints currently return resources
across workspaces but downstream DAG/DAG-run single-resource detail/action
endpoints lack a way to accept the returned workspace; update the OpenAPI spec
to add the WorkspaceMutationScope and Workspace parameters to all
single-resource DAG and DAG-run detail/action paths (e.g., the operations
referencing DAG, DAG-run, and their detail/action operations) so callers can
disambiguate targets, or alternatively restrict aggregate listing endpoints to
include Workspace/WorkspaceScope when follow-up targeting is unsupported; ensure
you add the $ref entries for "#/components/parameters/WorkspaceMutationScope"
and "#/components/parameters/Workspace" to those single-resource operations (and
remove or adjust aggregate list params if you choose to constrain listings).

---

Nitpick comments:
In `@ui/src/features/search/components/SearchResult.tsx`:
- Around line 175-254: The items map duplicates construction and
response-unwrapping logic; extract the per-result loadMore closure and
surrounding item builder into a helper (e.g., buildDagItem and buildDocItem or a
single buildItem factory) that returns the object with key, kind, title, link,
initialMatches, initialHasMoreMatches, initialNextCursor and a loadMore
implementation. Use existing helpers workspaceMutationQueryForWorkspace,
visibleDocumentPathForWorkspace, and workspaceDocumentQueryForWorkspace to
compute the per-result params and pass endpoint, path params (if any), query
fields (remoteNode, q, cursor, workspace spreads) and client into the factory so
loadMore only calls client.GET once and returns the standardized { error,
matches, hasMore, nextCursor } shape. Replace the inline maps for the DAG and
Doc branches to call the new factory (e.g., buildDagItem(result, query,
remoteNode, client) / buildDocItem(result, query, remoteNode, client)) to remove
the duplicated response-unwrapping block.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 10fa43c8-6f68-46d0-9963-ae48f0c6b7db

📥 Commits

Reviewing files that changed from the base of the PR and between 72f8182 and a13bb64.

📒 Files selected for processing (10)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/service/frontend/api/v1/dags.go
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/search_test.go
  • ui/src/api/v1/schema.ts
  • ui/src/features/search/components/SearchResult.tsx
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • ui/src/lib/workspace.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • internal/service/frontend/api/v1/search.go
  • internal/service/frontend/api/v1/docs.go

Comment thread internal/service/frontend/api/v1/dags.go Outdated
Comment thread ui/src/features/search/components/SearchResult.tsx Outdated
Comment thread ui/src/lib/workspace.ts

@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
api/v1/api.yaml (1)

9683-9704: ⚠️ Potential issue | 🟠 Major

Make the workspaceAccess create default explicit.

workspaceAccess is optional on create requests, but the API contract does not define whether omission means all workspaces, no workspaces, or role-derived access. For an authorization field, make it required or document a safe server default directly on the create request properties.

Contract fix option
 CreateUserRequest:
   type: object
   description: "Request body for creating a new user"
   properties:
@@
     workspaceAccess:
+      description: "Workspace access policy. If omitted, the server applies the legacy default of <documented default>."
       $ref: "#/components/schemas/WorkspaceAccess"
 CreateAPIKeyRequest:
   type: object
   description: "Create API key request"
   properties:
@@
     workspaceAccess:
+      description: "Workspace access policy. If omitted, the server applies the legacy default of <documented default>."
       $ref: "#/components/schemas/WorkspaceAccess"

Or, if callers must always choose explicitly:

   required:
     - name
     - role
+    - workspaceAccess

Also applies to: 9845-9865

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/v1/api.yaml` around lines 9683 - 9704, The CreateUserRequest schema
leaves workspaceAccess ambiguous; update the CreateUserRequest definition to
make workspaceAccess explicit: either add "workspaceAccess" to the required
array if callers must always supply it, or keep it optional but add a "default"
value and a clear "description" explaining the server-side default behavior
(e.g., "default: no access" or "default: same as role") so consumers know the
semantics; adjust the CreateUserRequest schema (and the same CreateUserRequest
occurrence referenced elsewhere) and ensure the WorkspaceAccess schema is
referenced consistently in the description.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Outside diff comments:
In `@api/v1/api.yaml`:
- Around line 9683-9704: The CreateUserRequest schema leaves workspaceAccess
ambiguous; update the CreateUserRequest definition to make workspaceAccess
explicit: either add "workspaceAccess" to the required array if callers must
always supply it, or keep it optional but add a "default" value and a clear
"description" explaining the server-side default behavior (e.g., "default: no
access" or "default: same as role") so consumers know the semantics; adjust the
CreateUserRequest schema (and the same CreateUserRequest occurrence referenced
elsewhere) and ensure the WorkspaceAccess schema is referenced consistently in
the description.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4c9d0c4c-e2ab-4259-acc5-51cf2e816f08

📥 Commits

Reviewing files that changed from the base of the PR and between a13bb64 and 820d896.

📒 Files selected for processing (3)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • ui/src/api/v1/schema.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • ui/src/api/v1/schema.ts

Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/v1/api.yaml`:
- Around line 775-776: The exact DAG and DAG-run endpoints lack the
WorkspaceScope/Workspace parameters, so generated clients cannot target a
specific workspace; update each exact operation (e.g., the operations for
/dags/{fileName}, /dags/{fileName}/spec, /dags/{fileName}/start and
/dag-runs/{name}/{dagRunId}) to include the two parameter refs ($ref:
"#/components/parameters/WorkspaceScope" and $ref:
"#/components/parameters/Workspace") in the operation's parameters array (same
pattern used by the aggregate endpoints), or alternatively mark the aggregate
endpoints as single-workspace only if cross-workspace follow-ups aren’t
supported — locate the operations in api.yaml by the path strings and add the
WorkspaceScope and Workspace $ref entries to each operation’s parameters list.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c7853480-da1d-47e9-87cb-570953572ef8

📥 Commits

Reviewing files that changed from the base of the PR and between a13bb64 and 68bec8f.

📒 Files selected for processing (10)
  • api/v1/api.gen.go
  • api/v1/api.yaml
  • internal/service/frontend/api/v1/dags.go
  • internal/service/frontend/api/v1/dags_internal_test.go
  • ui/src/api/v1/schema.ts
  • ui/src/features/search/components/SearchResult.tsx
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • ui/src/lib/__tests__/workspace.test.ts
  • ui/src/lib/workspace.ts
  • ui/src/pages/dags/dag/index.tsx
✅ Files skipped from review due to trivial changes (1)
  • ui/src/lib/__tests__/workspace.test.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • ui/src/features/search/components/__tests__/SearchResult.test.tsx
  • ui/src/pages/dags/dag/index.tsx
  • internal/service/frontend/api/v1/dags.go
  • ui/src/api/v1/schema.ts

Comment thread api/v1/api.yaml
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 20, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
ui/e2e/helpers/e2e.ts (1)

174-187: Optional: document/guard the navigation precondition.

page.evaluate touches localStorage, which requires the page to already be on an app origin. All current callers invoke this after loginViaUI(...) (which navigates to /login and then away), so this works today. If a future caller invokes useNoWorkspaceScope on a fresh page (still at about:blank), the evaluate will throw a SecurityError. Consider either a short JSDoc noting the precondition or defensively calling await page.goto('/') first.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/e2e/helpers/e2e.ts` around lines 174 - 187, The helper useNoWorkspaceScope
currently calls page.evaluate to access localStorage (using
WORKSPACE_SCOPE_STORAGE_KEY, LEGACY_WORKSPACE_STORAGE_KEY,
LEGACY_COCKPIT_WORKSPACE_STORAGE_KEY) which will throw a SecurityError if the
page is still at about:blank; update useNoWorkspaceScope to ensure the page is
on the app origin before touching localStorage — either add a defensive await
page.goto('/') (or the app root) when the page.url() is about:blank, or add
JSDoc describing the precondition that callers must have navigated (e.g., via
loginViaUI) before calling useNoWorkspaceScope so future callers won’t hit the
SecurityError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@ui/e2e/helpers/e2e.ts`:
- Around line 174-187: The helper useNoWorkspaceScope currently calls
page.evaluate to access localStorage (using WORKSPACE_SCOPE_STORAGE_KEY,
LEGACY_WORKSPACE_STORAGE_KEY, LEGACY_COCKPIT_WORKSPACE_STORAGE_KEY) which will
throw a SecurityError if the page is still at about:blank; update
useNoWorkspaceScope to ensure the page is on the app origin before touching
localStorage — either add a defensive await page.goto('/') (or the app root)
when the page.url() is about:blank, or add JSDoc describing the precondition
that callers must have navigated (e.g., via loginViaUI) before calling
useNoWorkspaceScope so future callers won’t hit the SecurityError.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f8f685f8-a48f-4a40-b436-3ed8572c87f7

📥 Commits

Reviewing files that changed from the base of the PR and between 68bec8f and 4dcd6de.

📒 Files selected for processing (3)
  • ui/e2e/auth-flows.spec.ts
  • ui/e2e/dag-crud.spec.ts
  • ui/e2e/helpers/e2e.ts

@yottahmd changed the title from "feat: separate aggregate and no-workspace scopes" to "feat: separate all and default workspace scopes" Apr 21, 2026
@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.


@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (3)
internal/service/frontend/api/v1/docs.go (2)

445-457: Avoid mutating the store-returned doc in place.

doc.ID = visibleDocPath(...) mutates the object returned by a.docStore.Get. If any doc store implementation reuses cached/pooled instances, this leaks workspace-stripped IDs into future reads. Same pattern at Line 787 in GetDocContentData. Prefer computing into the response directly and leaving doc untouched:

🛡️ Suggested change
-	rawID := doc.ID
-	doc.ID = visibleDocPath(workspaceName, doc.ID)
-	resp := toDocResponse(doc)
-	resp.Workspace = docWorkspaceValue(workspaceName, rawID, visibility, false)
+	resp := toDocResponse(doc)
+	resp.Workspace = docWorkspaceValue(workspaceName, doc.ID, visibility, false)
+	resp.Id = visibleDocPath(workspaceName, doc.ID)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/docs.go` around lines 445 - 457, The handler
mutates the store-returned doc by assigning doc.ID = visibleDocPath(...);
instead compute the visible ID for the response without changing doc: keep rawID
:= doc.ID, call resp := toDocResponse(doc) and then set resp.ID =
visibleDocPath(workspaceName, rawID) and resp.Workspace =
docWorkspaceValue(workspaceName, rawID, visibility, false) before returning;
apply the same non-mutating pattern used here to the analogous code in
GetDocContentData (the block around line 787) so store-returned objects are
never modified in place.

132-161: Redundant knownDocWorkspaceNames call when resolving scoped visibility.

Line 134 already loads the workspace names (with required=a.workspaceStore != nil) and assigns them to visibility.known, then Line 150 reloads them with required=true for the scoped-access branch. When the first call succeeded, the second performs identical work against the store; when it did not (because workspaceStore is nil), the second call merely produces workspaceStoreUnavailable(), which can be returned directly.

♻️ Suggested refactor
 func (a *API) docWorkspaceVisibility(ctx context.Context) (docWorkspaceVisibility, error) {
 	visibility := docWorkspaceVisibility{all: true}
 	known, err := a.knownDocWorkspaceNames(ctx, a.workspaceStore != nil)
 	if err != nil {
 		return visibility, err
 	}
 	visibility.known = known
 	if a.authService == nil {
 		return visibility, nil
 	}
 	user, ok := auth.UserFromContext(ctx)
 	if !ok {
 		return visibility, errAuthRequired
 	}
 	access := auth.NormalizeWorkspaceAccess(user.WorkspaceAccess)
 	if access.All {
 		return visibility, nil
 	}
-	known, err = a.knownDocWorkspaceNames(ctx, true)
-	if err != nil {
-		return visibility, err
-	}
+	if known == nil {
+		return visibility, workspaceStoreUnavailable()
+	}
 	visibility.all = false
 	visibility.allowed = make(map[string]struct{}, len(access.Grants))
-	visibility.known = known
 	for _, grant := range access.Grants {
 		visibility.allowed[grant.Workspace] = struct{}{}
 	}
 	return visibility, nil
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/docs.go` around lines 132 - 161, The
function docWorkspaceVisibility currently calls knownDocWorkspaceNames twice;
replace the second call with reuse of the earlier result when possible: after
obtaining known (from knownDocWorkspaceNames(ctx, a.workspaceStore != nil)),
keep using that value when a.workspaceStore != nil, and only return
workspaceStoreUnavailable() if a.workspaceStore == nil and scoped access
requires the store; update the scoped-access branch in docWorkspaceVisibility to
avoid re-invoking knownDocWorkspaceNames unconditionally and instead use the
previously retrieved known or return the appropriate workspaceStoreUnavailable()
error.
internal/service/frontend/api/v1/workspace_access.go (1)

68-106: Preserve the underlying role parse error message.

fromAPIWorkspaceAccess returns the exact auth.ParseRole error (which typically names the offending role value), but at Line 75 that context is discarded in favor of a generic "Invalid workspace role". Surfacing the original error (or at least the invalid value) would make API responses significantly more debuggable for clients submitting malformed grants.

Also, when access.All == true, grants are still iterated and parsed here; an invalid role in an ignored grant list will reject the request even though NormalizeWorkspaceAccess would drop those grants entirely. Consider short-circuiting grant parsing when access.All is true.

♻️ Suggested change
-func fromAPIWorkspaceAccess(access *api.WorkspaceAccess) (*auth.WorkspaceAccess, error) {
+func fromAPIWorkspaceAccess(access *api.WorkspaceAccess) (*auth.WorkspaceAccess, error) {
 	if access == nil {
 		return auth.AllWorkspaceAccess(), nil
 	}
+	if access.All {
+		return &auth.WorkspaceAccess{All: true}, nil
+	}
 
 	grants := make([]auth.WorkspaceGrant, 0, len(access.Grants))
@@
 	parsed, err := fromAPIWorkspaceAccess(access)
 	if err != nil {
-		return nil, badWorkspaceAccessError("Invalid workspace role")
+		return nil, badWorkspaceAccessError(err.Error())
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/workspace_access.go` around lines 68 - 106,
Preserve and surface the original parse error from fromAPIWorkspaceAccess and
avoid parsing/grant validation when access.All is true: first check if access !=
nil && access.All (or use auth.NormalizeWorkspaceAccess(access) to detect All)
and short-circuit to a parsed auth.WorkspaceAccess with All=true so you don't
call fromAPIWorkspaceAccess on ignored grants; otherwise call
fromAPIWorkspaceAccess and if it returns an error return
badWorkspaceAccessError(err.Error()) (instead of the generic "Invalid workspace
role"); continue to use auth.ValidateWorkspaceAccess and
auth.CloneWorkspaceAccess as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@internal/service/frontend/api/v1/docs.go`:
- Around line 445-457: The handler mutates the store-returned doc by assigning
doc.ID = visibleDocPath(...); instead compute the visible ID for the response
without changing doc: keep rawID := doc.ID, call resp := toDocResponse(doc) and
then set resp.ID = visibleDocPath(workspaceName, rawID) and resp.Workspace =
docWorkspaceValue(workspaceName, rawID, visibility, false) before returning;
apply the same non-mutating pattern used here to the analogous code in
GetDocContentData (the block around line 787) so store-returned objects are
never modified in place.
- Around line 132-161: The function docWorkspaceVisibility currently calls
knownDocWorkspaceNames twice; replace the second call with reuse of the earlier
result when possible: after obtaining known (from knownDocWorkspaceNames(ctx,
a.workspaceStore != nil)), keep using that value when a.workspaceStore != nil,
and only return workspaceStoreUnavailable() if a.workspaceStore == nil and
scoped access requires the store; update the scoped-access branch in
docWorkspaceVisibility to avoid re-invoking knownDocWorkspaceNames
unconditionally and instead use the previously retrieved known or return the
appropriate workspaceStoreUnavailable() error.

In `@internal/service/frontend/api/v1/workspace_access.go`:
- Around line 68-106: Preserve and surface the original parse error from
fromAPIWorkspaceAccess and avoid parsing/grant validation when access.All is
true: first check if access != nil && access.All (or use
auth.NormalizeWorkspaceAccess(access) to detect All) and short-circuit to a
parsed auth.WorkspaceAccess with All=true so you don't call
fromAPIWorkspaceAccess on ignored grants; otherwise call fromAPIWorkspaceAccess
and if it returns an error return badWorkspaceAccessError(err.Error()) (instead
of the generic "Invalid workspace role"); continue to use
auth.ValidateWorkspaceAccess and auth.CloneWorkspaceAccess as before.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1fe9f8b1-479b-4c86-8fb2-47b6452eef7b

📥 Commits

Reviewing files that changed from the base of the PR and between 8776029 and 5772ebc.

📒 Files selected for processing (6)
  • internal/service/frontend/api/v1/docs.go
  • internal/service/frontend/api/v1/docs_test.go
  • internal/service/frontend/api/v1/workspace_access.go
  • ui/src/pages/docs/components/DocTabBar.tsx
  • ui/src/pages/docs/components/DocTabEditorPanel.tsx
  • ui/src/pages/docs/index.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • ui/src/pages/docs/components/DocTabEditorPanel.tsx
  • internal/service/frontend/api/v1/docs_test.go

@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
ui/src/menu.tsx (1)

1-1: ⚠️ Potential issue | 🟡 Minor

Add the GPL license header.

This TSX source file is missing the repository-required GPL v3 header.

Proposed fix
+// Copyright (C) 2026 Yota Hamada
+// SPDX-License-Identifier: GPL-3.0-or-later
+
 import {

As per coding guidelines, “**/*.{go,ts,tsx,js}: Apply GPL v3 license headers on source files, managed via make addlicense”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/menu.tsx` at line 1, The file ui/src/menu.tsx is missing the required
GPL v3 license header; add the standard repository GPLv3 header at the very top
of menu.tsx (before the first import) so it applies to the entire file, or
simply run the repository tool (`make addlicense`) to insert the correct header
automatically; ensure the header matches the project's canonical GPL v3 text and
appears above the first symbol/import in the file.
ui/src/pages/design/index.tsx (2)

387-392: ⚠️ Potential issue | 🟠 Major

Don’t allow new DAG creation from the aggregate all scope.

canCreateNew stays enabled for WorkspaceScope.all, and the POST sends only remoteNode, so creating from “All Workspaces” silently falls back to an implicit/default target. Gate creation to default or workspace, or force an explicit default workspace query before saving.

Suggested direction
   const canCreateNew =
     !selectedDagFile &&
+    workspaceSelection?.scope !== WorkspaceScope.all &&
     !!newDagName &&
     !!validation?.valid &&
     newNameValidation?.isValid !== false;

If product intent is “create in Default” while viewing All, make that explicit in the request/UI instead of relying on legacy fallback.

Also applies to: 492-503

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/design/index.tsx` around lines 387 - 392, canCreateNew currently
allows creation when viewing the aggregate scope (WorkspaceScope.all), causing
the POST to rely on legacy fallback (only remoteNode) and implicitly target a
default workspace; update the gating logic so new DAG creation is only permitted
when the scope is explicit (e.g., WorkspaceScope.default or
WorkspaceScope.workspace) or when an explicit default workspace id is selected.
Concretely, add a scope check to the canCreateNew expression (the variables
involved: canCreateNew, selectedDagFile, newDagName, validation,
newNameValidation) so it returns false if the current scope equals
WorkspaceScope.all, and apply the same guard where new-dag POSTs are constructed
(the code path that uses canSave/canSaveExisting and builds the request payload)
to ensure the request includes an explicit workspace id or is blocked when scope
is all.

162-177: ⚠️ Potential issue | 🟠 Major

Pass workspace scope to spec fetch and save operations, or disable editing from all scope.

When workspaceScope: all is queried at line 167, the /dags list includes DAGs from multiple workspaces. However, the DAGFile response omits workspace ownership metadata, so DAG selection at lines 171–177 identifies a DAG only by fileName. The spec fetch (lines 204–206) and save (lines 451–455) operations then send only { remoteNode }, losing the workspace context entirely.

If the same fileName exists in different workspaces, the spec operations can fetch or update the wrong workspace's DAG. Include the workspace scope in the spec query parameters, or prevent editing when workspaceScope: all is selected and no concrete workspace has been chosen.

Also applies to: 200–206, 451–455

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/pages/design/index.tsx` around lines 162 - 177, The DAG selection
logic uses dagListQuery and selectedDagInWorkspace but the spec fetch/save calls
only send { remoteNode } and drop workspace scope, so if workspaceScope is "all"
a fileName collision across workspaces can fetch/update the wrong DAG; fix by
including the workspace scope/identifier from workspaceQuery (or the concrete
workspace chosen) in the spec GET/POST params/body for the spec endpoints (the
spec fetch and spec save calls that currently only pass remoteNode and
selectedDagFile), or alternatively disable editing when workspaceScope === 'all'
and no concrete workspace is selected so edits cannot be made without explicit
workspace context.
♻️ Duplicate comments (1)
ui/src/menu.tsx (1)

9-22: ⚠️ Potential issue | 🟠 Major

Restore workspace-aware write gating for write-only navigation.

canWrite now ignores workspaceSelection and effective per-workspace role, so write-only links can still appear for aggregate/all scope or a read-only workspace. Reuse the existing workspace-aware permission hook instead of recomputing the global role check here.

Proposed fix
 import {
   useCanAccessSystemStatus,
   useCanViewEventLogs,
   useCanManageWebhooks,
   useCanViewAuditLogs,
-  useAuth,
+  useCanWrite,
   useIsAdmin,
 } from '@/contexts/AuthContext';
@@
-import { roleAtLeast } from '@/lib/workspaceAccess';
-import { UserRole } from '@/api/v1/schema';
@@
-  const { user } = useAuth();
   const hasRbac = useHasFeature('rbac');
   const hasAudit = useHasFeature('audit');
-  const appBarContext = React.useContext(AppBarContext);
-  const canWrite =
-    config.authMode !== 'builtin'
-      ? config.permissions.writeDags
-      : roleAtLeast(user?.role ?? null, UserRole.developer);
+  const canWrite = useCanWrite();

Also applies to: 400-407, 624-669

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/menu.tsx` around lines 9 - 22, The canWrite calculation currently
ignores workspaceSelection and recomputes a global role check with
roleAtLeast/useIsAdmin, causing write-only nav to show incorrectly; replace that
logic so canWrite is derived from the existing workspace-aware permission hook
used elsewhere (i.e., use the per-workspace permission hook instead of
roleAtLeast/useIsAdmin), ensuring write-only links respect the selected
workspace and effective per-workspace role (update occurrences around the
canWrite variable and any places that reference workspaceSelection).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ui/src/App.tsx`:
- Line 23: The /design route and related WriteElement usage currently rely on
the global hasRole/config check, which bypasses the workspace-scoped guard;
update the route and WriteElement calls to use the workspace-aware guard from
useAuth (e.g., call useAuth() and use its workspace permission method such as
canWriteWorkspace/hasWorkspaceAccess for the selected workspace) instead of the
global hasRole function so access is denied when “All workspaces” is selected or
the user lacks developer access to the active workspace; also update imports
(remove hasRole if unused) and apply the same change to the other WriteElement
usages noted around the 145-153 block.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 29e2f0b3-8a7d-4e2a-8ecd-51588e2cd0b7

📥 Commits

Reviewing files that changed from the base of the PR and between 5772ebc and 243c5e1.

📒 Files selected for processing (3)
  • ui/src/App.tsx
  • ui/src/menu.tsx
  • ui/src/pages/design/index.tsx

Comment thread ui/src/App.tsx
Comment thread ui/src/menu.tsx Outdated
Comment on lines +733 to +767
{canWrite && config.agentEnabled && (
  <NavGroup
    groupKey="agent"
    icon={<Bot size={18} />}
    label="Agent"
    isOpen={isOpen}
    onClick={onNavItemClick}
    customColor={customColor}
  >
    <NavItem
      to="/agent-settings"
      text="Settings"
      icon={<Bot size={18} />}
      isOpen={isOpen}
      onClick={onNavItemClick}
      customColor={customColor}
    />
    <NavItem
      to="/agent-memory"
      text="Memory"
      icon={<Brain size={18} />}
      isOpen={isOpen}
      onClick={onNavItemClick}
      customColor={customColor}
    />
    <NavItem
      to="/agent-souls"
      text="Souls"
      icon={<Ghost size={18} />}
      isOpen={isOpen}
      onClick={onNavItemClick}
      basePath="/agent-"
      customColor={customColor}
    />
  </NavGroup>
)}

⚠️ Potential issue | 🟡 Minor

Align Agent menu visibility with the route guard.

These links are shown for canWrite users, but ui/src/App.tsx protects /agent-settings, /agent-memory, and /agent-souls with AdminElement. This exposes dead-end navigation to non-admin developers.

Proposed fix
-              {canWrite && config.agentEnabled && (
+              {isAdmin && config.agentEnabled && (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/menu.tsx` around lines 733 - 767, The Agent menu is shown when
canWrite && config.agentEnabled but the routes (/agent-settings, /agent-memory,
/agent-souls) are protected by AdminElement in App.tsx, creating dead-end links
for non-admins; update the NavGroup rendering to include the same admin guard
used by AdminElement (e.g., add the isAdmin/hasAdminPermission check used in
App.tsx) so the condition becomes something like canWrite && config.agentEnabled
&& <the same admin flag/function>, or wrap the NavGroup in the same admin-check
wrapper component, ensuring NavGroup and its NavItem children for
/agent-settings, /agent-memory, /agent-souls are only rendered for users who
pass the AdminElement check.

@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
ui/src/App.tsx (1)

230-239: Consider scoping the SWR cache invalidation to workspace-aware endpoints.

Line 238 clears all SWR cache keys with globalMutate(() => true, undefined, { revalidate: false }), which blanks unrelated caches like settings endpoints (/settings/agent/souls, /settings/base-config, etc.) that don't include workspace parameters in their cache keys. Since these endpoints aren't workspace-scoped, they shouldn't be invalidated when workspace scope changes. Limit invalidation to endpoints that actually include workspace parameters (e.g., /dags, /dag-runs, /docs), or explicitly revalidate only the affected keys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@ui/src/App.tsx` around lines 230 - 239, The global SWR invalidation in
handleSelectWorkspaceScope currently clears every cache key via globalMutate(()
=> true...), and you should instead only invalidate or revalidate
workspace-aware keys; update handleSelectWorkspaceScope (after
sanitizeWorkspaceSelection/persistWorkspaceSelection) to call globalMutate with
a predicate that matches only workspace-scoped endpoints (e.g., keys containing
'/dags', '/dag-runs', '/docs' or whatever your workspace param pattern is) or
explicitly call mutate for each known workspace-aware key list; ensure you
reference and use the same key naming conventions used elsewhere so only
workspace-scoped caches are cleared/revalidated.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 54b33874-1de4-48af-a5d0-669c3ff9e4ea

📥 Commits

Reviewing files that changed from the base of the PR and between 5772ebc and cadd5d4.

📒 Files selected for processing (3)
  • ui/src/App.tsx
  • ui/src/menu.tsx
  • ui/src/pages/design/index.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • ui/src/pages/design/index.tsx
  • ui/src/menu.tsx

@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/service/frontend/api/v1/dagruns.go (1)

2788-2822: ⚠️ Potential issue | 🔴 Critical

Authorize the workspace that reschedule actually enqueues into.

RescheduleDAGRun authorizes the source DAG-run, but useCurrentDAGFile can load the current DAG definition before enqueueing a new run. If that current definition moved to another workspace, a user with access only to the old run’s workspace could create work in the new workspace. Re-check requireExecuteForWorkspace(ctx, runtimeWorkspaceName(dag, "")) after the target DAG is loaded and before enqueueDAGRun.

🛡️ Proposed target-workspace check
 	if err := a.ensureDAGRunIDUnique(ctx, dag, newDagRunID); err != nil {
 		return rescheduleDAGRunResult{}, err
 	}
+	if err := a.requireExecuteForWorkspace(ctx, runtimeWorkspaceName(dag, "")); err != nil {
+		return rescheduleDAGRunResult{}, err
+	}
 
 	logger.Info(ctx, "Rescheduling dag-run",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 2788 - 2822, After
loading the target DAG (whether via loadCurrentRescheduleDAG or loadInlineDAG)
but before calling enqueueDAGRun, re-check authorization by calling
requireExecuteForWorkspace(ctx, runtimeWorkspaceName(dag, "")) and return any
error; i.e., add a workspace execute-permission check immediately after
ensureDAGRunIDUnique (or directly after the DAG is prepared) to prevent
enqueueing into a workspace the requester cannot execute in. Reference symbols:
loadCurrentRescheduleDAG, loadInlineDAG, runtimeWorkspaceName,
requireExecuteForWorkspace, enqueueDAGRun, ensureDAGRunIDUnique.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/service/frontend/api/v1/dagruns.go`:
- Around line 1770-1778: The code maps any error from workspaceNameForDAGRun to
a 404 response, hiding other failures; update the error handling in the Delete
DAG run flow to only return DeleteDAGRun404JSONResponse when
workspaceNameForDAGRun returns the specific "not found" error (e.g., compare
against the package's sentinel ErrNotFound or use errors.Is), and for any other
error return/propagate the original error (or wrap and return it) so
storage/read failures are not incorrectly presented as 404; keep the subsequent
call to requireDAGWriteForWorkspace unchanged and use the same
workspaceNameForDAGRun and requireDAGWriteForWorkspace identifiers to locate the
change.
- Around line 1039-1041: The sibling read endpoints that fetch
logs/specs/artifacts/messages/sub-DAG data must enforce the same workspace/DAG
visibility checks: add calls to requireWorkspaceVisible(ctx,
statusWorkspaceName(dagStatus)) and/or requireDAGRunVisible(ctx, dagStatus) at
the top of handlers that currently bypass them (notably DownloadDAGRunStepLog,
GetStepLogData (SSE), the step message APIs, GetSubDAGRuns and the sub-DAG HTTP
read handlers), so that before any log/spec/artifact/message/sub-run is read you
validate visibility and return the error if check fails; ensure you use the same
dagStatus variable (or derive it where those handlers currently only have
dagRunID) and reuse the existing helper functions requireWorkspaceVisible and
requireDAGRunVisible for consistency.
- Around line 1121-1124: The workspace execute preflight (requireDAGRunExecute
called with exec.NewDAGRunRef) is currently resolving attempts itself and leaks
raw lookup errors and rejects the "latest" special-id before handlers like
retryDAGRun/rescheduleDAGRun call resolveAttemptForDAGRun; change the flow so
the handler first resolves the canonical attempt/id (use resolveAttemptForDAGRun
or the handler's existing resolution logic to turn "latest" into a concrete
attempt or return the handler-mapped 404), then call the workspace execute check
with that resolved attempt/ref, or alternatively implement a helper wrapper
around requireDAGRunExecute that accepts a resolved attempt or maps NotFound
errors to the handler's 404 semantics; update calls at exec.NewDAGRunRef sites
(e.g., in retryDAGRun, rescheduleDAGRun and other listed locations) to use the
post-resolution check or the new helper.

🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 80680c5c-9839-4ad0-9d1e-ca0f4f94dd44

📥 Commits

Reviewing files that changed from the base of the PR and between cadd5d4 and ef75a7c.

📒 Files selected for processing (2)
  • internal/service/frontend/api/v1/dagruns.go
  • internal/service/frontend/api/v1/dagruns_internal_test.go

Comment thread internal/service/frontend/api/v1/dagruns.go
Comment thread internal/service/frontend/api/v1/dagruns.go Outdated
Comment thread internal/service/frontend/api/v1/dagruns.go
@yottahmd
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.


@coderabbitai coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/service/frontend/api/v1/dagruns.go (1)

3373-3400: ⚠️ Potential issue | 🟡 Minor

Align sub-DAG visibility checks between SSE and REST handlers.

GetSubDAGRunDetailsData (SSE) authorizes only on the root DAG run (requireDAGRunVisible on root), whereas GetSubDAGRunDetails (REST, line 1948) authorizes on the resolved sub-DAG status via requireDAGRunStatusVisible(ctx, dagStatus). If sub-DAG runs can carry different workspace labels than their parent (e.g., cross-workspace sub-DAG dispatch), the SSE endpoint could allow access that REST would deny.

Add requireDAGRunStatusVisible(ctx, dagStatus) after the withDAGRunReadTimeout call in GetSubDAGRunDetailsData to match the REST handler's authorization pattern, ensuring both endpoints enforce consistent workspace visibility rules on the actual sub-DAG being accessed rather than only its parent.

As a secondary concern, the root visibility check currently occurs before the timeout wrapper, unlike the REST path where the sub-status check occurs after timeout completion.
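The placement the comment asks for can be sketched as follows. This is a minimal, self-contained illustration of "authorize on the resolved sub-DAG status after the timeout wrapper"; the types and helper signatures (`dagRunStatus`, `withDAGRunReadTimeout`, `requireDAGRunStatusVisible`) are stand-ins for the handler's real API, not the repository's actual code.

```go
package main

import "fmt"

// Stand-in for the resolved sub-DAG run status; the real status also
// carries run IDs, attempt data, etc.
type dagRunStatus struct{ Workspace string }

// The real wrapper enforces a read deadline; elided here.
func withDAGRunReadTimeout(fetch func() (dagRunStatus, error)) (dagRunStatus, error) {
	return fetch()
}

// Visibility check against the workspaces the caller may see.
func requireDAGRunStatusVisible(visible map[string]bool, st dagRunStatus) error {
	if !visible[st.Workspace] {
		return fmt.Errorf("workspace %q not visible", st.Workspace)
	}
	return nil
}

// getSubDAGRunDetailsData mirrors the REST ordering: resolve the sub-DAG
// status first, then authorize on the resolved status, not only on the
// root run's workspace.
func getSubDAGRunDetailsData(visible map[string]bool, fetch func() (dagRunStatus, error)) (dagRunStatus, error) {
	st, err := withDAGRunReadTimeout(fetch)
	if err != nil {
		return dagRunStatus{}, err
	}
	if err := requireDAGRunStatusVisible(visible, st); err != nil {
		return dagRunStatus{}, err
	}
	return st, nil
}

func main() {
	visible := map[string]bool{"default": true}
	fetch := func() (dagRunStatus, error) { return dagRunStatus{Workspace: "ops"}, nil }
	if _, err := getSubDAGRunDetailsData(visible, fetch); err != nil {
		fmt.Println("denied:", err)
	}
}
```

With this shape, a sub-DAG dispatched into a workspace the caller cannot see is rejected even when the root run itself is visible.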

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 3373 - 3400,
GetSubDAGRunDetailsData currently only calls requireDAGRunVisible(root) before
fetching the sub-DAG status; add a call to requireDAGRunStatusVisible(ctx,
dagStatus) immediately after the withDAGRunReadTimeout call (using the returned
dagStatus) to enforce visibility on the resolved sub-DAG like the REST handler
does, and optionally move the initial requireDAGRunVisible(root) so both
visibility checks occur after the timeout wrapper to match the REST pattern (use
function names GetSubDAGRunDetailsData, requireDAGRunVisible,
withDAGRunReadTimeout, requireDAGRunStatusVisible, and dagStatus to locate the
changes).
🧹 Nitpick comments (2)
internal/service/frontend/api/v1/dagruns.go (1)

2698-2709: Optional: avoid double FindAttempt lookup in DequeueDAGRun.

workspaceNameForDAGRun calls FindAttempt at line 425 of workspace_access.go, and queueNameForDAGRun calls FindAttempt again at line 17 of queue_ops.go. For the hot path of a user clicking "dequeue", consider resolving the attempt once and deriving both the workspace name and queue name from the same attempt object to reduce storage reads and minimize the TOCTOU window between the two lookups.
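A sketch of the single-resolution helper the comment suggests. The `attempt` shape and the helper name `workspaceAndQueueForDAGRun` are assumptions for illustration; the real code would resolve the attempt through the run store's `FindAttempt`.

```go
package main

import "fmt"

// Hypothetical attempt record; the real one lives in the persistence layer.
type attempt struct {
	Workspace string
	Queue     string
}

var attempts = map[string]attempt{
	"run-1": {Workspace: "default", Queue: "critical"},
}

// Stand-in for the store's FindAttempt.
func findAttempt(runID string) (attempt, error) {
	a, ok := attempts[runID]
	if !ok {
		return attempt{}, fmt.Errorf("attempt for %q not found", runID)
	}
	return a, nil
}

// workspaceAndQueueForDAGRun resolves the attempt once and derives both
// values from the same snapshot, removing the duplicate storage read and
// the TOCTOU gap between the two lookups.
func workspaceAndQueueForDAGRun(runID string) (workspace, queue string, err error) {
	a, err := findAttempt(runID)
	if err != nil {
		return "", "", err
	}
	return a.Workspace, a.Queue, nil
}

func main() {
	ws, q, err := workspaceAndQueueForDAGRun("run-1")
	fmt.Println(ws, q, err)
}
```

The dequeue handler would then call this once and pass the workspace name to the authz check and the queue name to the dequeue operation.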

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns.go` around lines 2698 - 2709,
DequeueDAGRun is performing two separate FindAttempt lookups via
workspaceNameForDAGRun and queueNameForDAGRun which causes extra storage reads
and a TOCTOU risk; change the logic in the DequeueDAGRun flow in dagruns.go to
resolve the attempt once (e.g., call the underlying FindAttempt used by
workspaceNameForDAGRun/queueNameForDAGRun), store the returned attempt object,
then derive workspaceName and queueName from that same attempt (or add a helper
like workspaceAndQueueForDAGRun that returns both from a single FindAttempt) and
replace the two function calls with this single-resolution path using the
attempt to avoid duplicate lookups.
internal/service/frontend/api/v1/dagruns_edit_retry.go (1)

76-83: Inconsistent ordering between Preview and EditRetry when validation errors are present.

EditRetryDAGRun returns validation errors at lines 134-136 before running the workspace authz checks, while PreviewEditRetryDAGRun runs the workspace authz checks first (lines 76-83) and only then reports validation errors as part of the successful 200 response. An authenticated-but-unauthorized caller hitting EditRetryDAGRun can learn validation details about their input, while the same caller on PreviewEditRetryDAGRun gets errInsufficientPermissions instead. Consider running the authz checks in the same position in both handlers.

Separately: in the validation-error path of buildEditRetryPlan (lines 272-280), plan.editedDAG is constructed as &core.DAG{Name: dagName} with no labels, so dagWorkspaceName(plan.editedDAG) returns "". When an empty workspace name is passed to requireExecuteForWorkspace, per the auth.EffectiveRole semantics, the check succeeds if the user's global role permits execution (not a bypass). This means PreviewEditRetryDAGRun validates against the user's global role rather than workspace-scoped permissions in the validation-error path. Confirm this matches intended scope semantics for all/default/named workspaces.
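One way to keep the two handlers from drifting is to factor the authz step into a shared helper that both call in the same position, before any validation details are surfaced. This is a minimal sketch under that assumption; `editRetryRequest`, `authorizeEditRetry`, and `previewEditRetry` are illustrative names, not the repository's functions.

```go
package main

import (
	"errors"
	"fmt"
)

var errInsufficientPermissions = errors.New("insufficient permissions")

// Hypothetical request shape; the real handlers derive the workspace from
// the DAG's labels and validity from the edit-retry plan.
type editRetryRequest struct {
	Workspace string
	SpecValid bool
}

// authorizeEditRetry is the single shared check both Preview and EditRetry
// call in the same position, so their ordering cannot diverge.
func authorizeEditRetry(canExecute func(workspace string) bool, req editRetryRequest) error {
	if !canExecute(req.Workspace) {
		return errInsufficientPermissions
	}
	return nil
}

// previewEditRetry authorizes BEFORE reporting validation details, so an
// unauthorized caller learns nothing about its input on either endpoint.
func previewEditRetry(canExecute func(string) bool, req editRetryRequest) (specValid bool, err error) {
	if err := authorizeEditRetry(canExecute, req); err != nil {
		return false, err
	}
	return req.SpecValid, nil
}

func main() {
	denyAll := func(string) bool { return false }
	if _, err := previewEditRetry(denyAll, editRetryRequest{Workspace: "ops", SpecValid: false}); err != nil {
		fmt.Println("denied before validation details:", err)
	}
}
```

EditRetryDAGRun would call the same helper at the same point, giving both endpoints identical behavior for the authenticated-but-unauthorized case.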

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/service/frontend/api/v1/dagruns_edit_retry.go` around lines 76 - 83,
The two handlers have inconsistent authz ordering and a missing workspace on
validation-failure: make PreviewEditRetryDAGRun and EditRetryDAGRun perform
requireDAGRunStatusExecute and requireExecuteForWorkspace in the same order (so
both either validate inputs before workspace authz or vice‑versa) by moving the
requireExecuteForWorkspace call in PreviewEditRetryDAGRun to match
EditRetryDAGRun (or vice versa) around requireDAGRunStatusExecute; also ensure
buildEditRetryPlan sets a deterministic workspace for validation-error paths so
dagWorkspaceName(plan.editedDAG) is not empty (e.g., populate plan.editedDAG
labels or derive the workspace from dagName) so requireExecuteForWorkspace
enforces workspace-scoped permissions rather than falling back to global-role
semantics; update tests to cover validation-error path authorization behavior
for all/default/named workspaces.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 629fc7a5-4536-441c-a2ad-0bcd4424efb8

📥 Commits

Reviewing files that changed from the base of the PR and between ef75a7c and f072546.

📒 Files selected for processing (3)
  • internal/service/frontend/api/v1/dagruns.go
  • internal/service/frontend/api/v1/dagruns_edit_retry.go
  • internal/service/frontend/api/v1/workspace_access.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/service/frontend/api/v1/workspace_access.go

@yottahmd yottahmd force-pushed the main branch 2 times, most recently from a9de1f4 to bb5e75d on April 21, 2026 at 10:27
Collaborator Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Apr 21, 2026

✅ Actions performed

Review triggered.
