Caution: Review failed — the pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (15)
📝 Walkthrough

Refinements across composition-runtime, audio decoding, UI cleanup, proxy management, timeline/track selection, analysis abort handling, and provider validation: fixes adjust blob URL and src resolution, prevent audio-buffer regressions, tighten group-track exclusion, add abort-safe async flows, enforce provider uniqueness, and reorder proxy-recommendation state.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/lib/analysis/scene-detection.ts (1)
148-155: ⚠️ Potential issue | 🟠 Major — Abort results can still be cached through non-throw paths.

The new catch-path guard is good, but cache writes still happen when abort does not throw (e.g., `verifyWithVlm()` returns early on abort and line 171 caches it). That can persist partial/unverified cuts.

Suggested fix:

```diff
- const cacheAndReturn = (results: SceneCut[]): SceneCut[] => {
-   if (cacheKey) resultsCache.set(cacheKey, results);
+ const cacheAndReturn = (results: SceneCut[]): SceneCut[] => {
+   if (cacheKey && !signal?.aborted) resultsCache.set(cacheKey, results);
    return results;
  };
  if (!verificationModel || deduped.length === 0 || signal?.aborted) {
    return cacheAndReturn(deduped);
  }
@@
- return cacheAndReturn(verified);
+ return cacheAndReturn(verified);
```

Also applies to: 169-177
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/analysis/scene-detection.ts` around lines 148 - 155, The current early-return uses cacheAndReturn which writes to resultsCache even when the operation was aborted or verification didn't run; change the logic so aborted flows never write to the cache: update cacheAndReturn (or replace its call sites) to check signal?.aborted and skip resultsCache.set when aborted, and ensure the early-return path in the block that checks (!verificationModel || deduped.length === 0 || signal?.aborted) only returns deduped without caching when signal?.aborted is true; also apply the same non-caching behavior to the path after verifyWithVlm() (the code around deduped/verifyWithVlm and lines 169-177) so partial or unverified cuts are never persisted to resultsCache.
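The guarded cache write the prompt asks for can be sketched as follows — a minimal stand-in where `SceneCut`, the cache shape, and the helper's signature are simplified assumptions, not the real module's types:

```typescript
// Hypothetical, simplified model of the abort-safe cache write.
// SceneCut and the Map-based cache are stand-ins for the real types.
type SceneCut = { timeSec: number; verified: boolean };

const resultsCache = new Map<string, SceneCut[]>();

function cacheAndReturn(
  cacheKey: string | null,
  results: SceneCut[],
  signal?: AbortSignal,
): SceneCut[] {
  // Skip the write whenever the caller has aborted, so partial or
  // unverified cuts are never persisted for a later call to reuse.
  if (cacheKey && !signal?.aborted) {
    resultsCache.set(cacheKey, results);
  }
  return results;
}
```

The key point is that the aborted path still returns results to the current caller; only the persistence side effect is suppressed.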
🧹 Nitpick comments (1)
src/features/timeline/utils/source-edit-targeting.ts (1)
25-32: Add explicit `isGroup` filtering to `findFirstUnlockedTrackByKind` and `findNearestUnlockedTrackByKind` for consistency.

Both functions filter by `!track.locked` and `getTrackKind(track) === kind`, which does exclude group tracks (since `getTrackKind()` returns `null` for groups). However, other similar functions in the file (`findUnlockedTrackById` line 40, `canUseTrackForKind` line 44) explicitly check `!track.isGroup`. Adding this explicit filter maintains consistency with the defensive programming pattern used elsewhere and aligns with the guideline that group tracks are containers only and should never be selected for item placement.

Affected functions

Lines 25–32 (`findFirstUnlockedTrackByKind`):

```ts
function findFirstUnlockedTrackByKind(
  tracks: TimelineTrack[],
  kind: TrackKind,
): TimelineTrack | null {
  return [...tracks]
    .filter((track) => !track.locked && !track.isGroup && getTrackKind(track) === kind)
    .sort((a, b) => a.order - b.order)[0] ?? null;
}
```

Lines 140–154 (`findNearestUnlockedTrackByKind`):

```ts
function findNearestUnlockedTrackByKind(
  tracks: TimelineTrack[],
  targetTrack: TimelineTrack,
  kind: TrackKind,
  direction: 'above' | 'below',
): TimelineTrack | null {
  const candidates = tracks
    .filter((track) => !track.locked && !track.isGroup && getTrackKind(track) === kind)
    .filter((track) =>
      direction === 'above' ? track.order < targetTrack.order : track.order > targetTrack.order)
    .sort((a, b) =>
      direction === 'above' ? b.order - a.order : a.order - b.order);
  return candidates[0] ?? null;
}
```
Verify each finding against the current code and only fix it if needed. In `@src/features/timeline/utils/source-edit-targeting.ts` around lines 25 - 32, The two functions findFirstUnlockedTrackByKind and findNearestUnlockedTrackByKind should explicitly exclude group tracks for consistency and defensive safety: update their filter pipelines to include !track.isGroup in addition to !track.locked and getTrackKind(track) === kind so group container tracks are never considered as selectable targets for item placement.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/analysis/captioning/lfm-captioning-provider.ts`:
- Around line 102-107: The abort handler inside captionSingle is added
anonymously to the AbortSignal and never removed, causing listener accumulation
across repeated captionSingle calls; update captionSingle (and the similar
handler used in the second block around the other call site) to attach a named
function (e.g., onAbort) via signal.addEventListener('abort', onAbort) and
ensure cleanup removes it (signal.removeEventListener('abort', onAbort)) in the
existing cleanup function, or use { once: true } when adding the listener to
avoid stacking; make sure cleanup is invoked on both resolve and reject so the
abort listener is always removed.
---
Outside diff comments:
In `@src/lib/analysis/scene-detection.ts`:
- Around line 148-155: The current early-return uses cacheAndReturn which writes
to resultsCache even when the operation was aborted or verification didn't run;
change the logic so aborted flows never write to the cache: update
cacheAndReturn (or replace its call sites) to check signal?.aborted and skip
resultsCache.set when aborted, and ensure the early-return path in the block
that checks (!verificationModel || deduped.length === 0 || signal?.aborted) only
returns deduped without caching when signal?.aborted is true; also apply the
same non-caching behavior to the path after verifyWithVlm() (the code around
deduped/verifyWithVlm and lines 169-177) so partial or unverified cuts are never
persisted to resultsCache.
---
Nitpick comments:
In `@src/features/timeline/utils/source-edit-targeting.ts`:
- Around line 25-32: The two functions findFirstUnlockedTrackByKind and
findNearestUnlockedTrackByKind should explicitly exclude group tracks for
consistency and defensive safety: update their filter pipelines to include
!track.isGroup in addition to !track.locked and getTrackKind(track) === kind so
group container tracks are never considered as selectable targets for item
placement.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8948ff1e-6cae-40a5-a446-22143a09069a
📒 Files selected for processing (15)
- src/features/composition-runtime/components/composition-content.tsx
- src/features/composition-runtime/components/custom-decoder-buffered-audio.tsx
- src/features/composition-runtime/components/stable-video-sequence.tsx
- src/features/composition-runtime/contexts/nested-media-resolution-context.tsx
- src/features/editor/components/ai-panel.tsx
- src/features/media-library/services/proxy-service.ts
- src/features/preview/components/source-monitor.tsx
- src/features/settings/stores/settings-store.test.ts
- src/features/settings/stores/settings-store.ts
- src/features/timeline/components/timeline-item/item-context-menu.tsx
- src/features/timeline/utils/in-out-points.ts
- src/features/timeline/utils/source-edit-targeting.ts
- src/lib/analysis/captioning/lfm-captioning-provider.ts
- src/lib/analysis/scene-detection.ts
- src/shared/utils/provider-registry.ts
Greptile Summary

This PR is a multi-fix defensive hardening pass: race-condition fixes in audio buffering and sub-comp URL resolution, cleanup/abort improvements in the AI panel and captioning pipeline, group-track exclusion from patch destination logic, NaN guards in in/out-point sanitization, new duplicate-provider validation in `ProviderRegistry`, and reordered proxy-recommendation state.

The fixes are generally sound. One incomplete fix stands out: the scene-detection cache skips writes when abort occurs during VLM verification, but still caches partial/unverified results when the signal is already aborted before the verification step begins (line 153).

Confidence Score: 4/5

Safe to merge after addressing the scene-detection partial-result cache bug; remaining findings are style/cleanup. One P1 finding: the abort-before-verification path in scene-detection.ts still caches partial/unverified results, directly contradicting the stated intent of the fix. This causes stale data to be returned in the same session on retry. All other findings are P2. The core race-condition and filtering fixes are correct and well-implemented.

src/lib/analysis/scene-detection.ts — line 153 branch caches partial results when signal is already aborted before VLM verification begins.
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[detectScenes called] --> B{Cache hit?}
    B -- Yes --> C[Return cached results]
    B -- No --> D[Run detection pass\nhistogram / optical-flow]
    D --> E{signal.aborted?}
    E -- Yes\n[CURRENT CODE] --> F[cacheAndReturn deduped\n⚠️ caches partial results]
    E -- Yes\n[CORRECT FIX] --> G[return deduped\nno cache write]
    E -- No --> H{verificationModel set\n& deduped.length gt 0?}
    H -- No --> I[cacheAndReturn deduped]
    H -- Yes --> J[VLM verification pass]
    J --> K{Abort during verify?}
    K -- Yes --> L[return deduped\nno cache write ✅ PR fix]
    K -- No / success --> M[cacheAndReturn verified]
    K -- Error / non-abort --> N[cacheAndReturn deduped]
```
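Read alongside the chart, the corrected branch structure can be sketched in code. This is a hypothetical rendering under stated assumptions — `runDetection`, `verifyWithVlm`, the cut type, and the injected cache are stand-ins, not the real scene-detection API:

```typescript
// Hypothetical sketch of the corrected flow: abort never writes to the cache,
// whether the signal was aborted before or during VLM verification.
type SceneCut = { frame: number };

async function detectScenesFlow(
  runDetection: () => Promise<SceneCut[]>,
  verifyWithVlm: ((cuts: SceneCut[]) => Promise<SceneCut[]>) | null,
  cache: Map<string, SceneCut[]>,
  cacheKey: string | null,
  signal?: AbortSignal,
): Promise<SceneCut[]> {
  // Cache hit: return immediately.
  if (cacheKey && cache.has(cacheKey)) return cache.get(cacheKey)!;

  const deduped = await runDetection();

  // Corrected branch: an already-aborted signal returns without caching.
  if (signal?.aborted) return deduped;

  if (!verifyWithVlm || deduped.length === 0) {
    if (cacheKey) cache.set(cacheKey, deduped);
    return deduped;
  }

  const verified = await verifyWithVlm(deduped);

  // Abort during verification also skips the cache write (the PR's fix).
  if (signal?.aborted) return deduped;

  if (cacheKey) cache.set(cacheKey, verified);
  return verified;
}
```

Passing the cache and detection callbacks in, rather than closing over module state, keeps the sketch testable; the real module presumably uses its own `resultsCache`.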
```ts
function captionSingle(worker: Worker, id: number, imageBlob: Blob, signal?: AbortSignal): Promise<string> {
  return new Promise<string>((resolve, reject) => {
    const cleanup = () => {
      worker.removeEventListener('message', onMessage);
      worker.removeEventListener('error', onError);
    };

    const onMessage = (event: MessageEvent) => {
      if (event.data.type === 'caption' && event.data.id === id) {
        worker.removeEventListener('message', onMessage);
        cleanup();
        resolve(event.data.caption ?? '');
      }
    };

    const onError = (event: ErrorEvent) => {
      cleanup();
      reject(new Error(event.message || 'Caption worker error'));
    };

    if (signal?.aborted) {
      reject(signal.reason);
      return;
    }

    signal?.addEventListener('abort', () => {
      cleanup();
      reject(signal.reason);
    }, { once: true });

    worker.addEventListener('message', onMessage);
    worker.addEventListener('error', onError);
    worker.postMessage({ type: 'describe', id, image: imageBlob });
  });
}
```
Abort listener not removed by `cleanup()`

`cleanup()` removes the worker's `message` and `error` listeners, but the `signal.addEventListener('abort', …)` listener is left on the signal. If the worker fires an `error` event (`onError → cleanup() → reject()`), the abort listener still holds a live closure and could call `reject(signal.reason)` a second time (harmless per Promise spec, but suboptimal). The idiomatic fix is to include the abort handler in `cleanup`:
```ts
function captionSingle(worker: Worker, id: number, imageBlob: Blob, signal?: AbortSignal): Promise<string> {
  return new Promise<string>((resolve, reject) => {
    const onAbort = () => {
      cleanup();
      reject(signal!.reason);
    };
    const cleanup = () => {
      worker.removeEventListener('message', onMessage);
      worker.removeEventListener('error', onError);
      signal?.removeEventListener('abort', onAbort);
    };
    const onMessage = (event: MessageEvent) => {
      if (event.data.type === 'caption' && event.data.id === id) {
        cleanup();
        resolve(event.data.caption ?? '');
      }
    };
    const onError = (event: ErrorEvent) => {
      cleanup();
      reject(new Error(event.message || 'Caption worker error'));
    };
    if (signal?.aborted) {
      reject(signal.reason);
      return;
    }
    signal?.addEventListener('abort', onAbort, { once: true });
    worker.addEventListener('message', onMessage);
    worker.addEventListener('error', onError);
    worker.postMessage({ type: 'describe', id, image: imageBlob });
  });
}
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/analysis/captioning/lfm-captioning-provider.ts
Line: 102-135
Comment:
**Abort listener not removed by `cleanup()`**
`cleanup()` removes the worker's `message` and `error` listeners, but the `signal.addEventListener('abort', …)` listener is left on the signal. If the worker fires an `error` event (`onError → cleanup() → reject()`), the abort listener still holds a live closure and could call `reject(signal.reason)` a second time (harmless per Promise spec, but suboptimal). The idiomatic fix is to include the abort handler in `cleanup`:
```suggestion
function captionSingle(worker: Worker, id: number, imageBlob: Blob, signal?: AbortSignal): Promise<string> {
return new Promise<string>((resolve, reject) => {
const onAbort = () => {
cleanup();
reject(signal!.reason);
};
const cleanup = () => {
worker.removeEventListener('message', onMessage);
worker.removeEventListener('error', onError);
signal?.removeEventListener('abort', onAbort);
};
const onMessage = (event: MessageEvent) => {
if (event.data.type === 'caption' && event.data.id === id) {
cleanup();
resolve(event.data.caption ?? '');
}
};
const onError = (event: ErrorEvent) => {
cleanup();
reject(new Error(event.message || 'Caption worker error'));
};
if (signal?.aborted) {
reject(signal.reason);
return;
}
signal?.addEventListener('abort', onAbort, { once: true });
worker.addEventListener('message', onMessage);
worker.addEventListener('error', onError);
worker.postMessage({ type: 'describe', id, image: imageBlob });
});
}
```
How can I resolve this? If you propose a fix, please make it concise.

…ards

- Prevent partial audio decode from downgrading a full buffer in custom-decoder-buffered-audio
- Preserve existing audioSrc when blob URL is unavailable in sub-comp resolution
- Add src/audioSrc to areGroupPropsEqual so media URL changes trigger re-renders
- Abort in-flight MusicGen work and revoke blob URLs on ai-panel unmount
- Record stale media IDs for error/invalid proxy paths in loadExistingProxies
- Exclude group header tracks from patch destination and source edit targeting
- Prioritize newest proxy recommendation so new IDs survive the 200-item cap
- Stop contextmenu propagation in lazy item context menu trigger
- Guard against NaN fps/maxFrame in in-out-points sanitization
- Add error/abort handling to captionSingle, validate sampleIntervalSec, redact caption text from logs
- Skip scene detection cache writes on abort to avoid caching unverified results
- Validate duplicate provider IDs and default provider existence in ProviderRegistry
- Fix react-refresh lint warnings in nested-media-resolution-context and ai-panel
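One change in the list above — the NaN `fps`/`maxFrame` guard — can be illustrated with a minimal sketch. The function name, signature, and fallback behavior here are assumptions for illustration; the real helper in `in-out-points.ts` may differ:

```typescript
// Hypothetical sketch of NaN-safe in/out-point sanitization: non-finite
// fps or maxFrame means there are no usable bounds, so fall back to zeros
// instead of propagating NaN into the timeline.
function sanitizeInOutPoints(
  inPoint: number,
  outPoint: number,
  fps: number,
  maxFrame: number,
): { inPoint: number; outPoint: number } {
  if (!Number.isFinite(fps) || fps <= 0 || !Number.isFinite(maxFrame) || maxFrame < 0) {
    return { inPoint: 0, outPoint: 0 };
  }
  // Clamp each point into [0, maxFrame]; non-finite inputs collapse to 0.
  const clamp = (v: number) =>
    Number.isFinite(v) ? Math.min(Math.max(Math.round(v), 0), maxFrame) : 0;
  const a = clamp(inPoint);
  const b = clamp(outPoint);
  // Keep in <= out regardless of input order.
  return { inPoint: Math.min(a, b), outPoint: Math.max(a, b) };
}
```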
Summary
- Preserve existing `audioSrc` fallback when blob URL is unavailable in sub-comp resolution; add `src`/`audioSrc` to the `areGroupPropsEqual` comparator
- Record stale media IDs for error/invalid proxy paths in `loadExistingProxies`
- Exclude group header tracks (`isGroup`) from patch destinations in source monitor and source edit targeting
- Stop `contextmenu` propagation in lazy item context menu trigger to prevent parent menus from opening
- Guard against NaN `fps`/`maxFrame` in in-out-points sanitization
- Add error/abort handling to `captionSingle`, validate `sampleIntervalSec`, redact caption text from logs
- Validate duplicate provider IDs and default provider existence in the `ProviderRegistry` constructor

Test plan
- Existing failure in `timeline-store-facade.test.ts` is unrelated to this PR

Summary by CodeRabbit
Bug Fixes
Improvements
Tests
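The provider validation mentioned throughout (duplicate provider IDs and default-provider existence checked in the `ProviderRegistry` constructor) can be sketched as below. This is a hypothetical shape under stated assumptions — the real registry in `src/shared/utils/provider-registry.ts` is not shown in this review, so the `Provider` type and constructor signature are illustrative:

```typescript
// Hypothetical sketch of constructor-time validation: reject duplicate
// provider IDs and a default ID that matches no registered provider.
type Provider = { id: string };

class ProviderRegistry {
  private providers = new Map<string, Provider>();

  constructor(providers: Provider[], defaultProviderId: string) {
    for (const provider of providers) {
      if (this.providers.has(provider.id)) {
        throw new Error(`Duplicate provider ID: ${provider.id}`);
      }
      this.providers.set(provider.id, provider);
    }
    if (!this.providers.has(defaultProviderId)) {
      throw new Error(`Unknown default provider: ${defaultProviderId}`);
    }
  }

  get(id: string): Provider | undefined {
    return this.providers.get(id);
  }
}
```

Failing fast in the constructor turns a silent misconfiguration into an immediate, attributable error at startup.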