💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 80b3545cdf
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```ruby
File.open(lock_path, File::RDWR) do |lock_file|
  next unless lock_file.flock(File::LOCK_EX | File::LOCK_NB)

  FileUtils.rm_f(lock_path)
```
Avoid unlinking lock file while other callers may have opened it
Removing the lock path in remove_stale_generated_packs_lock can break mutual exclusion when multiple callers arrive during stale-lock recovery: one caller can rm_f the stale path and then lock a newly created inode, while another caller that already opened the old inode (before unlink) later acquires a lock on that old inode. In that case both callers run generate_packs_if_stale concurrently, which defeats the serialization guarantee and can reintroduce concurrent clean/regenerate races after a crash leaves a stale lock file.
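The safer pattern this comment points toward never unlinks the lock file at all: acquire `flock`, then verify the path still names the inode you locked. A minimal sketch — the helper name and structure are assumptions for illustration, not the PR's implementation:

```ruby
# Unlink-free locking: if another process replaced the lock file between
# our open and flock, the path's inode no longer matches our fd's inode,
# so we retry on the fresh file. Two callers can never both hold "the
# lock" on different inodes.
def with_exclusive_lock(lock_path)
  loop do
    file = File.open(lock_path, File::RDWR | File::CREAT, 0o644)
    begin
      file.flock(File::LOCK_EX)
      # The lock is only valid if the path still points at our inode.
      if File.exist?(lock_path) && File.stat(lock_path).ino == file.stat.ino
        return yield
      end
      # Stale inode: someone recreated the file; reopen and retry.
    ensure
      file.flock(File::LOCK_UN)
      file.close
    end
  end
end
```

Because the file is never removed, there is no TTL-based cleanup to get wrong; a leftover lock file from a crashed process is harmless, since `flock` locks are released by the kernel when the holder dies.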
Greptile Summary

This PR serializes concurrent calls to `generate_packs_if_stale` with an exclusive lock file.

Confidence Score: 4/5 — safe to merge. The locking logic is correct and the race condition it fixes is real; all findings are P2, with no P0 or P1 issues found. `flock` usage, ensure-based unlock, LOCK_NB stale cleanup, and return-inside-block semantics are all correct. Two P2 concerns: a timing-sensitive test assertion that may be flaky on slow CI, and the lack of a pre-lock fast path causing all callers to serialize even when packs are current.

Important Files Changed: `react_on_rails/lib/react_on_rails/packs_generator.rb` — consider a pre-lock staleness guard for throughput.
Sequence Diagram

```mermaid
sequenceDiagram
    participant A as Caller A (first)
    participant B as Caller B (concurrent)
    participant LF as Lock File (tmp/)
    participant FS as Pack Files
    A->>LF: remove_stale_generated_packs_lock (LOCK_NB check)
    A->>LF: File.open + flock(LOCK_EX) → acquired
    A->>FS: add_generated_pack_to_server_bundle
    A->>FS: clean_non_generated_files_with_feedback
    A->>FS: generated_files_present_and_up_to_date? → false
    A->>FS: clean_generated_directories
    B-->>LF: flock(LOCK_EX) → blocks (A holds lock)
    A->>FS: generate_packs
    A->>LF: flock(LOCK_UN) → lock released
    B->>LF: flock(LOCK_EX) → acquired
    B->>FS: add_generated_pack_to_server_bundle
    B->>FS: clean_non_generated_files_with_feedback
    B->>FS: generated_files_present_and_up_to_date? → true (re-check)
    Note over B: skips regeneration
    B->>LF: flock(LOCK_UN) → lock released
```
Reviews (1): Last reviewed commit: "Add changelog for generated pack locking"
```ruby
sleep 0.1
expect { second_completed.pop(true) }.to raise_error(ThreadError)

release_generation << true
first_thread.join
second_thread.join
```
The sleep 0.1 before the non-blocking pop is not a reliable way to assert that the second thread has reached and is blocked on flock. On a slow CI runner (or under load), the second thread may not have even entered generate_packs_if_stale yet after 100 ms, making second_completed.pop(true) raise ThreadError for the wrong reason — the thread simply hasn't run yet, not because it's blocked on the lock. A more robust pattern uses a second queue/latch that the thread posts to immediately before blocking.
```diff
 with_generated_packs_lock(verbose: verbose) do
   add_generated_pack_to_server_bundle

   # Clean any non-generated files from directories
   clean_non_generated_files_with_feedback(verbose: verbose)

-  are_generated_files_present_and_up_to_date = Dir.exist?(generated_packs_directory_path) &&
-                                               File.exist?(generated_server_bundle_file_path) &&
-                                               !stale_or_missing_packs?
-
-  if are_generated_files_present_and_up_to_date
+  if generated_files_present_and_up_to_date?
     puts Rainbow("✅ Generated packs are up to date, no regeneration needed").green if verbose
     return
   end

   clean_generated_directories_with_feedback(verbose: verbose)
   generate_packs(verbose: verbose)
 end
```
All callers serialize through the lock even when packs are up to date
add_generated_pack_to_server_bundle and clean_non_generated_files_with_feedback run inside the exclusive lock on every request, before the staleness re-check. When packs are already current (the common case after warm-up), every concurrent request still serializes through these two I/O operations and the lock acquire/release cycle. A lightweight pre-lock generated_files_present_and_up_to_date? guard would let subsequent callers exit immediately without contending on the lock.
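A minimal self-contained sketch of that double-checked pattern — the class and method names here are illustrative stand-ins, not the PR's `PacksGenerator`; `up_to_date?` models the cheap staleness probe and a counter models the expensive regeneration guarded by `flock`:

```ruby
require "tmpdir"

class PackGenerator
  attr_reader :regenerations

  def initialize(lock_path)
    @lock_path = lock_path
    @up_to_date = false
    @regenerations = 0
  end

  def up_to_date?
    @up_to_date
  end

  def generate_if_stale
    return if up_to_date? # fast path: no lock traffic once packs are current

    File.open(@lock_path, File::RDWR | File::CREAT, 0o644) do |lock_file|
      lock_file.flock(File::LOCK_EX)
      begin
        next if up_to_date? # re-check under the lock: a peer may have won

        @regenerations += 1 # stand-in for the expensive clean + regenerate
        @up_to_date = true
      ensure
        lock_file.flock(File::LOCK_UN)
      end
    end
  end
end
```

The re-check under the lock is still required — the fast path alone is racy — but it turns the common warm-cache case into a single cheap predicate call with no lock acquisition.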
```ruby
File.open(lock_path, File::RDWR) do |lock_file|
  next unless lock_file.flock(File::LOCK_EX | File::LOCK_NB)

  FileUtils.rm_f(lock_path)
  puts Rainbow("🧹 Removed stale generated packs lock at #{lock_path}").yellow if verbose
ensure
  lock_file&.flock(File::LOCK_UN)
end
```
LOCK_UN called even when LOCK_NB was never acquired
When flock(LOCK_EX | LOCK_NB) returns false, the next skips the deletion but the ensure still calls flock(LOCK_UN) on a file description that never held the lock. While benign on Linux/macOS, guarding the unlock with an acquired flag improves clarity and correctness on non-POSIX platforms.
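The guard the comment suggests can be sketched as follows — `remove_stale_lock` is a hypothetical helper, not the PR's method:

```ruby
require "fileutils"

# LOCK_UN is issued only when the non-blocking LOCK_EX actually succeeded,
# so we never unlock a file description that never held the lock.
def remove_stale_lock(lock_path)
  return false unless File.exist?(lock_path)

  File.open(lock_path, File::RDWR) do |lock_file|
    acquired = lock_file.flock(File::LOCK_EX | File::LOCK_NB)
    begin
      next false unless acquired # another process holds the lock: not stale

      FileUtils.rm_f(lock_path)
      true
    ensure
      # Only release a lock we actually hold.
      lock_file.flock(File::LOCK_UN) if acquired
    end
  end
end
```

Note that `flock` returns `0` (truthy) on success and `false` when `LOCK_NB` cannot acquire immediately, so the flag captures the outcome directly.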
```ruby
remove_stale_generated_packs_lock(lock_path, verbose: verbose)

File.open(lock_path, File::RDWR | File::CREAT, 0o644) do |lock_file|
  puts Rainbow("🔒 Waiting for generated packs lock at #{lock_path}").yellow if verbose
```
The "Waiting for generated packs lock" message is printed before flock(LOCK_EX) is called, so it fires even when the lock is acquired immediately (no contention, no wait). Consider moving it after the flock call, or split into two messages — one before ("acquiring...") and one after ("acquired, proceeding...").
Suggested change:

```ruby
puts Rainbow("🔒 Waiting for generated packs lock at #{lock_path}").yellow if verbose
lock_file.flock(File::LOCK_EX)
puts Rainbow("🔒 Acquired generated packs lock at #{lock_path}").yellow if verbose
```
```ruby
File.open(lock_path, File::RDWR | File::CREAT, 0o644) do |lock_file|
  puts Rainbow("🔒 Waiting for generated packs lock at #{lock_path}").yellow if verbose
  lock_file.flock(File::LOCK_EX)
```
flock(File::LOCK_EX) blocks indefinitely. The 120-second TTL only removes a stale lock file when no holder is present (the LOCK_NB attempt fails while the holder is still alive). If pack generation hangs (process alive but stuck), every subsequent caller will hang forever with no escape.
Consider using a timeout approach, e.g. a background thread that sends a signal after N seconds, or polling with LOCK_NB in a loop:
```ruby
deadline = Time.now + GENERATED_PACKS_LOCK_TTL_SECONDS
loop do
  break if lock_file.flock(File::LOCK_EX | File::LOCK_NB)
  raise "Timed out waiting for generated packs lock" if Time.now > deadline
  sleep 0.5
end
```
```ruby
  yield
ensure
  lock_file&.flock(File::LOCK_UN)
```
lock_file is the block parameter of File.open and is never nil, so &. (safe navigation) is unnecessary here and in the other ensure below. Minor, but it reads as if lock_file could be nil, which could mislead future readers.
Suggested change:

```diff
-  lock_file&.flock(File::LOCK_UN)
+  lock_file.flock(File::LOCK_UN)
```
```ruby
if generated_files_present_and_up_to_date?
  puts Rainbow("✅ Generated packs are up to date, no regeneration needed").green if verbose
  return
end
```
The return inside a yield-based block works correctly here — Ruby propagates it as a method-return from generate_packs_if_stale, and the ensure in the File.open block still runs, properly releasing the lock. However, this is a subtle Ruby behavior that could surprise contributors who refactor this code (e.g., wrapping the block in a Proc instead of yield would turn it into a LocalJumpError).
A less surprising alternative is to use a boolean guard:

```ruby
unless generated_files_present_and_up_to_date?
  clean_generated_directories_with_feedback(verbose: verbose)
  generate_packs(verbose: verbose)
end
```

This avoids relying on return-from-block semantics entirely.
```ruby
sleep 0.1
expect { second_completed.pop(true) }.to raise_error(ThreadError)
```
This sleep 0.1 + non-blocking pop(true) pattern is a timing-based assertion: it assumes that 100 ms is long enough for thread 2 to reach the flock call and block, but not so long that CI load causes a false pass. This is a common source of flakiness in CI environments under load.
A more robust approach is to use a Queue (or Mutex + ConditionVariable) to know when thread 2 is actually blocked on the lock, rather than sleeping and hoping. For example, you could stub with_generated_packs_lock to signal a queue when it is entered by the second caller, before the block is executed.
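The latch pattern can be demonstrated in isolation — all names below are hypothetical; `release.pop` stands in for blocking on `flock(File::LOCK_EX)`:

```ruby
# The worker signals `entered` immediately before the point where it would
# block, so the test waits on a real event instead of sleeping and hoping.
entered   = Queue.new
completed = Queue.new
release   = Queue.new

worker = Thread.new do
  entered << true # deterministic signal: about to block on the lock
  release.pop     # stand-in for blocking on flock(File::LOCK_EX)
  completed << true
end

entered.pop # guaranteed: the worker has reached the blocking point
begin
  completed.pop(true) # non-blocking pop must fail while the worker is blocked
  raise "worker should still be blocked"
rescue ThreadError
  # expected: the queue is empty because the worker has not finished
end

release << true # stand-in for the first caller releasing the lock
worker.join
```

Unlike `sleep 0.1`, the `entered.pop` handshake cannot pass for the wrong reason: the assertion only runs once the worker has provably reached the line before the blocking call.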
Code Review

Overall: The approach is sound — using `flock` to serialize pack generation.

Issues:
- Medium — indefinite lock wait (no timeout)
- Medium —
- Minor — verbose message fires before the lock is acquired
- Minor — test relies on wall-clock timing
- Nit — unnecessary safe navigation in the `ensure` blocks

Looks good
Summary

- Serialize `generate_packs_if_stale` with a Rails tmp lock file so concurrent callers cannot clean and regenerate packs at the same time.

Fixes #1627
Test plan

Note: `bundle install` inside `react_on_rails/spec/dummy` failed in this fresh worktree while compiling Nokogiri (`nokogiri_gumbo.h` missing), so the targeted dummy spec was run through the root bundle with the dummy spec load path.

Note
Medium Risk
Touches autobundling pack generation and introduces cross-process locking; misbehavior could cause hangs or skipped regeneration if the lock/TTL logic is wrong. Scope is contained and has dedicated concurrency test coverage.
Overview

Prevents concurrent callers of `generate_packs_if_stale` from racing by serializing generated pack cleanup/regeneration with an exclusive Rails `tmp/react_on_rails_generate_packs.lock` file.

After waiting for the lock, the generator now re-checks whether packs are already up to date to avoid redundant regeneration, and it cleans up stale lock files after a short TTL. Specs were updated to cover the concurrency behavior and to properly restore `auto_load_bundle`, and the CHANGELOG documents the fix.

Reviewed by Cursor Bugbot for commit 136def3.