diff --git a/.github/workflows/ci-docs.yml b/.github/workflows/ci-docs.yml index e561859b..42a1d32a 100644 --- a/.github/workflows/ci-docs.yml +++ b/.github/workflows/ci-docs.yml @@ -247,6 +247,19 @@ jobs: path: code-fence-output.txt retention-days: 7 + # ============================================================================ + # DOC CLAIMS - Check doc comments for misleading claims + # ============================================================================ + doc-claims: + name: Doc Claims Check + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v6 + + - name: Check doc comment accuracy + run: ./scripts/ci/check-doc-claims.sh + link-check: name: Link Check runs-on: ubuntu-latest diff --git a/.gitignore b/.gitignore index eb95552c..1247737d 100644 --- a/.gitignore +++ b/.gitignore @@ -62,3 +62,8 @@ package.json node_modules/ *.pyc *.pyo + +pre-commit.txt +pre-commit.log +pre-push.txt +pre-push.log diff --git a/.llm/context.md b/.llm/context.md index 17f99587..6497099c 100644 --- a/.llm/context.md +++ b/.llm/context.md @@ -216,18 +216,23 @@ For protocol tests that poll in loops (`poll_remote_clients()` / protocol `poll( **Exclude:** internal refactoring, test improvements, doc-only changes, CI/tooling, lint fixes. +**Unreleased code rule:** Never add separate "Fixed" or "Changed" entries for code that has not yet been released. Fixes to unreleased features should be folded into the existing "Added" entry describing that feature. The changelog should describe the final shipped state, not intermediate development history. 
+ ## Mandatory Linting - **After Rust changes:** `cargo fmt && cargo clippy --all-targets --features tokio,json` (or `cargo c`) - **After workflow changes:** `actionlint` (no exceptions) - **After doc changes:** `cargo doc --no-deps` - **After markdown changes:** `npx markdownlint 'file.md' --config .markdownlint.json --fix` +- **After shell-script changes:** `bash scripts/ci/check-shell-portability.sh` - **After `.llm/` changes:** All `.md` files under `.llm/` must be **300 lines or fewer** (enforced by pre-commit hook `llm-line-limit`) - **Link validation:** `./scripts/docs/check-links.sh` - **Spell check:** `typos` - **Vale (advisory):** `vale docs/` -- checks prose quality, non-blocking in CI - **Full pre-commit:** `cargo fmt && cargo clippy --all-targets --features tokio,json && cargo nextest run --no-capture` +Shell regex portability rule: avoid PCRE-style escapes in `grep -E`/`sed -E` (`\b`, `\s`, `\w`, etc.). Use POSIX-safe classes like `[[:space:]]`, `[[:alnum:]_]`, and token boundaries `(^|[^[:alnum:]_])word([^[:alnum:]_]|$)`. + ## Skill Code Examples Code examples in `.llm/skills/` must follow zero-panic rules with these exceptions: diff --git a/.llm/skills/ci-cd-tooling/wiki-sync.md b/.llm/skills/ci-cd-tooling/wiki-sync.md index ef46452c..2d0fb4b3 100644 --- a/.llm/skills/ci-cd-tooling/wiki-sync.md +++ b/.llm/skills/ci-cd-tooling/wiki-sync.md @@ -15,12 +15,12 @@ Fix content in `docs/`, then re-sync. 
## Link Format Rules -| Context | Format | Example | -|---------|--------|---------| -| `docs/` files | Lowercase with `.md` | `[Guide](user-guide.md)` | -| `wiki/` files | PascalCase, no extension | `[Guide](User-Guide)` | -| `wiki/_Sidebar.md` | PascalCase, no extension | `[User Guide](User-Guide)` | -| External (non-docs) | Full GitHub URL | `[Code](https://github.com/.../blob/main/src/lib.rs)` | +| Context | Format | Example | +| ------------------- | ------------------------ | ----------------------------------------------------- | +| `docs/` files | Lowercase with `.md` | `[Guide](user-guide.md)` | +| `wiki/` files | PascalCase, no extension | `[Guide](User-Guide)` | +| `wiki/_Sidebar.md` | PascalCase, no extension | `[User Guide](User-Guide)` | +| External (non-docs) | Full GitHub URL | `[Code](https://github.com/.../blob/main/src/lib.rs)` | Use standard markdown `[Text](Page)` syntax in sidebar -- NOT wiki-link `[[Page|Text]]` syntax (has URL generation bugs). @@ -57,6 +57,14 @@ docs/*.md --> sync-wiki.py +-- Add SYNC comment header +-- Generate _Sidebar.md --> wiki/*.md + +Pre-commit enforcement now runs this sequence for docs/wiki changes: + +1. `sync-wiki` regenerates `wiki/*.md` from `docs/` +2. `wiki-consistency` validates links/sidebar/mapping integrity +3. `check-sync-headers` validates reciprocal `SYNC` comment headers + +This prevents committing stale or manually-edited wiki mirrors. ``` ## MkDocs Conversion Patterns @@ -124,14 +132,25 @@ python3 scripts/docs/validate-wiki-output.py # Check rendering issues 3. All wiki pages have sidebar entries 4. No special characters in wiki-link display text +`sync-wiki.py` also validates that every `WIKI_STRUCTURE` page appears in +the generated sidebar and fails fast if any mapped page is missing. + +`sync-wiki.py` now enforces deterministic writer normalization for generated +markdown: non-empty outputs end with exactly one LF newline (trailing +whitespace/newlines are normalized).
This prevents churn with +`end-of-file-fixer` and keeps repeated sync runs idempotent. + +Sidebar coverage validation runs before any wiki writes, so an invalid sidebar +template fails early without leaving partial regenerated output. + ### Common Errors -| Error | Fix | -|-------|-----| +| Error | Fix | +| -------------------------------- | ---------------------------- | | Link points to non-existent page | Add file or fix sidebar link | -| `docs/file.md` not mapped | Add to `WIKI_STRUCTURE` | -| Wiki page has no sidebar entry | Add to `generate_sidebar()` | -| Stale WIKI_STRUCTURE mapping | Remove entry | +| `docs/file.md` not mapped | Add to `WIKI_STRUCTURE` | +| Wiki page has no sidebar entry | Add to `generate_sidebar()` | +| Stale WIKI_STRUCTURE mapping | Remove entry | ## Markdown Link Validation @@ -139,11 +158,11 @@ python3 scripts/docs/validate-wiki-output.py # Check rendering issues Links resolve from the directory containing the markdown file: -| From | To Root | Example | -|------|---------|---------| -| `docs/` | `../` | `[README](../README.md)` | -| `.github/` | `../` | `[Context](../.llm/context.md)` | -| `.llm/skills//` | `../../../` | `[README](../../../README.md)` | +| From | To Root | Example | +| ------------------------- | ----------- | ------------------------------- | +| `docs/` | `../` | `[README](../README.md)` | +| `.github/` | `../` | `[Context](../.llm/context.md)` | +| `.llm/skills//` | `../../../` | `[README](../../../README.md)` | ### Heading Anchor Generation Rules @@ -153,11 +172,11 @@ Links resolve from the directory containing the markdown file: 4. ` / ` becomes `--` 5. 
`~` removed -| Heading | Anchor | -|---------|--------| -| `## Quick Start` | `#quick-start` | +| Heading | Anchor | +| ------------------------------------ | ------------------------------ | +| `## Quick Start` | `#quick-start` | | `## LAN / Local Network (~20ms RTT)` | `#lan--local-network-20ms-rtt` | -| `## Web / WASM Integration` | `#web--wasm-integration` | +| `## Web / WASM Integration` | `#web--wasm-integration` | ### Pipe Escaping in Tables diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 7e760b10..e45ea687 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -102,6 +102,20 @@ repos: pass_filenames: false files: '\.rs$' + - id: check-doc-claims + name: check doc comment accuracy + entry: bash scripts/ci/check-doc-claims.sh + language: system + pass_filenames: false + files: '\.rs$' + + - id: check-derive-bounds + name: check derive bounds + entry: bash scripts/ci/check-derive-bounds.sh + language: system + pass_filenames: false + files: '\.rs$' + # ══════════════════════════════════════════════════════════════════════ # Documentation checks # ══════════════════════════════════════════════════════════════════════ @@ -134,6 +148,13 @@ repos: pass_filenames: false files: '\.md$' + - id: sync-wiki + name: sync wiki mirrors + entry: python scripts/docs/sync-wiki.py + language: python + pass_filenames: false + files: '^(docs/.*\.md|wiki/.*\.md|scripts/docs/sync-wiki\.py)$' + - id: wiki-consistency name: wiki consistency check entry: python scripts/docs/check-wiki-consistency.py @@ -148,6 +169,13 @@ repos: files: '^(docs/.*\.md|wiki/.*\.md)$' pass_filenames: false + - id: changelog-unreleased-rule + name: changelog unreleased code rule + entry: python scripts/hooks/check-changelog-unreleased.py + language: python + pass_filenames: false + files: '^CHANGELOG\.md$' + - id: llm-line-limit name: check .llm file line limit (300) entry: python scripts/hooks/check-llm-line-limit.py @@ -189,6 +217,13 @@ repos: files: 
'^scripts/(hooks/|build/|docs/|verification/|ci/)?[a-z][^/]*\.py$' pass_filenames: true + - id: check-shell-portability + name: check shell portability + entry: bash scripts/ci/check-shell-portability.sh + language: system + files: '\.sh$' + pass_filenames: false + # ══════════════════════════════════════════════════════════════════════ # CI validation (skip gracefully if tools not installed) # ══════════════════════════════════════════════════════════════════════ diff --git a/.typos.toml b/.typos.toml index 1759fd1f..0a8ea83a 100644 --- a/.typos.toml +++ b/.typos.toml @@ -18,7 +18,8 @@ resimulates = "resimulates" resimulating = "resimulating" # Mathematical/algorithmic variable names -ba = "ba" # b minus a +ba = "ba" # b minus a +ser = "ser" # Mermaid state diagram alias in docs/replay.md and wiki/Replay.md # Technical terms clonable = "clonable" diff --git a/CHANGELOG.md b/CHANGELOG.md index f7000bea..4c4ad5da 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -14,6 +14,34 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] +## [0.8.0] + +### Changed + +- **Breaking:** `FortressEvent::ReplayDesync` — new variant added. Since `FortressEvent` is not `#[non_exhaustive]`, exhaustive matches must now handle this variant. +- **Breaking:** `InvalidFrameReason::ReplayExhausted` — new variant added. Since `InvalidFrameReason` is not `#[non_exhaustive]`, exhaustive matches must now handle this variant. +- **Breaking:** `Config::Input` now requires `Eq` in addition to `PartialEq`. Types used as `Config::Input` must derive or implement `Eq`. This ensures reflexive equality, which is a correctness requirement for deterministic rollback — non-reflexive types (e.g., floats with `NaN`) would cause phantom prediction misses and unnecessary rollbacks. All integer and struct-of-integer types already implement `Eq`; add `#[derive(Eq)]` to any custom input types that are missing it. 
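The reflexivity hazard behind the new `Eq` bound is easy to reproduce with plain `std` floats -- the sketch below is illustrative only and uses no Fortress Rollback types:

```rust
fn main() {
    // `f32` implements `PartialEq` but not `Eq`, because float equality is
    // not reflexive: NaN compares unequal to itself.
    let predicted = f32::NAN;
    let confirmed = predicted; // bitwise-identical copy

    // An input comparison like `predicted == confirmed` reports a mismatch
    // here, i.e. a phantom prediction miss and a needless rollback.
    assert!(predicted != confirmed);

    // Integer-based inputs are reflexive, so they can (and now must) derive `Eq`.
    #[derive(Copy, Clone, PartialEq, Eq, Default, Debug)]
    struct MyInput {
        buttons: u8,
        stick_x: i8,
    }
    let a = MyInput { buttons: 1, stick_x: -3 };
    assert_eq!(a, a); // always holds for `Eq` types
}
```

This is why the bound rejects float fields at compile time instead of letting non-determinism surface as spurious rollbacks at runtime.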
+ +### Added + +- `ReplaySession::new_with_validation()` constructor that enables checksum validation mode, emitting `SaveGameState` requests, comparing checksums against the replay recording, and flushing final-frame validation when `events()` is drained after completion +- `ReplaySession::is_validating()` accessor to check if checksum validation mode is enabled +- `SessionBuilder::start_replay_session_with_validation()` builder method for creating a validation-enabled replay session +- `SessionTelemetry` trait for observing session performance events (rollbacks, prediction misses, frame advances, network stats) +- `CollectingTelemetry` test helper that accumulates `TelemetryEvent` values for assertions +- `TelemetryEvent` enum with `Rollback`, `PredictionMiss`, `NetworkStatsUpdate`, and `FrameAdvance` variants +- `SessionBuilder::with_telemetry()` to attach a telemetry observer to P2P sessions +- `Replay` type for recorded match data with `to_bytes()` / `from_bytes()` serialization using deterministic bincode codec +- `ReplayMetadata` type containing library version, player count, total frame count, and skipped frame count +- `ReplaySession` session type implementing `Session` for deterministic replay playback +- `SessionBuilder::with_recording(bool)` to enable input recording (including game state checksums) during P2P sessions +- `SessionBuilder::start_replay_session(replay)` to create a replay playback session +- `P2PSession::is_recording()` to check if replay recording is active +- `P2PSession::into_replay()` to extract the recorded `Replay` after a session ends (consumes the session) +- `P2PSession::take_replay()` to extract the recorded `Replay` without consuming the session (recording stops after extraction) +- `Replay::validate()` to verify internal consistency of replay data +- Re-exports `Replay`, `ReplayMetadata`, and `ReplaySession` in prelude + ## [0.7.0] ### Added diff --git a/Cargo.lock b/Cargo.lock index 1d87e7ff..bac3ab86 100644 --- a/Cargo.lock 
+++ b/Cargo.lock @@ -105,15 +105,6 @@ version = "1.0.102" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "7f202df86484c868dbad7eaa557ef785d5c66295e41b460ef922eca0723b842c" -[[package]] -name = "arbitrary" -version = "1.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "c3d036a3c4ab069c7b410a2ce876bd74808d2d0888a82667669f8e783a898bf1" -dependencies = [ - "derive_arbitrary", -] - [[package]] name = "atomic-waker" version = "1.1.2" @@ -502,17 +493,6 @@ dependencies = [ "powerfmt", ] -[[package]] -name = "derive_arbitrary" -version = "1.4.2" -source = "registry+https://github.com/rust-lang/crates.io-index" -checksum = "1e567bd82dcff979e4b03460c307b3cdc9e96fde3d73bed1496d2bc75d9dd62a" -dependencies = [ - "proc-macro2", - "quote", - "syn", -] - [[package]] name = "digest" version = "0.10.7" @@ -624,7 +604,6 @@ dependencies = [ name = "fortress-rollback" version = "0.7.0" dependencies = [ - "arbitrary", "bincode", "clap", "criterion", diff --git a/Cargo.toml b/Cargo.toml index c24526ec..6d3559ce 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -320,6 +320,12 @@ renamed_function_params = "warn" # Consistent parameter names across try_err = "warn" # Use ? instead of Err(e)? undocumented_unsafe_blocks = "warn" # Require SAFETY comments on unsafe +# cargo-shear: macroquad and z3 are optional deps behind feature flags +# (graphical-examples and z3-verification). They cannot be dev-dependencies +# because dev-deps cannot be feature-gated in Cargo. 
+[package.metadata.cargo-shear] +ignored = ["macroquad", "z3"] + [features] sync-send = [] # Enable runtime invariant checking in release builds (for debugging production issues) @@ -379,8 +385,6 @@ proptest = "1.11" serde_json = "1.0" # Benchmarking criterion = { version = "0.8", features = ["html_reports"] } -# Additional testing utilities -arbitrary = { version = "1.3", features = ["derive"] } # Macro utilities for data-driven tests pastey = "0.2" diff --git a/benches/p2p_session.rs b/benches/p2p_session.rs index adce51a5..2a236fe6 100644 --- a/benches/p2p_session.rs +++ b/benches/p2p_session.rs @@ -25,7 +25,7 @@ use std::hint::black_box; use std::net::SocketAddr; /// Simple test input type for benchmarking -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct BenchInput { buttons: u8, stick_x: i8, diff --git a/docs/architecture.md b/docs/architecture.md index 03a002d4..abe72a9a 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -1099,7 +1099,7 @@ Compile-time parameterization bundles all type requirements: ```rust // Default (without `sync-send` feature): pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; type State; type Address: Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Debug; } diff --git a/docs/fortress-vs-ggrs.md b/docs/fortress-vs-ggrs.md index 52a83404..2833d2f3 100644 --- a/docs/fortress-vs-ggrs.md +++ b/docs/fortress-vs-ggrs.md @@ -329,6 +329,7 @@ pub enum InvalidFrameReason { NotConfirmed { confirmed_frame: Frame }, NullOrNegative, MissingState, + ReplayExhausted { last_frame: Frame }, Custom(&'static str), } ``` diff --git a/docs/index.md b/docs/index.md index d52d0417..98be467b 100644 --- a/docs/index.md +++ b/docs/index.md @@ -83,7 +83,7 @@ Get up and running with Fortress 
Rollback in minutes. use std::net::SocketAddr; // Define your input and state types - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct MyInput { buttons: u8 } #[derive(Clone, Serialize, Deserialize)] diff --git a/docs/migration.md b/docs/migration.md index 9c76dcbb..ed56743b 100644 --- a/docs/migration.md +++ b/docs/migration.md @@ -151,6 +151,35 @@ struct MyAddress { } ``` +## Input Trait Bounds (Breaking Change) + +`Config::Input` now requires `Eq` in addition to `PartialEq`. This ensures reflexive +equality for deterministic rollback; non-reflexive types (e.g., `f32`, `f64`) would cause +phantom prediction misses because `NaN != NaN` can make the engine treat identical inputs +as different, triggering unnecessary rollbacks. + +Most custom input types only need an extra derive: + +```rust +// Before +#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +struct MyInput { + buttons: u8, + stick_x: i8, +} + +// After +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] +struct MyInput { + buttons: u8, + stick_x: i8, +} +``` + +> **Note:** All primitive integer types (`u8`, `i8`, `u16`, `i16`, `u32`, `i32`, `u64`, +> `i64`, `u128`, `i128`, `usize`, `isize`) and `bool` already implement `Eq`, so input +> structs composed entirely of these types only need the added derive. + ## Features The `sync-send` feature flag remains compatible. Fortress Rollback adds several new features: diff --git a/docs/replay.md b/docs/replay.md new file mode 100644 index 00000000..df083614 --- /dev/null +++ b/docs/replay.md @@ -0,0 +1,301 @@ + + +

+ Fortress Rollback +

+ +# Match Replay System + +Record P2P matches and play them back deterministically. Use replays for match review, determinism verification, streaming, and testing. + +## Table of Contents + +1. [How It Works](#how-it-works) +2. [Quick Start -- Recording](#quick-start----recording) +3. [Quick Start -- Playback](#quick-start----playback) +4. [Validation Mode](#validation-mode) + - [How Validation Works](#how-validation-works) + - [Validation Playback Loop](#validation-playback-loop) +5. [API Reference](#api-reference) +6. [Use Cases](#use-cases) +7. [Common Patterns](#common-patterns) + - [into_replay vs take_replay](#into_replay-vs-take_replay) + - [Replay Browser with Metadata](#replay-browser-with-metadata) + +--- + +## How It Works + +```mermaid +stateDiagram-v2 + direction LR + + state "P2P Session" as p2p { + [*] --> Recording : with_recording(true) + Recording --> Recording : confirmed inputs + checksums per frame + Recording --> ReplayReady : session ends + } + + state "Serialization" as ser { + ReplayReady --> Bytes : to_bytes() + Bytes --> File : save + } + + state "Playback" as play { + File --> LoadedBytes : load + LoadedBytes --> Replay : from_bytes() + Replay --> ReplaySession : new(replay) + ReplaySession --> FrameByFrame : advance_frame() + FrameByFrame --> Complete : is_complete() + } +``` + +--- + +## Quick Start -- Recording + +```rust +use fortress_rollback::{SessionBuilder, Config, Session, PlayerType, PlayerHandle}; +use std::net::SocketAddr; + +// 1. Enable recording on the session builder +// MyConfig: your Config impl (see user-guide.md) +let mut session = SessionBuilder::<MyConfig>::new() + .with_num_players(2)? + .add_player(PlayerType::Local, PlayerHandle::new(0))? + .add_player(PlayerType::Remote("10.0.0.2:7000".parse()?), PlayerHandle::new(1))? + .with_recording(true) // <-- enable replay recording + .start_p2p_session(socket)?; + +// 2. Run your game loop (see user-guide.md for the full loop) +// ... + +// 3. Extract the replay when the session ends +let replay = session.into_replay()?; + +// 4.
Serialize and save to disk +let bytes = replay.to_bytes()?; +std::fs::write("match.replay", &bytes)?; +``` + +!!! tip "into_replay vs take_replay" + Use `into_replay()` when the session is finished -- it consumes the session. + Use `take_replay()` to extract the replay mid-session without consuming it (e.g., for auto-save). + +--- + +## Quick Start -- Playback + +```rust +use fortress_rollback::replay::Replay; +use fortress_rollback::sessions::replay_session::ReplaySession; +use fortress_rollback::{Config, Session, FortressRequest}; + +// 1. Load and deserialize (MyConfig: your Config impl) +let bytes = std::fs::read("match.replay")?; +let replay = Replay::<MyConfig>::from_bytes(&bytes)?; + +// 2. Create a ReplaySession +let mut session = ReplaySession::<MyConfig>::new(replay)?; + +// 3. Play back frame by frame +while !session.is_complete() { + let requests = session.advance_frame()?; + + for request in requests { + match request { + FortressRequest::AdvanceFrame { inputs } => { + // 4. Apply each player's input to your game state + for (input, _status) in &inputs { + game_state.apply_input(*input); + } + game_state.advance(); + } + _ => {} // No other requests in standard playback + } + } +} +``` + +--- + +## Validation Mode + +`ReplaySession::new_with_validation(replay)` enables **checksum comparison** during playback. This detects non-determinism by comparing freshly computed checksums against the ones recorded during the original match. + +### How Validation Works + +```mermaid +sequenceDiagram + participant App as Application + participant RS as ReplaySession + + loop Each Frame + App->>RS: advance_frame() + RS-->>App: SaveGameState { cell, frame } + App->>App: Compute game state checksum + App->>App: Store checksum in cell + RS-->>App: AdvanceFrame { inputs } + App->>App: Apply inputs, advance state + Note over RS: Next frame: compare<br/>stored checksum vs<br/>recorded checksum + end + + alt Checksum mismatch + RS-->>App: FortressEvent::ReplayDesync + end +``` + +!!! determinism "Desync Detection" + When a mismatch is found, the session emits `FortressEvent::ReplayDesync { frame, expected_checksum, actual_checksum }`. This pinpoints the exact frame where non-determinism occurred. + +### Validation Playback Loop + +```rust +use fortress_rollback::replay::Replay; +use fortress_rollback::sessions::replay_session::ReplaySession; +use fortress_rollback::{Config, Session, FortressRequest, FortressEvent}; + +let replay = Replay::<MyConfig>::from_bytes(&bytes)?; +let mut session = ReplaySession::<MyConfig>::new_with_validation(replay)?; + +while !session.is_complete() { + let requests = session.advance_frame()?; + + for request in requests { + match request { + FortressRequest::SaveGameState { cell, frame } => { + // Compute and store your game state checksum + let checksum = game_state.compute_checksum(); + cell.save(frame, Some(game_state.clone()), Some(checksum)); + } + FortressRequest::AdvanceFrame { inputs } => { + for (input, _status) in &inputs { + game_state.apply_input(*input); + } + game_state.advance(); + } + _ => {} + } + } + + // Check for desync events + for event in session.events() { + match event { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + eprintln!( + "DESYNC at frame {}: expected {:#x}, got {:#x}", + frame.as_i32(), + expected_checksum, + actual_checksum + ); + } + _ => {} + } + } +} +``` + +--- + +## API Reference + +### Replay\<C\> + +| Item | Description | +|------|-------------| +| `num_players` | Number of players in the recorded match | +| `frames` | Recorded inputs per frame, one entry per player | +| `checksums` | Per-frame checksums for validation | +| `metadata` | `ReplayMetadata` -- version, player count, frame count | +| `to_bytes()` | Serialize to bytes (deterministic bincode codec) | +| `from_bytes(&[u8])` | Deserialize from bytes | +| `total_frames()` | Number
of recorded frames | +| `validate()` | Check internal consistency (frames/checksums/metadata) | + +### ReplaySession\<C\> + +| Method | Description | +|--------|-------------| +| `new(replay)` | Standard playback (validates replay on construction) | +| `new_with_validation(replay)` | Playback with per-frame checksum validation | +| `advance_frame()` | Advance one frame, returns requests with recorded inputs. Returns an error if the replay is exhausted (all frames played). | +| `is_complete()` | `true` when all frames have been played | +| `current_frame()` | Current frame number (`Frame::NULL` before first advance) | +| `total_frames()` | Total frames in the replay | +| `is_validating()` | `true` if checksum validation mode is enabled | +| `replay()` | Reference to the underlying `Replay` | +| `events()` | Drain pending events (e.g., `ReplayDesync`) | + +### SessionBuilder Methods + +| Method | Description | +|--------|-------------| +| `with_recording(bool)` | Enable input recording on a P2P session | +| `start_replay_session(replay)` | Create a standard playback session | +| `start_replay_session_with_validation(replay)` | Create a validating playback session | + +### P2PSession Methods + +| Method | Description | +|--------|-------------| +| `is_recording()` | Check if recording is enabled | +| `into_replay()` | Consume the session and extract the `Replay` | +| `take_replay()` | Extract the `Replay` without consuming the session | + +--- + +## Use Cases + +- **Match review** -- Let players rewatch completed matches frame by frame +- **Determinism testing** -- Use validation mode to catch non-determinism bugs across builds or platforms +- **Streaming / broadcasting** -- Send compact replay data to spectators for synchronized playback +- **Bug reproduction** -- Attach replay files to bug reports for exact reproduction +- **Anti-cheat verification** -- Re-simulate matches server-side to verify client-reported results + +--- + +## Common Patterns + +### into_replay vs take_replay + +| Method | Consumes session? | Use when... | +|--------|:-:|---| +| `into_replay()` | Yes | Match is over, session no longer needed | +| `take_replay()` | No | Mid-match auto-save, or extracting replay while session continues | + +!!! note + `take_replay()` consumes the recorded data. A second call returns an error because the recording has already been taken. + +### Replay Browser with Metadata + +```rust +use fortress_rollback::replay::{Replay, ReplayMetadata}; + +// Store replays with searchable metadata +struct ReplayEntry { + path: std::path::PathBuf, + metadata: ReplayMetadata, +} + +// Load replays from a directory and extract metadata for display +// (MyConfig: your Config impl) +fn list_replays(dir: &std::path::Path) -> Vec<ReplayEntry> { + std::fs::read_dir(dir) + .into_iter() + .flatten() + .filter_map(|entry| { + let path = entry.ok()?.path(); + let bytes = std::fs::read(&path).ok()?; + let replay = Replay::<MyConfig>::from_bytes(&bytes).ok()?; + Some(ReplayEntry { path, metadata: replay.metadata }) + }) + .collect() +} +``` + +--- + +!!! warning "Breaking Change" + `FortressEvent::ReplayDesync` is a new enum variant. Since `FortressEvent` is **not** `#[non_exhaustive]`, exhaustive `match` statements must be updated to handle this variant. Add a `FortressEvent::ReplayDesync { .. } => { .. }` arm to all existing matches. diff --git a/docs/specs/determinism-model.md b/docs/specs/determinism-model.md index 8d8d22af..25534031 100644 --- a/docs/specs/determinism-model.md +++ b/docs/specs/determinism-model.md @@ -140,7 +140,7 @@ fn spawn_enemy(state: &mut GameState, rng: &mut SeededRng) { ```rust // Must implement these traits pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; // ... } diff --git a/docs/telemetry.md b/docs/telemetry.md new file mode 100644 index 00000000..097c9f1b --- /dev/null +++ b/docs/telemetry.md @@ -0,0 +1,336 @@ + +

+ Fortress Rollback +

+ +# Session Telemetry + +Monitor P2P session performance with structured telemetry events. Track rollbacks, prediction misses, frame advances, and network stats in real time. + +## Table of Contents + +1. [Architecture](#architecture) +2. [Quick Start](#quick-start) +3. [`SessionTelemetry` Trait](#sessiontelemetry-trait) +4. [`TelemetryEvent` Enum](#telemetryevent-enum) +5. [`CollectingTelemetry` (Built-in)](#collectingtelemetry-built-in) +6. [Custom Telemetry Observer](#custom-telemetry-observer) +7. [Spec Violation Observability](#spec-violation-observability) + - [`ViolationObserver` Trait](#violationobserver-trait) + - [`SpecViolation` Struct](#specviolation-struct) + - [`CollectingObserver`](#collectingobserver) + - [`TracingObserver`](#tracingobserver) + - [`ViolationKind` Variants](#violationkind-variants) + - [`ViolationSeverity` Levels](#violationseverity-levels) +8. [Event Flow](#event-flow) +9. [Use Cases](#use-cases) +10. [Integration Tips](#integration-tips) +11. [See Also](#see-also) + +--- + +## Architecture + +```mermaid +graph TD + P2P["P2PSession"] -->|calls| ST["SessionTelemetry trait"] + + ST --> R["on_rollback"] + ST --> PM["on_prediction_miss"] + ST --> NS["on_network_stats"] + ST --> FA["on_frame_advance"] + + subgraph Built-in Implementations + CT["CollectingTelemetry<br/>testing"] + CU["Custom impl<br/>your own"] + end + + ST -.->|impl| CT + ST -.->|impl| CU +``` + +--- + +## Quick Start + +```rust +use fortress_rollback::telemetry::{CollectingTelemetry, SessionTelemetry}; +use fortress_rollback::SessionBuilder; +use std::sync::Arc; + +// 1. Create a telemetry observer +let telemetry = Arc::new(CollectingTelemetry::new()); + +// 2. Pass to session builder +// MyConfig: your Config impl (see user-guide.md) +let builder = SessionBuilder::<MyConfig>::new() + .with_telemetry(telemetry.clone()); + +// 3. After running the session, inspect events +let rollbacks = telemetry.rollbacks(); +let misses = telemetry.prediction_misses(); +println!("Rollbacks: {}, Prediction misses: {}", rollbacks.len(), misses.len()); +``` + +--- + +## `SessionTelemetry` Trait + +```rust +// With `sync-send` feature enabled: +pub trait SessionTelemetry: Send + Sync { + fn on_rollback(&self, depth: usize, frame: Frame) { /* no-op */ } + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { /* no-op */ } + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { /* no-op */ } + fn on_frame_advance(&self, frame: Frame) { /* no-op */ } +} + +// Without `sync-send` feature: +pub trait SessionTelemetry { + // same methods, no Send + Sync bounds +} +``` + +!!! note + All methods have default no-op implementations. Override only what you need. The `Send + Sync` supertraits are only required when the `sync-send` feature is enabled. + +| Method | Parameters | When Called | +|--------|-----------|-------------| +| `on_rollback` | `depth: usize`, `frame: Frame` | State was rolled back | +| `on_prediction_miss` | `player: PlayerHandle`, `frame: Frame` | Predicted input was wrong | +| `on_network_stats` | `player: PlayerHandle`, `stats: &NetworkStats` | Network stats polled | +| `on_frame_advance` | `frame: Frame` | Frame advanced | + +--- + +## `TelemetryEvent` Enum + +Each variant captures the arguments from its corresponding trait method.
+ +| Variant | Fields | When | +|---------|--------|------| +| `Rollback` | `depth: usize`, `frame: Frame` | State was rolled back | +| `PredictionMiss` | `player: PlayerHandle`, `frame: Frame` | Predicted input was wrong | +| `NetworkStatsUpdate` | `player: PlayerHandle`, `stats: NetworkStats` | Network stats polled | +| `FrameAdvance` | `frame: Frame` | Frame advanced | + +--- + +## `CollectingTelemetry` (Built-in) + +Thread-safe observer that accumulates all events for later inspection. + +| Method | Returns | +|--------|---------| +| `new()` | Empty collector | +| `events()` | `Vec<TelemetryEvent>` -- all events | +| `rollbacks()` | `Vec<TelemetryEvent>` -- filtered rollback events | +| `prediction_misses()` | `Vec<TelemetryEvent>` -- filtered prediction misses | +| `network_stats_updates()` | `Vec<TelemetryEvent>` -- filtered network stats | +| `frame_advances()` | `Vec<TelemetryEvent>` -- filtered frame advances | +| `len()` | `usize` -- event count | +| `is_empty()` | `bool` -- no events collected? | +| `clear()` | Clear all collected events | + +--- + +## Custom Telemetry Observer + +Implement `SessionTelemetry` for your own metrics system: + +```rust +use fortress_rollback::telemetry::SessionTelemetry; +use fortress_rollback::{Frame, PlayerHandle}; +use fortress_rollback::NetworkStats; +use std::sync::atomic::{AtomicUsize, Ordering}; + +struct MetricsTelemetry { + rollback_count: AtomicUsize, + prediction_miss_count: AtomicUsize, +} + +impl SessionTelemetry for MetricsTelemetry { + fn on_rollback(&self, depth: usize, _frame: Frame) { + self.rollback_count.fetch_add(1, Ordering::Relaxed); + tracing::info!(depth, "rollback occurred"); + } + + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { + self.prediction_miss_count.fetch_add(1, Ordering::Relaxed); + tracing::debug!(%player, %frame, "prediction miss"); + } +} +``` + +--- + +## Spec Violation Observability + +The telemetry module also provides a structured pipeline for specification violations -- internal invariant failures detected at runtime.
+
+```mermaid
+graph TD
+    LIB["Library internals"] -->|report_violation!| VO["ViolationObserver trait"]
+
+    subgraph Implementations
+        TO["TracingObserver<br/>default, logs via tracing"]
+        CO["CollectingObserver<br/>testing"]
+        MO["CompositeObserver<br/>multiple observers"]
+    end
+
+    VO -.->|impl| TO
+    VO -.->|impl| CO
+    VO -.->|impl| MO
+```
+
+### `ViolationObserver` Trait
+
+```rust
+// With `sync-send` feature enabled:
+pub trait ViolationObserver: Send + Sync {
+    fn on_violation(&self, violation: &SpecViolation);
+}
+
+// Without `sync-send` feature:
+pub trait ViolationObserver {
+    // same method, no Send + Sync bounds
+}
+```
+
+### `SpecViolation` Struct
+
+Each violation carries structured context:
+
+| Field | Type |
+|-------|------|
+| `severity` | `ViolationSeverity` |
+| `kind` | `ViolationKind` |
+| `message` | `String` |
+| `location` | `&'static str` |
+| `frame` | `Option<Frame>` |
+| `context` | `BTreeMap<String, String>` |
+
+**Builder methods:**
+
+| Method | Description |
+|--------|-------------|
+| `new(severity, kind, message, location)` | Create a new violation |
+| `with_frame(frame)` | Attach a frame reference |
+| `with_context(key, value)` | Add a key-value context entry |
+| `to_json()` | `Option<String>` -- JSON string (requires `json` feature) |
+| `to_json_pretty()` | `Option<String>` -- pretty JSON string (requires `json` feature) |
+
+### `CollectingObserver`
+
+Thread-safe observer that accumulates all violations for later inspection.
+
+| Method | Returns |
+|--------|---------|
+| `new()` | Empty collector |
+| `violations()` | `Vec<SpecViolation>` -- all collected violations |
+| `len()` | Number of violations |
+| `is_empty()` | No violations collected? |
+| `has_violation(kind)` | Any violation of this kind? |
+| `has_severity(severity)` | Any violation at this severity? |
+| `violations_of_kind(kind)` | Filtered by kind |
+| `violations_at_severity(min)` | Filtered by minimum severity |
+| `clear()` | Remove all collected violations |
+
+### `TracingObserver`
+
+Default observer that maps severity levels to tracing log levels: `Warning` → `tracing::warn!`, `Error`/`Critical` → `tracing::error!`. All fields are emitted as structured tracing fields.
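Both this level mapping and the `violations_at_severity(min)` filter lean on severities being ordered (`Warning < Error < Critical`). A self-contained sketch of that logic, with a stand-in `Severity` enum in place of the crate's `ViolationSeverity`:

```rust
// Stand-in severity enum. Deriving Ord uses declaration order, so
// Warning < Error < Critical, which the minimum-severity filter relies on.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Warning,
    Error,
    Critical,
}

// Mirrors TracingObserver's mapping: Warning -> warn, Error/Critical -> error.
fn tracing_level(severity: Severity) -> &'static str {
    match severity {
        Severity::Warning => "warn",
        Severity::Error | Severity::Critical => "error",
    }
}

// Mirrors violations_at_severity(min): keep entries at or above a threshold.
fn at_severity(violations: &[Severity], min: Severity) -> Vec<Severity> {
    violations.iter().copied().filter(|s| *s >= min).collect()
}
```

Declaration order is what makes the derived `Ord` correct here; reordering the variants would silently change what "minimum severity" means.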
+
+### Plugging In
+
+```rust
+use fortress_rollback::telemetry::CollectingObserver;
+use fortress_rollback::SessionBuilder;
+use std::sync::Arc;
+
+let observer = Arc::new(CollectingObserver::new());
+// MyConfig: your Config impl (see user-guide.md)
+let builder = SessionBuilder::<MyConfig>::new()
+    .with_violation_observer(observer.clone());
+
+// After session operations
+assert!(observer.violations().is_empty(), "unexpected violations");
+```
+
+### `ViolationKind` Variants
+
+| Variant | Description |
+|---------|-------------|
+| `FrameSync` | Frame synchronization invariant violated |
+| `InputQueue` | Input queue invariant violated |
+| `StateManagement` | State save/load invariant violated |
+| `NetworkProtocol` | Network protocol invariant violated |
+| `ChecksumMismatch` | Checksum or desync detection issue |
+| `Configuration` | Configuration constraint violated |
+| `InternalError` | Internal logic error (library bug) |
+| `Invariant` | Runtime invariant check failed |
+| `Synchronization` | Sync protocol issues |
+| `ArithmeticOverflow` | Arithmetic overflow detected |
+
+### `ViolationSeverity` Levels
+
+| Level | Meaning |
+|-------|---------|
+| `Warning` | Unexpected but recoverable -- operation continued with fallback |
+| `Error` | Serious issue -- operation may have degraded behavior |
+| `Critical` | Critical invariant broken -- state may be corrupted |
+
+---
+
+## Event Flow
+
+```mermaid
+sequenceDiagram
+    participant Game as Game Loop
+    participant Session as P2PSession
+    participant Telemetry as SessionTelemetry
+
+    Game->>Session: advance_frame()
+    Session->>Telemetry: on_frame_advance(frame)
+
+    Note over Session: Remote input arrives late
+    Session->>Telemetry: on_prediction_miss(player, frame)
+    Session->>Telemetry: on_rollback(depth, target_frame)
+
+    Note over Session: Re-simulate frames
+    loop For each re-simulated frame
+        Session->>Telemetry: on_frame_advance(frame)
+    end
+
+    Game->>Session: network_stats(player)
+    Session->>Telemetry: 
on_network_stats(player, stats) +``` + +--- + +## Use Cases + +- **Performance monitoring** -- Track rollback frequency and prediction accuracy over time +- **Network quality dashboards** -- Aggregate `NetworkStatsUpdate` events per peer +- **Automated testing assertions** -- Use `CollectingTelemetry` to assert rollback counts, prediction accuracy +- **Debug overlays** -- Display rollback count, ping, and frame advantage in a HUD + +--- + +## Integration Tips + +!!! performance + Keep observer callbacks fast -- they run inline during frame processing. Offload heavy work (file I/O, network sends) to a background thread. + +!!! tip + Use `Arc<CollectingTelemetry>` for testing, a custom `SessionTelemetry` impl for production. + +!!! safety + Both `SessionTelemetry` and `ViolationObserver` require `Send + Sync` when the `sync-send` feature is enabled. The `sync-send` feature is not a default feature and must be explicitly opted into. All built-in implementations are thread-safe regardless of feature flags. + +--- + +## See Also + +- [User Guide](user-guide.md) — integrating Fortress Rollback into your game +- [Architecture Overview](architecture.md) — system design and module structure diff --git a/docs/user-guide.md b/docs/user-guide.md index df4de3d9..fcd69330 100644 --- a/docs/user-guide.md +++ b/docs/user-guide.md @@ -67,7 +67,7 @@ use serde::{Deserialize, Serialize}; use std::net::SocketAddr; // 1. 
Define your input type -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct MyInput { buttons: u8, } @@ -164,7 +164,7 @@ use std::net::SocketAddr; // Your input type - sent over the network #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] pub struct GameInput { pub buttons: u8, pub stick_x: i8, @@ -193,7 +193,7 @@ impl Config for GameConfig { Your input type must: -- Be `Copy + Clone + PartialEq` +- Be `Copy + Clone + PartialEq + Eq` - Implement `Default` (used for disconnected players) - Implement `Serialize + Deserialize` (for network transmission) @@ -1888,7 +1888,7 @@ fortress-rollback = { version = "0.6", features = ["sync-send"] } ```rust pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; type State; type Address: Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Debug; } @@ -1898,7 +1898,7 @@ pub trait Config: 'static { ```rust pub trait Config: 'static + Send + Sync { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned + Send + Sync; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned + Send + Sync; type State: Clone + Send + Sync; type Address: Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Send + Sync + Debug; } diff --git a/examples/ex_game/ex_game.rs b/examples/ex_game/ex_game.rs index 7836f4a6..9a6e0272 100644 --- a/examples/ex_game/ex_game.rs +++ b/examples/ex_game/ex_game.rs @@ -26,7 +26,7 @@ const MAX_SPEED: f32 = 7.0; const FRICTION: f32 = 0.98; #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] pub struct Input { pub inp: u8, } diff --git 
a/examples/sync_test.rs b/examples/sync_test.rs index 2fe9927e..2819eb3d 100644 --- a/examples/sync_test.rs +++ b/examples/sync_test.rs @@ -106,7 +106,7 @@ impl CounterState { /// - `PartialEq` - for prediction comparison /// - `Default` - for "no input" / disconnected state /// - `Serialize` + `Deserialize` - for network transmission -#[derive(Copy, Clone, PartialEq, Default, Debug, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Debug, Serialize, Deserialize)] struct CounterInput { /// Whether to add to the counter this frame increment: bool, diff --git a/fuzz/fuzz_targets/fuzz_input_queue_direct.rs b/fuzz/fuzz_targets/fuzz_input_queue_direct.rs index 9f0933fe..5821b255 100644 --- a/fuzz/fuzz_targets/fuzz_input_queue_direct.rs +++ b/fuzz/fuzz_targets/fuzz_input_queue_direct.rs @@ -26,7 +26,7 @@ use std::net::SocketAddr; /// Test input configuration #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { value: u8, } diff --git a/mkdocs.yml b/mkdocs.yml index 38517303..d6ee19fc 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -208,6 +208,8 @@ nav: - Home: index.md - Getting Started: - User Guide: user-guide.md + - Match Replay: replay.md + - Session Telemetry: telemetry.md - Migration Guide: migration.md - Architecture: - Overview: architecture.md diff --git a/scripts/ci/check-changelog-format.sh b/scripts/ci/check-changelog-format.sh new file mode 100755 index 00000000..b68302e8 --- /dev/null +++ b/scripts/ci/check-changelog-format.sh @@ -0,0 +1,167 @@ +#!/bin/bash +# Changelog Format Check for Fortress Rollback +# +# Validates that CHANGELOG.md follows the Keep a Changelog convention: +# all ### level headings within version sections must be one of the six +# standard types: Added, Changed, Deprecated, Removed, Fixed, Security. 
+# +# Non-standard headings (e.g., "### Breaking") are flagged as errors with +# a suggestion to use the appropriate standard heading instead. +# +# Usage: ./scripts/ci/check-changelog-format.sh [options] +# ./scripts/ci/check-changelog-format.sh # Check CHANGELOG.md +# ./scripts/ci/check-changelog-format.sh --verbose # Show all headings checked +# ./scripts/ci/check-changelog-format.sh --help # Show help +# +# Exit codes: +# 0 - No issues found +# 1 - Non-standard headings detected + +set -euo pipefail + +# Configuration +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)" +CHANGELOG="$PROJECT_ROOT/CHANGELOG.md" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Standard Keep a Changelog heading types +VALID_HEADINGS="Added|Changed|Deprecated|Removed|Fixed|Security" + +# Options +VERBOSE=false + +print_usage() { + echo "Usage: $0 [options]" + echo "" + echo "Options:" + echo " --verbose Show all headings checked" + echo " --help Show this help message" + echo "" + echo "Validates that CHANGELOG.md uses only standard Keep a Changelog" + echo "section headings: Added, Changed, Deprecated, Removed, Fixed, Security." + echo "" + echo "Non-standard headings like '### Breaking' should use '### Changed'" + echo "with a '**Breaking:**' prefix on each entry instead." +} + +main() { + # Parse arguments + while [[ $# -gt 0 ]]; do + case "$1" in + --verbose) + VERBOSE=true + shift + ;; + --help) + print_usage + exit 0 + ;; + *) + echo "Unknown argument: $1" + print_usage + exit 1 + ;; + esac + done + + echo "==========================================" + echo " Changelog Format Check" + echo "==========================================" + echo "" + + if [[ ! 
-f "$CHANGELOG" ]]; then + echo -e "${RED}ERROR: CHANGELOG.md not found at $CHANGELOG${NC}" + exit 1 + fi + + local issues=0 + local headings_checked=0 + local line_num=0 + local in_version_section=false + local past_separator=false + + while IFS= read -r line; do + line_num=$((line_num + 1)) + + # Detect the --- separator that ends the version sections. + # After this point, headings belong to non-version content (e.g., + # the "Breaking Changes from GGRS" migration guide) and should + # not be validated against Keep a Changelog conventions. + if [[ "$line" =~ ^---[[:space:]]*$ ]]; then + past_separator=true + continue + fi + + # Stop checking once we're past the separator + if [[ "$past_separator" == "true" ]]; then + continue + fi + + # Detect ## [version] headers (version sections) + if [[ "$line" =~ ^##[[:space:]]+\[ ]]; then + in_version_section=true + continue + fi + + # Detect other ## headers (non-version, e.g., top-level "# Changelog") + # A bare ## without [ after it means we left a version section + if [[ "$line" =~ ^##[[:space:]] ]] && ! [[ "$line" =~ ^##[[:space:]]+\[ ]]; then + in_version_section=false + continue + fi + + # Only check ### headings inside version sections + if [[ "$in_version_section" == "true" ]] && [[ "$line" =~ ^###[[:space:]] ]]; then + # Extract the heading text (everything after "### ") + local heading="${line#\#\#\# }" + + headings_checked=$((headings_checked + 1)) + + if [[ "$heading" =~ ^($VALID_HEADINGS)$ ]]; then + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}OK${NC}: Line $line_num: ### $heading" + fi + else + issues=$((issues + 1)) + echo -e "${RED}ERROR${NC}: Line $line_num: non-standard heading '### $heading'" + + # Provide specific guidance for common mistakes + if [[ "$heading" == "Breaking" ]]; then + echo -e " ${BLUE}Fix:${NC} Change '### Breaking' to '### Changed'" + echo -e " and prefix each entry with '**Breaking:**' (entries may already have this prefix)." 
+ else + echo -e " ${BLUE}Fix:${NC} Use one of the standard Keep a Changelog headings:" + echo -e " Added, Changed, Deprecated, Removed, Fixed, Security" + fi + echo "" + fi + fi + + done < "$CHANGELOG" + + echo "" + + if [[ "$issues" -eq 0 ]]; then + echo -e "${GREEN}SUCCESS: All $headings_checked section heading(s) use standard Keep a Changelog types.${NC}" + echo -e " Valid types: Added, Changed, Deprecated, Removed, Fixed, Security" + exit 0 + fi + + echo -e "${RED}FAILED: $issues non-standard heading(s) found in CHANGELOG.md.${NC}" + echo "" + echo "All ### section headings within version entries must be one of:" + echo " Added, Changed, Deprecated, Removed, Fixed, Security" + echo "" + echo "For breaking changes, use '### Changed' with '**Breaking:**' prefix on each entry." + echo "See https://keepachangelog.com/en/1.1.0/ for the full specification." + exit 1 +} + +main "$@" diff --git a/scripts/ci/check-derive-bounds.sh b/scripts/ci/check-derive-bounds.sh new file mode 100755 index 00000000..aee9ff3a --- /dev/null +++ b/scripts/ci/check-derive-bounds.sh @@ -0,0 +1,278 @@ +#!/bin/bash +# Derive Bounds Check for Fortress Rollback +# +# Detects overly-strict derive bounds on generic types. Specifically, flags +# cases where `Eq` is derived on a public struct/enum with generic type +# parameters but the generic bounds don't actually require `Eq`. This means +# the derive is silently adding `Eq` bounds beyond what the type needs, +# preventing it from being used with types that implement `PartialEq` but +# not `Eq`. 
+# +# Usage: ./scripts/ci/check-derive-bounds.sh [options] +# ./scripts/ci/check-derive-bounds.sh # Check all Rust files +# ./scripts/ci/check-derive-bounds.sh --verbose # Show all types checked +# ./scripts/ci/check-derive-bounds.sh --help # Show help +# +# Exit codes: +# 0 - No issues found +# 1 - Overly-strict derive bounds detected + +set -euo pipefail + +# Configuration +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Options +VERBOSE=false + +print_usage() { + echo "Usage: $0 [options]" + echo "" + echo "Options:" + echo " --verbose Show all types checked" + echo " --help Show this help message" + echo "" + echo "Detects overly-strict derive bounds on generic types." + echo "Flags #[derive(Eq)] on public generic types whose bounds" + echo "don't require Eq." +} + +# Check whether text contains a standalone token using POSIX ERE-safe +# boundaries (portable across GNU/BSD grep). 
+contains_token() { + local text="$1" + local token="$2" + token=$(printf '%s' "$token" | sed 's/[][(){}.^$*+?|\\]/\\&/g') + echo "$text" | grep -qE "(^|[^[:alnum:]_])${token}([^[:alnum:]_]|$)" +} + +main() { + # Parse arguments + while [[ $# -gt 0 ]]; do + case "$1" in + --verbose) + VERBOSE=true + shift + ;; + --help) + print_usage + exit 0 + ;; + *) + echo "Unknown argument: $1" + print_usage + exit 1 + ;; + esac + done + + echo "==========================================" + echo " Derive Bounds Check" + echo "==========================================" + echo "" + + local issues=0 + local types_checked=0 + + # Process all Rust source files under src/ + while IFS= read -r file; do + [[ -z "$file" ]] && continue + + local rel_path="${file#"$PROJECT_ROOT/"}" + local total_lines + total_lines=$(wc -l < "$file") + + # Find lines with #[derive(] and collect the full derive text, + # handling both single-line and multi-line derives. + # Output format: "start_lineno:end_lineno:full derive text (joined on one line)" + local derive_lines="" + local derive_start_lines + derive_start_lines=$(grep -n '#\[derive(' "$file" 2>/dev/null || true) + [[ -z "$derive_start_lines" ]] && continue + + while IFS= read -r start_match; do + [[ -z "$start_match" ]] && continue + local start_lineno + start_lineno=$(echo "$start_match" | cut -d: -f1) + local start_text + start_text=$(echo "$start_match" | cut -d: -f2-) + + local full_derive_text="$start_text" + local derive_end_lineno="$start_lineno" + # If the closing )] is not on the same line, collect subsequent lines + if ! 
echo "$start_text" | grep -qF ')'; then + local scan_line=$((start_lineno + 1)) + while [[ "$scan_line" -le "$total_lines" ]]; do + local next_line + next_line=$(sed -n "${scan_line}p" "$file") + full_derive_text="$full_derive_text $next_line" + if echo "$next_line" | grep -qF ')'; then + derive_end_lineno="$scan_line" + break + fi + scan_line=$((scan_line + 1)) + done + fi + + # Check if the full derive text contains standalone Eq (not just PartialEq) + local without_partial_check + without_partial_check=$(echo "$full_derive_text" | sed 's/PartialEq//g') + if contains_token "$without_partial_check" 'Eq'; then + # Format: start_lineno:end_lineno:full derive text + derive_lines+="${start_lineno}:${derive_end_lineno}:${full_derive_text}"$'\n' + fi + done <<< "$derive_start_lines" + + # Trim trailing newline and skip if empty + derive_lines=$(echo "$derive_lines" | sed '/^$/d') + [[ -z "$derive_lines" ]] && continue + + while IFS= read -r derive_match; do + [[ -z "$derive_match" ]] && continue + + local derive_lineno + derive_lineno=$(echo "$derive_match" | cut -d: -f1) + local derive_end + derive_end=$(echo "$derive_match" | cut -d: -f2) + local derive_text + derive_text=$(echo "$derive_match" | cut -d: -f3-) + + # Skip if this derive line doesn't actually contain standalone Eq + # (not just PartialEq). We need Eq as a separate token. + # Remove PartialEq first, then check for Eq + local without_partial + without_partial=$(echo "$derive_text" | sed 's/PartialEq//g') + if ! contains_token "$without_partial" 'Eq'; then + continue + fi + + # Skip if the derive line has a "derive-bounds:ok" suppression comment. + # Use this for types where Eq is intentional despite no explicit bounds + # (e.g., types always used with Config::Input which requires Eq). 
+ if echo "$derive_text" | grep -qF 'derive-bounds:ok'; then + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}SKIP${NC}: $rel_path:$derive_lineno (suppressed via derive-bounds:ok)" + fi + continue + fi + + # Look at the next few lines after the derive for a pub struct/enum + # with generics. Use derive_end to skip past multi-line derives. + local search_end=$((derive_end + 5)) + if [[ "$search_end" -gt "$total_lines" ]]; then + search_end="$total_lines" + fi + + local following + following=$(sed -n "$((derive_end + 1)),${search_end}p" "$file") + + # Check for pub struct/enum with generic parameters <...> + local type_line + type_line=$(echo "$following" | grep -E '^[[:space:]]*pub[[:space:]]+(struct|enum)[[:space:]]+[[:alnum:]_]+[[:space:]]*<' | head -1 || true) + + if [[ -z "$type_line" ]]; then + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}SKIP${NC}: $rel_path:$derive_lineno (not a public generic type)" + fi + continue + fi + + types_checked=$((types_checked + 1)) + + # Extract the type name + local type_name + type_name=$(echo "$type_line" | sed -E 's/^[[:space:]]*pub[[:space:]]+(struct|enum)[[:space:]]+([[:alnum:]_]+).*/\2/') + + # Now check if there's a where clause or inline bounds requiring Eq + # We look at the type definition and any where clause up to the opening brace + local def_start=$((derive_end + 1)) + local def_end=$((derive_end + 15)) + if [[ "$def_end" -gt "$total_lines" ]]; then + def_end="$total_lines" + fi + + local type_block + type_block=$(sed -n "${def_start},${def_end}p" "$file") + + # Flatten the block into a single line for multi-line where clause matching + local flat_block + flat_block=$(echo "$type_block" | tr '\n' ' ') + + # Check if Eq appears in bounds (where clause or inline bounds) + # Look for patterns like: T: Eq, T: ... + Eq, where ... Eq + local has_eq_bound=false + local eq_bound_reason="" + + # Check inline bounds on the generic parameter, e.g. 
or + local inline_generics + inline_generics=$(echo "$flat_block" | grep -oE '<[^>]*>' || true) + if [[ -n "$inline_generics" ]] && contains_token "$inline_generics" 'Eq'; then + has_eq_bound=true + eq_bound_reason="inline generic bound" + fi + + # Check where clause for Eq bound (flattened, so multi-line where clauses work) + if [[ "$has_eq_bound" == "false" ]] && \ + echo "$flat_block" | grep -qE '(^|[[:space:]])where[[:space:]]' && \ + contains_token "$flat_block" 'Eq'; then + has_eq_bound=true + eq_bound_reason="where-clause bound" + fi + + # Check if generic parameter is bounded by a trait whose associated types + # require Eq (e.g., T: Config where Config::Address: Eq). We recognize + # the Config trait specifically since it's this crate's main trait. + if [[ "$has_eq_bound" == "false" ]] && \ + echo "$flat_block" | grep -qE '(^|[^[:alnum:]_])[[:alnum:]_]+[[:space:]]*:[[:space:]]*Config([^[:alnum:]_]|$)'; then + has_eq_bound=true + eq_bound_reason="Config trait bound" + fi + + if [[ "$has_eq_bound" == "true" ]]; then + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}OK${NC}: $rel_path:$derive_lineno $type_name (Eq bound present via $eq_bound_reason)" + fi + else + issues=$((issues + 1)) + echo "" + echo -e "${RED}ERROR${NC}: $rel_path:$derive_lineno" + echo -e " Type ${YELLOW}${type_name}${NC} derives Eq but its generic bounds don't require Eq." + echo -e " ${BLUE}Derive line:${NC} $derive_text" + echo -e " ${BLUE}Type line:${NC} $(echo "$type_line" | sed 's/^[[:space:]]*//')" + echo -e " ${BLUE}Analyzed bounds:${NC} $(echo "$flat_block" | tr -s '[:space:]' ' ' | cut -c1-200)" + echo -e " ${BLUE}Fix:${NC} Remove Eq from the derive, or add Eq to the generic bounds." + echo -e " Deriving Eq on a generic type adds an implicit Eq bound on all" + echo -e " type parameters, which may be stricter than necessary." 
+ fi + + done <<< "$derive_lines" + + done < <(find "$PROJECT_ROOT/src" -name '*.rs' -print 2>/dev/null | sort) + + echo "" + + if [[ "$issues" -eq 0 ]]; then + echo -e "${GREEN}SUCCESS: No overly-strict derive bounds found.${NC}" + if [[ "$types_checked" -gt 0 ]]; then + echo -e " ($types_checked public generic type(s) with Eq checked)" + fi + exit 0 + fi + + echo -e "${RED}FAILED: $issues type(s) have overly-strict derive bounds.${NC}" + echo "" + echo "When a generic type derives Eq, it adds an implicit I: Eq bound." + echo "If the type's explicit bounds only require PartialEq, the Eq derive" + echo "is overly strict and prevents use with PartialEq-only types." + exit 1 +} + +main "$@" diff --git a/scripts/ci/check-doc-claims.sh b/scripts/ci/check-doc-claims.sh new file mode 100755 index 00000000..003380f2 --- /dev/null +++ b/scripts/ci/check-doc-claims.sh @@ -0,0 +1,148 @@ +#!/bin/bash +# Doc Comment Accuracy Check for Fortress Rollback +# +# Checks that doc comments mentioning "downcast" are backed by actual +# downcasting infrastructure in the same file. This prevents misleading +# documentation that references capabilities the code doesn't support. +# +# Usage: ./scripts/ci/check-doc-claims.sh [options] +# ./scripts/ci/check-doc-claims.sh # Check all Rust files +# ./scripts/ci/check-doc-claims.sh --verbose # Show all files checked +# ./scripts/ci/check-doc-claims.sh --help # Show help +# +# Exit codes: +# 0 - No issues found +# 1 - Misleading doc comments detected + +set -euo pipefail + +# Configuration +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" +PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." 
&& pwd)" + +# Colors +RED='\033[0;31m' +GREEN='\033[0;32m' +YELLOW='\033[1;33m' +BLUE='\033[0;34m' +NC='\033[0m' # No Color + +# Options +VERBOSE=false + +print_usage() { + echo "Usage: $0 [options]" + echo "" + echo "Options:" + echo " --verbose Show all files checked" + echo " --help Show this help message" + echo "" + echo "Checks doc comments for claims about downcasting that aren't" + echo "backed by actual downcasting infrastructure in the same file." +} + +main() { + # Parse arguments + while [[ $# -gt 0 ]]; do + case "$1" in + --verbose) + VERBOSE=true + shift + ;; + --help) + print_usage + exit 0 + ;; + *) + echo "Unknown argument: $1" + print_usage + exit 1 + ;; + esac + done + + echo "==========================================" + echo " Doc Comment Accuracy Check" + echo "==========================================" + echo "" + + # Patterns that indicate actual downcasting infrastructure. + # Use POSIX ERE-safe token boundaries for portability across GNU/BSD grep. + # If a file mentions downcasting in docs, it should contain at least one of these. 
+ local downcast_infra_patterns='(as_any|downcast_ref|downcast_mut|dyn Any|: Any|impl Any|Any \+|Any\+|\.downcast([^[:alnum:]_]|$))' + + local issues=0 + local files_with_claims=0 + + # Find all Rust source files (excluding target directories) + while IFS= read -r file; do + [[ -z "$file" ]] && continue + + local rel_path="${file#"$PROJECT_ROOT/"}" + + # Find doc comment lines mentioning "downcast" (case-insensitive) + local doc_matches + doc_matches=$(grep -niE '^[[:space:]]*///.*downcast|^[[:space:]]*//!.*downcast' "$file" 2>/dev/null || true) + + if [[ -z "$doc_matches" ]]; then + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}OK${NC}: $rel_path (no downcast doc claims)" + fi + continue + fi + + files_with_claims=$((files_with_claims + 1)) + + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${YELLOW}Checking${NC}: $rel_path (has downcast doc claims)" + fi + + # Check if the file has actual downcasting infrastructure + local infra_matches + infra_matches=$(grep -nE "$downcast_infra_patterns" "$file" 2>/dev/null || true) + + local has_infra + has_infra=$(echo "$infra_matches" | grep -cE '^[0-9]+:' || true) + has_infra=${has_infra:-0} + + if [[ "$has_infra" -eq 0 ]]; then + issues=$((issues + 1)) + echo "" + echo -e "${RED}ERROR${NC}: $rel_path" + echo -e " Doc comments mention \"downcast\" but no downcasting infrastructure found." + echo -e " ${YELLOW}Doc comment(s):${NC}" + while IFS= read -r match_line; do + echo -e " $match_line" + done <<< "$doc_matches" + echo -e " ${BLUE}Expected one of:${NC} as_any, downcast_ref, downcast_mut, dyn Any, : Any" + echo -e " ${BLUE}Fix:${NC} Either add downcasting support or update the doc comment" + echo -e " to accurately describe the actual pattern used." 
+ else + if [[ "$VERBOSE" == "true" ]]; then + echo -e " ${GREEN}OK${NC}: downcasting infrastructure found ($has_infra occurrence(s))" + echo "$infra_matches" | head -3 | sed 's/^/ match: /' + fi + fi + + done < <(find "$PROJECT_ROOT/src" "$PROJECT_ROOT/tests" "$PROJECT_ROOT/examples" "$PROJECT_ROOT/benches" \ + -name '*.rs' -print 2>/dev/null \ + | sort) + + echo "" + + if [[ "$issues" -eq 0 ]]; then + echo -e "${GREEN}SUCCESS: No misleading downcast doc claims found.${NC}" + if [[ "$files_with_claims" -gt 0 ]]; then + echo -e " ($files_with_claims file(s) with downcast references verified)" + fi + exit 0 + fi + + echo -e "${RED}FAILED: $issues file(s) have misleading downcast doc claims.${NC}" + echo "" + echo "Doc comments should accurately describe the code's capabilities." + echo "If downcasting isn't supported, describe the actual pattern instead." + exit 1 +} + +main "$@" diff --git a/scripts/ci/check-shell-portability.sh b/scripts/ci/check-shell-portability.sh index 202f5d86..98c1db3f 100755 --- a/scripts/ci/check-shell-portability.sh +++ b/scripts/ci/check-shell-portability.sh @@ -51,6 +51,29 @@ collect_shell_files() { | sort } +# Return the first non-portable PCRE-style escape found in a line, +# or nothing if no such escape appears. +first_nonportable_escape() { + local line="$1" + local escape + for escape in '\b' '\B' '\s' '\S' '\w' '\W' '\d' '\D' '\<' '\>'; do + if [[ "$line" == *"$escape"* ]]; then + printf '%s' "$escape" + return 0 + fi + done + return 1 +} + +# Match shell variable/array assignments that can store regex patterns, +# including append assignments like PATTERNS+='...'. +# Intentionally excludes forms with spaces around '=' because those are not +# valid shell assignments (they are command invocations in POSIX shells). +is_regex_assignment_line() { + local line="$1" + [[ "$line" =~ ^[[:space:]]*[[:alpha:]_][[:alnum:]_]*(\[[^]]+\])?(\+)?= ]] +} + # Check a single file for portability issues. 
# Appends findings to the ISSUES array (global). # Arguments: $1 = file path @@ -97,6 +120,28 @@ check_file() { SUGGESTIONS+=(" ${BLUE}Fix:${NC} Use 'grep -E' with 'sed' for post-processing instead of grep -P") fi + # (b2) Non-portable PCRE-style escapes with POSIX ERE (-E) + # grep -E and sed -E do not portably support escapes like \b, \s, \w. + # GNU tools may accept them as extensions, but BSD/macOS often do not. + local bad_escape + bad_escape=$(first_nonportable_escape "$line" || true) + if [[ -n "$bad_escape" ]] && \ + echo "$line" | grep -qE '(^|[[:space:];|&])(grep|sed)[[:space:]]+-[a-zA-Z]*E([[:space:]]|$)'; then + ISSUES+=("${rel_path}:${line_num}: Non-portable regex escape ${bad_escape} with ERE (-E)") + DETAILS+=(" ${YELLOW}Line:${NC} $line") + SUGGESTIONS+=(" ${BLUE}Fix:${NC} Use POSIX classes/boundaries: [[:space:]] for \\s, [[:alnum:]_] for \\w, and (^|[^[:alnum:]_])word([^[:alnum:]_]|$) for \\bword\\b") + fi + + # (b3) Variable assignments containing non-portable escapes. + # This catches indirection patterns like: + # pattern='\\bword\\b'; grep -E "$pattern" file + # which can otherwise bypass single-line grep/sed command checks. 
+ if [[ -n "$bad_escape" ]] && is_regex_assignment_line "$line"; then + ISSUES+=("${rel_path}:${line_num}: Non-portable regex escape ${bad_escape} in pattern assignment") + DETAILS+=(" ${YELLOW}Line:${NC} $line") + SUGGESTIONS+=(" ${BLUE}Fix:${NC} Keep regex assignments POSIX-safe too: [[:space:]] not \\s, [[:alnum:]_] not \\w, and (^|[^[:alnum:]_])word([^[:alnum:]_]|$) not \\bword\\b") + fi + # (c) Hardcoded /bin/sed or /usr/bin/sed paths if [[ "$line" =~ /bin/sed[[:space:]] ]] || [[ "$line" =~ /usr/bin/sed[[:space:]] ]]; then ISSUES+=("${rel_path}:${line_num}: Hardcoded sed path") diff --git a/scripts/docs/check-links.sh b/scripts/docs/check-links.sh index d0a6fb3d..13a28939 100755 --- a/scripts/docs/check-links.sh +++ b/scripts/docs/check-links.sh @@ -231,7 +231,7 @@ check_rust_doc_links() { # Extract links from doc comments: /// [text](link) or //! [text](link) local doc_links - doc_links=$(grep -E '^\s*(///|//!)' "$file" 2>/dev/null | grep -oE '\[([^]]*)\]\(([^)]+)\)' | sed 's/\[.*\](\(.*\))/\1/' | sed 's/)$//') + doc_links=$(grep -E '^[[:space:]]*(///|//!)' "$file" 2>/dev/null | grep -oE '\[([^]]*)\]\(([^)]+)\)' | sed 's/\[.*\](\(.*\))/\1/' | sed 's/)$//') for link in $doc_links; do # Skip intra-doc links: diff --git a/scripts/docs/check-llm-skills.sh b/scripts/docs/check-llm-skills.sh index 71544c37..05475fe8 100755 --- a/scripts/docs/check-llm-skills.sh +++ b/scripts/docs/check-llm-skills.sh @@ -155,9 +155,9 @@ check_unguarded_unwrap() { fi # Check if the current line is a pure Rust comment - if echo "$line" | grep -qE '^\s*//'; then + if echo "$line" | grep -qE '^[[:space:]]*//'; then # Track justification comments for nearby code lines - if echo "$line" | grep -qE '//\s*(build\.rs:|test:|Loom test:|Fuzz target|proptest:|allowed:|SAFETY:|In tests:)'; then + if echo "$line" | grep -qE '//[[:space:]]*(build\.rs:|test:|Loom test:|Fuzz target|proptest:|allowed:|SAFETY:|In tests:)'; then last_justification_at=$line_num fi # Skip -- mentioning .unwrap() in a 
comment is not executable code @@ -165,7 +165,7 @@ check_unguarded_unwrap() { fi # Check if the current line is an accepted attribute - if echo "$line" | grep -qE '^\s*#\[(allow\(|test|fixture|cfg\(test\))'; then + if echo "$line" | grep -qE '^[[:space:]]*#\[(allow\(|test|fixture|cfg\(test\))'; then last_justification_at=$line_num fi @@ -174,7 +174,7 @@ check_unguarded_unwrap() { local justified=0 # Check if there's a justifying comment on the same line - if echo "$line" | grep -qE '//\s*(build\.rs:|test:|Loom test:|Fuzz target|proptest:|allowed:|SAFETY:|In tests:)'; then + if echo "$line" | grep -qE '//[[:space:]]*(build\.rs:|test:|Loom test:|Fuzz target|proptest:|allowed:|SAFETY:|In tests:)'; then justified=1 fi diff --git a/scripts/docs/sync-wiki.py b/scripts/docs/sync-wiki.py index aafd2df1..9d5b3410 100644 --- a/scripts/docs/sync-wiki.py +++ b/scripts/docs/sync-wiki.py @@ -86,6 +86,8 @@ "fortress-vs-ggrs.md": "Fortress-vs-GGRS", "ggrs-changelog-archive.md": "GGRS-Changelog-Archive", "tlaplus-tooling-research.md": "TLAplus-Tooling-Research", + "replay.md": "Replay", + "telemetry.md": "Telemetry", # Specs directory "specs/formal-spec.md": "Formal-Specification", "specs/determinism-model.md": "Determinism-Model", @@ -106,6 +108,7 @@ GITHUB_REPO_URL = "https://github.com/wallstop/fortress-rollback" GITHUB_BLOB_URL = f"{GITHUB_REPO_URL}/blob/main" GITHUB_RAW_URL = "https://raw.githubusercontent.com/wallstop/fortress-rollback/main" +SYNC_HEADER_RE = re.compile(r"^\s*$") class LinkMatch(NamedTuple): @@ -484,6 +487,25 @@ def strip_mkdocs_frontmatter(content: str) -> str: return content +def ensure_wiki_sync_header(content: str, docs_rel_path: str) -> str: + """Ensure wiki output has a reciprocal SYNC header pointing to docs source.""" + normalized_docs_path = normalize_path(docs_rel_path) + header = ( + "" + ) + + lines = content.splitlines() + # Strip ALL existing SYNC headers so the canonical one is always line 1 + lines = [line for line in lines if not 
SYNC_HEADER_RE.match(line.strip())] + + body = "\n".join(lines).lstrip("\r\n") + if not body: + return header + + return f"{header}\n\n{body}" + + def convert_grid_cards_to_list(content: str) -> str: """Convert Material grid cards divs to markdown list format. @@ -928,14 +950,31 @@ def read_file_safe(path: Path) -> str | None: return None +def normalize_generated_text(content: str) -> str: + """Normalize generated text to a stable EOF format. + + This mirrors end-of-file-fixer behavior for trailing whitespace/newlines, + ensuring generated files always end with exactly one LF when non-empty. + """ + if not content: + return "" + return content.rstrip("\r\n \t") + "\n" + + def write_file_safe(path: Path, content: str, dry_run: bool = False) -> bool: """Safely write a file, returning False on error.""" if dry_run: logger.info(f" [DRY RUN] Would write: {path}") return True + + normalized_content = normalize_generated_text(content) + + if content != normalized_content: + logger.debug(f" Normalized EOF newline: {path}") + try: path.parent.mkdir(parents=True, exist_ok=True) - path.write_text(content, encoding="utf-8") + path.write_text(normalized_content, encoding="utf-8") return True except OSError as e: logger.error(f"Error writing {path}: {e}") @@ -970,6 +1009,7 @@ def process_file( content = convert_links(content, relative_path, wiki_structure) content = convert_asset_paths(content, relative_path) content = strip_mkdocs_features(content) + content = ensure_wiki_sync_header(content, relative_path) # Write to wiki dest_path = wiki_dir / f"{wiki_name}.md" @@ -1056,6 +1096,8 @@ def generate_sidebar(wiki_structure: dict[str, str]) -> str: ## Documentation - [User Guide](User-Guide) +- [Match Replay](Replay) +- [Session Telemetry](Telemetry) - [Architecture](Architecture) - [Migration](Migration) @@ -1086,6 +1128,21 @@ def generate_sidebar(wiki_structure: dict[str, str]) -> str: return sidebar +def find_sidebar_missing_pages(sidebar: str, wiki_structure: dict[str, str]) -> 
list[str]: + """Return wiki page names that are mapped but absent from the sidebar.""" + linked_targets: set[str] = set() + for match in re.finditer(r"\[[^\]]+\]\(([^)\s]+)\)", sidebar): + target = match.group(1).split("#", 1)[0] + if target: + linked_targets.add(target) + + missing: list[str] = [] + for wiki_name in wiki_structure.values(): + if wiki_name not in linked_targets: + missing.append(wiki_name) + return sorted(missing) + + def generate_home(docs_dir: Path, wiki_structure: dict[str, str]) -> str: """Generate the Home.md landing page from index.md.""" index_path = docs_dir / "index.md" @@ -1096,6 +1153,7 @@ def generate_home(docs_dir: Path, wiki_structure: dict[str, str]) -> str: content = convert_links(content, "index.md", wiki_structure) content = convert_asset_paths(content, "index.md") content = strip_mkdocs_features(content) + content = ensure_wiki_sync_header(content, "index.md") return content # Fallback content if index.md doesn't exist @@ -1263,6 +1321,18 @@ def main() -> int: wiki_structure = discover_docs(docs_dir) logger.info(f" Found {len(wiki_structure)} files to sync") + # Validate sidebar coverage before writing any files to avoid partial syncs. + logger.info("\nValidating sidebar coverage...") + sidebar_content = generate_sidebar(wiki_structure) + missing_sidebar_pages = find_sidebar_missing_pages(sidebar_content, wiki_structure) + if missing_sidebar_pages: + logger.error( + "Sidebar is missing mapped pages: " + + ", ".join(missing_sidebar_pages) + + ". Update generate_sidebar() template." 
+ ) + return 1 + # Clean wiki directory if args.clean: logger.info("\nCleaning wiki directory...") @@ -1302,7 +1372,6 @@ def main() -> int: # Generate sidebar logger.info("Generating sidebar...") - sidebar_content = generate_sidebar(wiki_structure) if not write_file_safe(wiki_dir / "_Sidebar.md", sidebar_content, args.dry_run): errors += 1 diff --git a/scripts/docs/verify-markdown-code.sh b/scripts/docs/verify-markdown-code.sh index 55312457..e19134d6 100755 --- a/scripts/docs/verify-markdown-code.sh +++ b/scripts/docs/verify-markdown-code.sh @@ -249,7 +249,7 @@ is_incomplete_snippet() { fi # Contains // ... comment placeholder - if echo "$code" | grep -qE '//\s*\.\.\.|//\s*\.\.\.'; then + if echo "$code" | grep -qE '//[[:space:]]*\.\.\.'; then echo "contains // ... comment" return 0 fi @@ -267,7 +267,7 @@ is_incomplete_snippet() { fi # Shell command prefixed with $ (sometimes in markdown) - if echo "$code" | grep -qE '^\$\s'; then + if echo "$code" | grep -qE '^\$[[:space:]]'; then echo "appears to be shell command" return 0 fi @@ -286,28 +286,28 @@ is_incomplete_snippet() { fi # Before/After style documentation - if echo "$code" | grep -qE '//\s*(Before|After)'; then + if echo "$code" | grep -qE '//[[:space:]]*(Before|After)'; then echo "before/after documentation example" return 0 fi # References undefined session variable (common in documentation) # Only skip if session is used but not defined with "let session" or "let mut session" - if echo "$code" | grep -qE '\bsession\b'; then - if ! echo "$code" | grep -qE 'let\s+(mut\s+)?session\s*[=:]'; then + if echo "$code" | grep -qE '(^|[^[:alnum:]_])session([^[:alnum:]_]|$)'; then + if ! echo "$code" | grep -qE 'let[[:space:]]+(mut[[:space:]]+)?session[[:space:]]*[=:]'; then echo "references undefined session variable (documentation example)" return 0 fi fi # References undefined game_state variable - if echo "$code" | grep -qE '\bgame_state\b' && ! 
echo "$code" | grep -qE 'let.*game_state'; then + if echo "$code" | grep -qE '(^|[^[:alnum:]_])game_state([^[:alnum:]_]|$)' && ! echo "$code" | grep -qE 'let.*game_state'; then echo "references undefined game_state variable (documentation example)" return 0 fi # References functions that are meant to be user-defined - if echo "$code" | grep -qE '\b(handle_event|handle_requests|get_local_input|apply_input|compute_checksum|render)\s*\('; then + if echo "$code" | grep -qE '(^|[^[:alnum:]_])(handle_event|handle_requests|get_local_input|apply_input|compute_checksum|render)[[:space:]]*\('; then echo "references user-defined functions (documentation example)" return 0 fi @@ -342,7 +342,7 @@ is_incomplete_snippet() { fi # Short snippets referencing config variables that need definition - if echo "$code" | grep -qE '\b(sparse_saving|first_incorrect|last_saved|check_distance)\b'; then + if echo "$code" | grep -qE '(^|[^[:alnum:]_])(sparse_saving|first_incorrect|last_saved|check_distance)([^[:alnum:]_]|$)'; then if ! echo "$code" | grep -qE 'let.*(sparse_saving|first_incorrect|last_saved|check_distance)'; then echo "references undefined config variable (documentation example)" return 0 @@ -350,7 +350,7 @@ is_incomplete_snippet() { fi # References to types that need to be defined (spectator, player, etc.) - if echo "$code" | grep -qE '\bspectator\b|\bplayer\b' && echo "$code" | grep -qE '\.(address|handle|socket)'; then + if echo "$code" | grep -qE '(^|[^[:alnum:]_])(spectator|player)([^[:alnum:]_]|$)' && echo "$code" | grep -qE '\.(address|handle|socket)'; then if ! 
echo "$code" | grep -qE 'let.*(spectator|player)'; then echo "references undefined spectator/player variables" return 0 diff --git a/scripts/hooks/check-changelog-unreleased.py b/scripts/hooks/check-changelog-unreleased.py new file mode 100644 index 00000000..4f092df2 --- /dev/null +++ b/scripts/hooks/check-changelog-unreleased.py @@ -0,0 +1,153 @@ +#!/usr/bin/env python3 +"""Enforce the unreleased code rule in CHANGELOG.md. + +The rule: Never add separate 'Fixed' or 'Changed' entries for code that +has not yet been released. Fixes to unreleased features should be folded +into the existing 'Added' entry describing that feature. The changelog +should describe the final shipped state, not intermediate development +history. + +Exception: '### Changed' entries that ALL start with '**Breaking:**' are +legitimate (they document new enum variants or API changes affecting +already-released types). + +Cross-platform: Works on Linux, macOS, and Windows. +""" +from __future__ import annotations + +import re +import sys +from pathlib import Path + + +def check_file(filepath: Path, repo_root: Path | None = None) -> bool: + """Check CHANGELOG.md for unreleased code rule violations. + + Returns True if the file passes (no violations found). + When repo_root is provided, paths in output are relative to it. 
+ """ + if repo_root is not None: + try: + display_path = filepath.relative_to(repo_root) + except ValueError: + display_path = filepath + else: + display_path = filepath + + try: + content = filepath.read_text(encoding="utf-8") + except (OSError, UnicodeDecodeError) as e: + print(f"{display_path}:0: cannot read file: {e}", file=sys.stderr) + return False + + lines = content.splitlines() + + # Find the [Unreleased] section + unreleased_start = None + unreleased_end = None + for i, line in enumerate(lines): + if re.match(r"^## \[Unreleased\]", line): + unreleased_start = i + elif unreleased_start is not None and re.match(r"^## \[", line): + unreleased_end = i + break + + if unreleased_start is None: + # No [Unreleased] section -- nothing to check + return True + + if unreleased_end is None: + unreleased_end = len(lines) + + # Parse subsections within [Unreleased] + has_added = False + fixed_line = 0 + changed_line = 0 + has_fixed = False + has_non_breaking_changed = False + + # Track current subsection for entry analysis + current_subsection = None + changed_entries: list[str] = [] + + for i in range(unreleased_start + 1, unreleased_end): + line = lines[i] + subsection_match = re.match(r"^### (.+)$", line) + if subsection_match: + # Before switching subsections, evaluate the previous one + if current_subsection == "Changed": + # Check if ALL entries start with **Breaking:** + non_breaking = [ + e for e in changed_entries + if not e.lstrip("- ").startswith("**Breaking:**") + ] + if non_breaking: + has_non_breaking_changed = True + + subsection_name = subsection_match.group(1).strip() + current_subsection = subsection_name + changed_entries = [] + + if subsection_name == "Added": + has_added = True + elif subsection_name == "Fixed": + has_fixed = True + fixed_line = i + 1 + elif subsection_name == "Changed": + changed_line = i + 1 + elif current_subsection == "Changed" and line.strip().startswith("- "): + changed_entries.append(line.strip()) + + # Evaluate the last 
subsection if it was Changed + if current_subsection == "Changed": + non_breaking = [ + e for e in changed_entries + if not e.lstrip("- ").startswith("**Breaking:**") + ] + if non_breaking: + has_non_breaking_changed = True + + # Check for violations + violations_found = False + + if has_added and has_fixed: + print( + f"{display_path}:{fixed_line}: '### Fixed' subsection found " + f"alongside '### Added' in [Unreleased] -- fixes to unreleased " + f"features should be folded into the existing Added entry. " + f"The changelog should describe the final shipped state, not " + f"intermediate development history.", + file=sys.stderr, + ) + violations_found = True + + if has_added and has_non_breaking_changed: + print( + f"{display_path}:{changed_line}: '### Changed' subsection with " + f"non-Breaking entries found alongside '### Added' in " + f"[Unreleased] -- changes to unreleased features should be " + f"folded into the existing Added entry. Only **Breaking:** " + f"entries (for already-released types) belong in Changed.", + file=sys.stderr, + ) + violations_found = True + + return not violations_found + + +def main() -> int: + repo_root = Path(__file__).resolve().parent.parent.parent + changelog = repo_root / "CHANGELOG.md" + + if not changelog.is_file(): + # No CHANGELOG.md -- nothing to check + return 0 + + if not check_file(changelog, repo_root): + return 1 + + return 0 + + +if __name__ == "__main__": + sys.exit(main()) diff --git a/scripts/hooks/check-sync-headers.py b/scripts/hooks/check-sync-headers.py index e0ea4746..33beab9a 100644 --- a/scripts/hooks/check-sync-headers.py +++ b/scripts/hooks/check-sync-headers.py @@ -22,22 +22,45 @@ SYNC_RE = re.compile(r"^\s*$") TARGET_RE = re.compile(r"\b((?:docs|wiki)/[^\s>]+\.md)\b") +SYNC_REMEDIATION = "python scripts/docs/sync-wiki.py" -def _display_path(path: Path) -> str: +def _display_path(path: Path, repo_root: Path | None = None) -> str: """Convert path to a repo-relative path for diagnostics.""" + base = 
repo_root if repo_root is not None else Path.cwd() try: - return str(path.resolve().relative_to(Path.cwd().resolve())) + return str(path.resolve().relative_to(base.resolve())) except ValueError: return str(path) -def _extract_sync_target(path: Path) -> tuple[int, str] | None: +def _find_case_insensitive_match(path: Path, repo_root: Path | None = None) -> str | None: + """Return a repo-relative filename with different casing, if present.""" + parent = path.parent + if not parent.exists(): + return None + + expected_name = path.name.lower() + matches = [] + for candidate in parent.iterdir(): + if candidate.name.lower() != expected_name: + continue + if candidate.name == path.name: + continue + matches.append(candidate) + + if not matches: + return None + matches.sort(key=lambda p: p.name) + return _display_path(matches[0], repo_root) + + +def _extract_sync_target(path: Path, repo_root: Path | None = None) -> tuple[int, str] | None: """Extract sync target from the first few lines of a markdown file.""" try: lines = path.read_text(encoding="utf-8", errors="replace").splitlines() except OSError as exc: - raise OSError(f"{_display_path(path)}:0: cannot read file: {exc}") from exc + raise OSError(f"{_display_path(path, repo_root)}:0: cannot read file: {exc}") from exc for idx, line in enumerate(lines[:20], start=1): match = SYNC_RE.match(line.strip()) @@ -50,17 +73,17 @@ def _extract_sync_target(path: Path) -> tuple[int, str] | None: return None -def _load_wiki_structure(sync_script: Path) -> dict[str, str]: +def _load_wiki_structure(sync_script: Path, repo_root: Path | None = None) -> dict[str, str]: """Load WIKI_STRUCTURE mapping from scripts/docs/sync-wiki.py via AST.""" try: content = sync_script.read_text(encoding="utf-8", errors="replace") except OSError as exc: - raise OSError(f"{_display_path(sync_script)}:0: cannot read file: {exc}") from exc + raise OSError(f"{_display_path(sync_script, repo_root)}:0: cannot read file: {exc}") from exc try: tree = 
ast.parse(content, filename=str(sync_script)) except SyntaxError as exc: - raise SyntaxError(f"{_display_path(sync_script)}:0: cannot parse file: {exc}") from exc + raise SyntaxError(f"{_display_path(sync_script, repo_root)}:0: cannot parse file: {exc}") from exc for node in ast.walk(tree): if isinstance(node, ast.Assign): @@ -73,7 +96,7 @@ def _load_wiki_structure(sync_script: Path) -> dict[str, str]: mapping[str(key.value)] = str(value.value) return mapping - raise ValueError(f"{_display_path(sync_script)}:0: WIKI_STRUCTURE not found") + raise ValueError(f"{_display_path(sync_script, repo_root)}:0: WIKI_STRUCTURE not found") def _check_file(repo_root: Path, rel_path: Path) -> list[str]: @@ -85,12 +108,15 @@ def _check_file(repo_root: Path, rel_path: Path) -> list[str]: return issues abs_path = repo_root / rel_path - sync_data = _extract_sync_target(abs_path) + try: + sync_data = _extract_sync_target(abs_path, repo_root) + except OSError as exc: + return [str(exc)] if sync_data is None: return issues line_no, target = sync_data - display_path = _display_path(abs_path) + display_path = _display_path(abs_path, repo_root) if not target: issues.append( @@ -113,12 +139,24 @@ def _check_file(repo_root: Path, rel_path: Path) -> list[str]: target_abs = repo_root / target if not target_abs.exists(): - issues.append( - f"{display_path}:{line_no}: SYNC target does not exist: {target}" - ) + case_match = _find_case_insensitive_match(target_abs, repo_root) + if case_match is not None: + issues.append( + f"{display_path}:{line_no}: SYNC target does not exist: {target} " + f"(case mismatch; found {case_match})" + ) + else: + issues.append( + f"{display_path}:{line_no}: SYNC target does not exist: {target} " + f"(remediation: run {SYNC_REMEDIATION})" + ) return issues - target_sync_data = _extract_sync_target(target_abs) + try: + target_sync_data = _extract_sync_target(target_abs, repo_root) + except OSError as exc: + issues.append(str(exc)) + return issues if target_sync_data is 
None: issues.append( f"{display_path}:{line_no}: SYNC target missing reciprocal SYNC header: {target}" @@ -128,7 +166,7 @@ def _check_file(repo_root: Path, rel_path: Path) -> list[str]: target_line, target_target = target_sync_data if target_target != rel_path.as_posix(): issues.append( - f"{_display_path(target_abs)}:{target_line}: reciprocal SYNC header must reference {rel_path.as_posix()}" + f"{_display_path(target_abs, repo_root)}:{target_line}: reciprocal SYNC header must reference {rel_path.as_posix()}" ) return issues @@ -147,15 +185,31 @@ def _check_required_pair(repo_root: Path, docs_rel: str, wiki_name: str) -> list issues.append(f"docs/{docs_rel}:0: expected docs mirror file is missing") return issues if not wiki_path.exists(): - issues.append(f"wiki/{wiki_name}.md:0: expected wiki mirror file is missing") + case_match = _find_case_insensitive_match(wiki_path, repo_root) + if case_match is not None: + issues.append( + f"wiki/{wiki_name}.md:0: expected wiki mirror file is missing " + f"(case mismatch; found {case_match})" + ) + else: + issues.append( + f"wiki/{wiki_name}.md:0: expected wiki mirror file is missing " + f"(remediation: run {SYNC_REMEDIATION})" + ) return issues - docs_sync = _extract_sync_target(docs_path) - if docs_sync is None: + docs_read_ok = True + try: + docs_sync = _extract_sync_target(docs_path, repo_root) + except OSError as exc: + issues.append(str(exc)) + docs_sync = None + docs_read_ok = False + if docs_sync is None and docs_read_ok: issues.append( f"docs/{docs_rel}:1: missing SYNC header; expected target {docs_expected}" ) - else: + elif docs_sync is not None: docs_line, docs_target = docs_sync if not docs_target: issues.append( @@ -166,12 +220,18 @@ def _check_required_pair(repo_root: Path, docs_rel: str, wiki_name: str) -> list f"docs/{docs_rel}:{docs_line}: SYNC header must target {docs_expected}, got {docs_target}" ) - wiki_sync = _extract_sync_target(wiki_path) - if wiki_sync is None: + wiki_read_ok = True + try: + 
wiki_sync = _extract_sync_target(wiki_path, repo_root) + except OSError as exc: + issues.append(str(exc)) + wiki_sync = None + wiki_read_ok = False + if wiki_sync is None and wiki_read_ok: issues.append( f"wiki/{wiki_name}.md:1: missing SYNC header; expected target {wiki_expected}" ) - else: + elif wiki_sync is not None: wiki_line, wiki_target = wiki_sync if not wiki_target: issues.append( @@ -192,23 +252,32 @@ def main() -> int: sync_script = repo_root / "scripts/docs/sync-wiki.py" try: - wiki_structure = _load_wiki_structure(sync_script) + wiki_structure = _load_wiki_structure(sync_script, repo_root) except (OSError, SyntaxError, ValueError) as exc: print(str(exc), file=sys.stderr) return 1 + validated: set[str] = set() for docs_rel, wiki_name in wiki_structure.items(): issues.extend(_check_required_pair(repo_root, docs_rel, wiki_name)) + validated.add(f"docs/{docs_rel}") + validated.add(f"wiki/{wiki_name}.md") # Also validate any SYNC headers that exist outside the required pairs. for root in ("docs", "wiki"): for path in (repo_root / root).rglob("*.md"): - issues.extend(_check_file(repo_root, path.relative_to(repo_root))) + rel = path.relative_to(repo_root) + if rel.as_posix() not in validated: + issues.extend(_check_file(repo_root, rel)) if issues: print("SYNC header validation failed:", file=sys.stderr) for issue in issues: print(issue, file=sys.stderr) + print( + f"hint: regenerate wiki mirrors with `{SYNC_REMEDIATION}` and restage changes", + file=sys.stderr, + ) return 1 return 0 diff --git a/scripts/tests/test_check_shell_portability.py b/scripts/tests/test_check_shell_portability.py new file mode 100644 index 00000000..e9210131 --- /dev/null +++ b/scripts/tests/test_check_shell_portability.py @@ -0,0 +1,192 @@ +#!/usr/bin/env python3 +"""Tests for scripts/ci/check-shell-portability.sh.""" + +from __future__ import annotations + +import shutil +import subprocess +from pathlib import Path + +REPO_ROOT = Path(__file__).resolve().parents[2] +CHECKER_SOURCE = 
REPO_ROOT / "scripts" / "ci" / "check-shell-portability.sh" + + +def _setup_repo(tmp_path: Path) -> tuple[Path, Path]: + """Create a temporary repo containing the portability checker.""" + repo = tmp_path / "repo" + checker = repo / "scripts" / "ci" / "check-shell-portability.sh" + checker.parent.mkdir(parents=True, exist_ok=True) + shutil.copy2(CHECKER_SOURCE, checker) + return repo, checker + + +def _write_script(repo: Path, rel_path: str, content: str) -> Path: + """Write a shell script fixture into the temporary repo.""" + path = repo / rel_path + path.parent.mkdir(parents=True, exist_ok=True) + path.write_text(content, encoding="utf-8") + path.chmod(0o755) + return path + + +def _run_checker(checker: Path) -> subprocess.CompletedProcess[str]: + """Run the checker script and capture stdout/stderr.""" + return subprocess.run( + ["bash", str(checker)], + cwd=checker.parent.parent.parent, + capture_output=True, + text=True, + check=False, + ) + + +def test_detects_nonportable_word_boundary_escape(tmp_path: Path) -> None: + """grep -E with \\b is reported as non-portable.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "grep -qE '\\\\bEq\\\\b' sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "Non-portable regex escape" in combined + assert "\\b" in combined + + +def test_detects_nonportable_whitespace_and_word_escapes(tmp_path: Path) -> None: + """grep -E with \\s or \\w is reported as non-portable.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "grep -qE '^\\s*pub\\s+\\w+$' sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "Non-portable regex escape" in combined + + +def test_detects_nonportable_digit_escape(tmp_path: Path) -> None: + """grep -E with \\d is 
reported as non-portable.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "grep -qE '^\\d+$' sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "Non-portable regex escape" in combined + + +def test_detects_nonportable_pattern_assignment(tmp_path: Path) -> None: + """Variable-assigned regex escapes are also reported.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "pattern='\\\\bEq\\\\b'\n" + "grep -qE \"$pattern\" sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "pattern assignment" in combined + + +def test_detects_nonportable_pattern_append_assignment(tmp_path: Path) -> None: + """Append assignments with regex escapes are also reported.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "patterns+=('\\\\w+')\n" + "grep -qE \"${patterns[0]}\" sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "pattern assignment" in combined + + +def test_detects_nonportable_pattern_array_assignment(tmp_path: Path) -> None: + """Array index assignments with regex escapes are reported.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "patterns[0]='\\\\d+'\n" + "grep -qE \"${patterns[0]}\" sample.txt\n", + ) + + result = _run_checker(checker) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "pattern assignment" in combined + + +def test_ignores_invalid_spaced_equals_pseudo_assignment(tmp_path: Path) -> None: + """Spacing around '=' is not a valid shell assignment and is ignored.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + 
repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "pattern = '\\\\bEq\\\\b'\n", + ) + + result = _run_checker(checker) + + assert result.returncode == 0 + + +def test_allows_posix_classes_for_boundaries(tmp_path: Path) -> None: + """POSIX-safe ERE patterns are accepted.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "grep -qE '(^|[^[:alnum:]_])Eq([^[:alnum:]_]|$)' sample.txt\n" + "grep -qE '^[[:space:]]*pub[[:space:]]+[[:alnum:]_]+$' sample.txt\n", + ) + + result = _run_checker(checker) + + assert result.returncode == 0 + + +def test_ignores_echoed_examples(tmp_path: Path) -> None: + """Descriptive echo lines containing regex text are ignored.""" + repo, checker = _setup_repo(tmp_path) + _write_script( + repo, + "scripts/sample.sh", + "#!/bin/bash\n" + "echo \"Use grep -E '\\\\bword\\\\b' patterns with care\"\n", + ) + + result = _run_checker(checker) + + assert result.returncode == 0 diff --git a/scripts/tests/test_check_sync_headers.py b/scripts/tests/test_check_sync_headers.py new file mode 100644 index 00000000..6f0831ba --- /dev/null +++ b/scripts/tests/test_check_sync_headers.py @@ -0,0 +1,222 @@ +#!/usr/bin/env python3 +"""Unit tests for check-sync-headers.py hook.""" + +from __future__ import annotations + +import importlib.util +from pathlib import Path + +import pytest + +scripts_dir = Path(__file__).parent.parent +spec = importlib.util.spec_from_file_location( + "check_sync_headers", + scripts_dir / "hooks" / "check-sync-headers.py", +) +check_sync_headers = importlib.util.module_from_spec(spec) +spec.loader.exec_module(check_sync_headers) + +_check_file = check_sync_headers._check_file +_check_required_pair = check_sync_headers._check_required_pair +_extract_sync_target = check_sync_headers._extract_sync_target +main = check_sync_headers.main + + +def _write(path: Path, content: str) -> None: + """Write UTF-8 text after creating parent directories.""" + path.parent.mkdir(parents=True, 
exist_ok=True) + path.write_text(content, encoding="utf-8") + + +def _assert_no_absolute_paths(issues: list[str]) -> None: + """Verify no diagnostic messages contain absolute filesystem paths. + + Diagnostics follow the format ``path:line: message``. + The path portion must never start with ``/`` or a Windows drive letter. + """ + for issue in issues: + assert not issue.startswith("/"), ( + f"diagnostic contains absolute path: {issue}" + ) + # Windows drive letter check (e.g., "C:\...") + if len(issue) >= 3 and issue[1] == ":" and issue[0].isalpha() and issue[2] in ("\\", "/"): + raise AssertionError(f"diagnostic contains absolute path: {issue}") + + +def _write_sync_script(repo_root: Path, mapping: dict[str, str]) -> None: + """Create a minimal sync-wiki.py with an AST-parsable WIKI_STRUCTURE.""" + entries = "\n".join( + f' "{docs_rel}": "{wiki_name}",' for docs_rel, wiki_name in mapping.items() + ) + _write( + repo_root / "scripts" / "docs" / "sync-wiki.py", + f"WIKI_STRUCTURE = {{\n{entries}\n}}\n", + ) + + +class TestRequiredPairDiagnostics: + """Tests for required docs/wiki pair diagnostics.""" + + @pytest.mark.parametrize( + ("wiki_filename", "expected_fragment"), + [ + ( + "Replay.md", + "remediation: run python scripts/docs/sync-wiki.py", + ), + ( + "replay.md", + "case mismatch; found wiki/replay.md", + ), + ], + ids=["missing-wiki", "case-mismatch"], + ) + def test_missing_wiki_mirror_reports_actionable_message( + self, + tmp_path: Path, + wiki_filename: str, + expected_fragment: str, + ) -> None: + """Missing required wiki files include remediation and case mismatch hints.""" + _write( + tmp_path / "docs" / "replay.md", + "\n", + ) + + if wiki_filename != "Replay.md": + _write( + tmp_path / "wiki" / wiki_filename, + "\n", + ) + + issues = _check_required_pair(tmp_path, "replay.md", "Replay") + + assert len(issues) == 1 + assert expected_fragment in issues[0] + _assert_no_absolute_paths(issues) + + +class TestCheckFileDiagnostics: + """Tests for free-form
SYNC header diagnostics.""" + + def test_missing_target_reports_case_mismatch(self, tmp_path: Path) -> None: + """A missing target points to likely casing errors when possible.""" + _write( + tmp_path / "docs" / "replay.md", + "\n", + ) + _write( + tmp_path / "wiki" / "replay.md", + "\n", + ) + + issues = _check_file(tmp_path, Path("docs/replay.md")) + + assert len(issues) == 1 + assert "case mismatch; found wiki/replay.md" in issues[0] + _assert_no_absolute_paths(issues) + + +class TestPathDiagnostics: + """Ensure all diagnostic paths are repo-relative, never absolute.""" + + def test_check_file_case_mismatch_uses_relative_paths(self, tmp_path: Path) -> None: + """_check_file diagnostics use relative paths even without chdir.""" + _write(tmp_path / "docs" / "replay.md", + "\n") + _write(tmp_path / "wiki" / "replay.md", # wrong case + "\n") + + issues = _check_file(tmp_path, Path("docs/replay.md")) + assert len(issues) == 1 + _assert_no_absolute_paths(issues) + + def test_check_required_pair_case_mismatch_uses_relative_paths(self, tmp_path: Path) -> None: + """_check_required_pair diagnostics use relative paths even without chdir.""" + _write(tmp_path / "docs" / "replay.md", + "\n") + _write(tmp_path / "wiki" / "replay.md", # wrong case + "\n") + + issues = _check_required_pair(tmp_path, "replay.md", "Replay") + assert len(issues) == 1 + _assert_no_absolute_paths(issues) + + def test_check_file_missing_target_uses_relative_paths(self, tmp_path: Path) -> None: + """Missing target diagnostics use relative paths.""" + _write(tmp_path / "docs" / "replay.md", + "\n") + # Don't create wiki/Replay.md at all + + issues = _check_file(tmp_path, Path("docs/replay.md")) + assert len(issues) == 1 + _assert_no_absolute_paths(issues) + + def test_check_file_reciprocal_mismatch_uses_relative_paths(self, tmp_path: Path) -> None: + """Reciprocal mismatch diagnostics use relative paths.""" + _write(tmp_path / "docs" / "replay.md", + "\n") + _write(tmp_path / "wiki" / "Replay.md", + 
"\n") + + issues = _check_file(tmp_path, Path("docs/replay.md")) + assert len(issues) == 1 + _assert_no_absolute_paths(issues) + + def test_extract_sync_target_error_uses_relative_path(self, tmp_path: Path) -> None: + """_extract_sync_target OSError diagnostics use relative paths.""" + missing = tmp_path / "docs" / "nonexistent.md" + with pytest.raises(OSError) as exc_info: + _extract_sync_target(missing, repo_root=tmp_path) + msg = str(exc_info.value) + assert not msg.startswith("/"), f"error contains absolute path: {msg}" + + def test_check_file_unreadable_returns_issue_not_exception(self, tmp_path: Path) -> None: + """_check_file gracefully handles unreadable files as diagnostics.""" + _write(tmp_path / "docs" / "replay.md", + "\n") + # Create the target as a directory so reading it as a file raises OSError. + (tmp_path / "wiki" / "Replay.md").mkdir(parents=True) + + issues = _check_file(tmp_path, Path("docs/replay.md")) + # The target exists (as a directory) but _extract_sync_target raises OSError. 
+ assert len(issues) >= 1 + _assert_no_absolute_paths(issues) + + +class TestMain: + """Integration tests for the hook entrypoint.""" + + def test_main_emits_remediation_hint(self, tmp_path: Path, capsys: pytest.CaptureFixture[str], monkeypatch: pytest.MonkeyPatch) -> None: + """When validation fails, the hook prints a global remediation hint.""" + _write_sync_script(tmp_path, {"replay.md": "Replay"}) + _write( + tmp_path / "docs" / "replay.md", + "\n", + ) + + monkeypatch.chdir(tmp_path) + exit_code = main() + + captured = capsys.readouterr() + assert exit_code == 1 + assert "hint: regenerate wiki mirrors with `python scripts/docs/sync-wiki.py`" in captured.err + + def test_main_passes_for_valid_reciprocal_pair( + self, + tmp_path: Path, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + """The hook succeeds when required docs/wiki pairs are reciprocal.""" + _write_sync_script(tmp_path, {"replay.md": "Replay"}) + _write( + tmp_path / "docs" / "replay.md", + "\n", + ) + _write( + tmp_path / "wiki" / "Replay.md", + "\n", + ) + + monkeypatch.chdir(tmp_path) + assert main() == 0 diff --git a/scripts/tests/test_ci_regex_portability.py b/scripts/tests/test_ci_regex_portability.py new file mode 100644 index 00000000..5b7bba82 --- /dev/null +++ b/scripts/tests/test_ci_regex_portability.py @@ -0,0 +1,112 @@ +#!/usr/bin/env python3 +"""Regression tests for CI scripts affected by ERE portability issues.""" + +from __future__ import annotations + +import shutil +import subprocess +from pathlib import Path + +REPO_ROOT = Path(__file__).resolve().parents[2] +DOC_CLAIMS_SOURCE = REPO_ROOT / "scripts" / "ci" / "check-doc-claims.sh" +DERIVE_BOUNDS_SOURCE = REPO_ROOT / "scripts" / "ci" / "check-derive-bounds.sh" + + +def _setup_repo_with_script(tmp_path: Path, script_source: Path) -> tuple[Path, Path]: + """Create a temporary repo with one CI script copied into scripts/ci/.""" + repo = tmp_path / "repo" + script_path = repo / "scripts" / "ci" / script_source.name + 
script_path.parent.mkdir(parents=True, exist_ok=True) + shutil.copy2(script_source, script_path) + return repo, script_path + + +def _write_rust(repo: Path, rel_path: str, content: str) -> Path: + """Write a Rust source fixture into the temporary repo.""" + path = repo / rel_path + path.parent.mkdir(parents=True, exist_ok=True) + path.write_text(content, encoding="utf-8") + return path + + +def _run_script(script_path: Path) -> subprocess.CompletedProcess[str]: + """Run a copied CI shell script in its temporary repo.""" + return subprocess.run( + ["bash", str(script_path)], + cwd=script_path.parent.parent.parent, + capture_output=True, + text=True, + check=False, + ) + + +def test_doc_claims_accepts_downcast_method_syntax(tmp_path: Path) -> None: + """Doc claims with .downcast::() are recognized as backed by infra.""" + repo, script_path = _setup_repo_with_script(tmp_path, DOC_CLAIMS_SOURCE) + _write_rust( + repo, + "src/downcast_ok.rs", + "/// This helper supports downcast-based extraction.\n" + "pub fn extract(value: Box) {\n" + " let _ = value.downcast::();\n" + "}\n", + ) + + result = _run_script(script_path) + + combined = result.stdout + result.stderr + assert result.returncode == 0, combined + + +def test_doc_claims_flags_unbacked_downcast_docs(tmp_path: Path) -> None: + """Doc claims mentioning downcast without infrastructure are rejected.""" + repo, script_path = _setup_repo_with_script(tmp_path, DOC_CLAIMS_SOURCE) + _write_rust( + repo, + "src/downcast_missing.rs", + "/// This type allows callers to downcast to concrete types.\n" + "pub struct NoDowncastInfra;\n", + ) + + result = _run_script(script_path) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert 'Doc comments mention "downcast"' in combined + + +def test_derive_bounds_flags_eq_without_eq_bound(tmp_path: Path) -> None: + """Public generic derives Eq without Eq bound should fail.""" + repo, script_path = _setup_repo_with_script(tmp_path, DERIVE_BOUNDS_SOURCE) + 
_write_rust( + repo, + "src/derive_bad.rs", + "#[derive(Clone, Debug, PartialEq, Eq)]\n" + "pub struct Wrapper {\n" + " value: T,\n" + "}\n", + ) + + result = _run_script(script_path) + + combined = result.stdout + result.stderr + assert result.returncode == 1 + assert "derives Eq" in combined + + +def test_derive_bounds_accepts_explicit_eq_bound(tmp_path: Path) -> None: + """Public generic derives Eq with explicit Eq bound should pass.""" + repo, script_path = _setup_repo_with_script(tmp_path, DERIVE_BOUNDS_SOURCE) + _write_rust( + repo, + "src/derive_ok.rs", + "#[derive(Clone, Debug, PartialEq, Eq)]\n" + "pub struct Wrapper {\n" + " value: T,\n" + "}\n", + ) + + result = _run_script(script_path) + + combined = result.stdout + result.stderr + assert result.returncode == 0, combined diff --git a/scripts/tests/test_sync_wiki.py b/scripts/tests/test_sync_wiki.py index 1db0bf5f..ba28d419 100644 --- a/scripts/tests/test_sync_wiki.py +++ b/scripts/tests/test_sync_wiki.py @@ -28,9 +28,16 @@ convert_grid_cards_to_list = sync_wiki.convert_grid_cards_to_list convert_links = sync_wiki.convert_links dedent_mkdocs_tabs = sync_wiki.dedent_mkdocs_tabs +ensure_wiki_sync_header = sync_wiki.ensure_wiki_sync_header find_code_fence_ranges = sync_wiki.find_code_fence_ranges find_inline_code_ranges = sync_wiki.find_inline_code_ranges +find_sidebar_missing_pages = sync_wiki.find_sidebar_missing_pages +generate_home = sync_wiki.generate_home +generate_sidebar = sync_wiki.generate_sidebar +main = sync_wiki.main path_to_wiki_name = sync_wiki.path_to_wiki_name +process_file = sync_wiki.process_file +write_file_safe = sync_wiki.write_file_safe strip_mkdocs_attributes = sync_wiki.strip_mkdocs_attributes strip_mkdocs_features = sync_wiki.strip_mkdocs_features strip_mkdocs_icons = sync_wiki.strip_mkdocs_icons @@ -938,5 +945,270 @@ def test_unknown_page_auto_generates_wiki_name(self) -> None: assert result == "[Unknown](Some-Unknown-Page)" +class TestSyncHeaders: + """Tests for reciprocal SYNC 
header generation in wiki output.""" + + @pytest.mark.parametrize( + ("input_content", "docs_rel", "expected_header"), + [ + ( + "\n\n# Replay\n", + "replay.md", + "", + ), + ( + "# No sync header yet\n", + "specs/formal-spec.md", + "", + ), + ], + ids=["replace-existing-sync", "prepend-missing-sync"], + ) + def test_ensure_wiki_sync_header_generates_reciprocal_target( + self, + input_content: str, + docs_rel: str, + expected_header: str, + ) -> None: + """Wiki content always points SYNC metadata back to docs source files.""" + result = ensure_wiki_sync_header(input_content, docs_rel) + first_line = result.splitlines()[0] + + assert first_line == expected_header + + def test_process_file_writes_reciprocal_sync_header(self, tmp_path: Path) -> None: + """process_file rewrites docs SYNC headers for wiki output.""" + docs_dir = tmp_path / "docs" + wiki_dir = tmp_path / "wiki" + src_path = docs_dir / "replay.md" + src_path.parent.mkdir(parents=True, exist_ok=True) + src_path.write_text( + "\n\n# Replay\n", + encoding="utf-8", + ) + + wiki_structure = {"replay.md": "Replay"} + assert process_file(src_path, "Replay", docs_dir, wiki_dir, wiki_structure) + + output = (wiki_dir / "Replay.md").read_text(encoding="utf-8") + first_line = output.splitlines()[0] + assert first_line == ( + "" + ) + + def test_generate_home_writes_reciprocal_sync_header(self, tmp_path: Path) -> None: + """generate_home uses docs/index.md as the SYNC source target.""" + docs_dir = tmp_path / "docs" + docs_dir.mkdir(parents=True, exist_ok=True) + (docs_dir / "index.md").write_text( + "\n\n# Home\n", + encoding="utf-8", + ) + + output = generate_home(docs_dir, {"index.md": "Home"}) + assert output.splitlines()[0] == ( + "" + ) + + def test_ensure_wiki_sync_header_replaces_header_beyond_first_lines(self) -> None: + """Existing SYNC headers are moved to line 1 even when they appear later.""" + prelude = "\n".join(f"line {idx}" for idx in range(30)) + content = ( + f"{prelude}\n" + "\n" + "\n" + "# 
Replay\n" + ) + + result = ensure_wiki_sync_header(content, "replay.md") + + expected_header = ( + "" + ) + assert result.count("\n" + "\n" + "\n" + "\n" + "# Replay\n" + ) + + result = ensure_wiki_sync_header(content, "replay.md") + + expected_header = ( + "" + ) + assert result.count("\n\n# Replay\n\nExample body\n", + "# Replay\n\nExample body ", + ], + ids=[ + "source-missing-eof-newline", + "source-with-eof-newline", + "source-with-existing-sync-header", + "source-with-trailing-whitespace", + ], + ) + def test_process_file_normalizes_and_is_idempotent( + self, + tmp_path: Path, + source_content: str, + ) -> None: + """process_file should produce stable output across repeated runs.""" + docs_dir = tmp_path / "docs" + wiki_dir = tmp_path / "wiki" + src_path = docs_dir / "replay.md" + src_path.parent.mkdir(parents=True, exist_ok=True) + src_path.write_text(source_content, encoding="utf-8") + + wiki_structure = {"replay.md": "Replay"} + + assert process_file(src_path, "Replay", docs_dir, wiki_dir, wiki_structure) + first_output = (wiki_dir / "Replay.md").read_bytes() + + assert first_output.endswith(b"\n") + assert not first_output.endswith(b"\n\n") + + assert process_file(src_path, "Replay", docs_dir, wiki_dir, wiki_structure) + second_output = (wiki_dir / "Replay.md").read_bytes() + + assert first_output == second_output + + +class TestMainBehavior: + """Integration tests for sync-wiki main() control flow.""" + + def test_main_fails_before_writing_on_sidebar_coverage_error( + self, + tmp_path: Path, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + """Sidebar coverage failures should abort before mutating wiki output.""" + docs_dir = tmp_path / "docs" + wiki_dir = tmp_path / "wiki" + docs_dir.mkdir(parents=True, exist_ok=True) + wiki_dir.mkdir(parents=True, exist_ok=True) + (docs_dir / "index.md").write_text("# Home\n", encoding="utf-8") + + sentinel = "UNCHANGED\n" + (wiki_dir / "_Sidebar.md").write_text(sentinel, encoding="utf-8") + + 
monkeypatch.chdir(tmp_path) + monkeypatch.setattr(sys, "argv", ["sync-wiki.py"]) + monkeypatch.setattr( + sync_wiki, + "generate_sidebar", + lambda _wiki_structure: "# Fortress Rollback\n\n- [Home](Home)\n", + ) + + assert main() == 1 + assert (wiki_dir / "_Sidebar.md").read_text(encoding="utf-8") == sentinel + assert not (wiki_dir / "Home.md").exists() + + if __name__ == "__main__": pytest.main([__file__, "-v"]) diff --git a/scripts/verification/check-kani-coverage.sh b/scripts/verification/check-kani-coverage.sh index 2e5fb1f4..1c011a59 100755 --- a/scripts/verification/check-kani-coverage.sh +++ b/scripts/verification/check-kani-coverage.sh @@ -67,7 +67,7 @@ get_source_proofs() { # Extract proof names from verify-kani.sh tier arrays get_tiered_proofs() { # Extract all proof names from TIER1_PROOFS, TIER2_PROOFS, and TIER3_PROOFS arrays - grep -E '^\s*"proof_[^"]+"\s*$' "$VERIFY_KANI_SCRIPT" \ + grep -E '^[[:space:]]*"proof_[^"]+"[[:space:]]*$' "$VERIFY_KANI_SCRIPT" \ | sed 's/.*"\(proof_[^"]*\)".*/\1/' \ | sort \ | uniq diff --git a/src/error.rs b/src/error.rs index 76e8b3a4..f566e636 100644 --- a/src/error.rs +++ b/src/error.rs @@ -164,6 +164,18 @@ pub enum InvalidFrameReason { /// [`LoadGameState`]: crate::FortressRequest::LoadGameState /// [`SaveGameState`]: crate::FortressRequest::SaveGameState MissingState, + /// Replay has no more frames to play back. + /// + /// Returned by [`ReplaySession::advance_frame()`] when the replay data + /// has been fully consumed. Check [`ReplaySession::is_complete()`] before + /// calling `advance_frame()` to avoid this error. + /// + /// [`ReplaySession::advance_frame()`]: crate::ReplaySession::advance_frame + /// [`ReplaySession::is_complete()`]: crate::ReplaySession::is_complete + ReplayExhausted { + /// The last frame available in the replay. + last_frame: Frame, + }, /// Custom reason (fallback for API compatibility). 
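The `check-kani-coverage.sh` change above swaps PCRE-style `\s` for `[[:space:]]`, per the shell-regex portability rule. A small Python sketch of the POSIX-safe token-boundary idiom that rule prescribes (the helper names and the class-expansion trick for testing with `re` are invented for this illustration):

```python
import re


def posix_token_boundary(word: str) -> str:
    # POSIX ERE has no \b, \s, or \w; emulate a word boundary with
    # explicit character classes, as the portability rule requires.
    return f"(^|[^[:alnum:]_]){word}([^[:alnum:]_]|$)"


def to_python_regex(posix_pattern: str) -> str:
    # Expand the POSIX named classes so the pattern can be exercised
    # with Python's `re`, which does not support [:alnum:] syntax.
    return posix_pattern.replace("[:alnum:]", "0-9A-Za-z")
```

With this, `posix_token_boundary("downcast")` matches the standalone word but not `downcasting` or `no_downcast_infra`, mirroring what `grep -E` would do with the same pattern on any POSIX system.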
Custom(&'static str), } @@ -203,6 +215,9 @@ impl Display for InvalidFrameReason { }, Self::NullOrNegative => write!(f, "frame is NULL or negative"), Self::MissingState => write!(f, "no saved state exists for this frame"), + Self::ReplayExhausted { last_frame } => { + write!(f, "replay exhausted (last frame: {})", last_frame) + }, Self::Custom(s) => write!(f, "{}", s), } } @@ -230,7 +245,8 @@ pub enum RleDecodeReason { /// An unknown or unexpected error occurred during RLE decoding. /// /// This variant is used as a fallback when the underlying error cannot be - /// mapped to a more specific reason (e.g., when downcasting fails). + /// mapped to a more specific reason (e.g., when a `FortressError` variant + /// does not match the expected structured RLE error kind). Unknown, } @@ -291,8 +307,8 @@ pub enum DeltaDecodeReason { }, /// An unknown or unexpected error occurred during delta decoding. /// - /// This variant is used as a fallback when the underlying error cannot be - /// mapped to a more specific reason (e.g., when downcasting fails). + /// This variant exists for forward compatibility and as a fallback + /// if future error-mapping code encounters an unexpected error kind. 
Unknown, } @@ -1420,6 +1436,19 @@ mod tests { ); } + #[test] + fn invalid_frame_reason_replay_exhausted_display() { + let reason = InvalidFrameReason::ReplayExhausted { + last_frame: Frame::new(99), + }; + let display = format!("{reason}"); + assert!( + display.contains("replay exhausted"), + "Expected 'replay exhausted' in: {display}" + ); + assert!(display.contains("99"), "Expected '99' in: {display}"); + } + #[test] fn test_internal_error_kind_index_out_of_bounds() { let kind = InternalErrorKind::IndexOutOfBounds(IndexOutOfBounds { diff --git a/src/frame_info.rs b/src/frame_info.rs index d16e10d6..6e2b072d 100644 --- a/src/frame_info.rs +++ b/src/frame_info.rs @@ -38,7 +38,7 @@ impl Default for GameState { #[derive(Debug, Copy, Clone, PartialEq, Eq)] pub struct PlayerInput where - I: Copy + Clone + PartialEq, + I: Copy + Clone + PartialEq + Eq, { /// The frame to which this info belongs to. [`Frame::NULL`] represents an invalid frame pub frame: Frame, @@ -46,7 +46,7 @@ where pub input: I, } -impl PlayerInput { +impl PlayerInput { /// Creates a new `PlayerInput` with the given frame and input. 
#[inline] #[must_use] @@ -224,7 +224,7 @@ mod player_input_tests { use super::*; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Debug)] struct TestInput { inp: u8, } @@ -386,7 +386,7 @@ mod player_input_tests { // ========================================== #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Debug)] struct ComplexInput { x: i16, y: i16, diff --git a/src/input_queue/mod.rs b/src/input_queue/mod.rs index a60ecb4e..68e46a56 100644 --- a/src/input_queue/mod.rs +++ b/src/input_queue/mod.rs @@ -800,7 +800,7 @@ mod input_queue_tests { use super::*; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: u8, } @@ -1741,7 +1741,7 @@ mod property_tests { use std::net::SocketAddr; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: u8, } @@ -2081,7 +2081,7 @@ mod kani_input_queue_proofs { use std::net::SocketAddr; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct TestInput { inp: u8, } diff --git a/src/input_queue/prediction.rs b/src/input_queue/prediction.rs index 27d3dc03..118891d9 100644 --- a/src/input_queue/prediction.rs +++ b/src/input_queue/prediction.rs @@ -133,7 +133,7 @@ mod tests { use serde::{Deserialize, Serialize}; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: u8, } diff --git a/src/lib.rs b/src/lib.rs index e9c2b391..7ad596dc 100644 --- a/src/lib.rs +++ b/src/lib.rs @@ -77,6 +77,7 @@ pub use 
network::chaos_socket::{ChaosConfig, ChaosConfigBuilder, ChaosSocket, Ch pub use network::messages::Message; pub use network::network_stats::NetworkStats; pub use network::udp_socket::UdpNonBlockingSocket; +pub use replay::{Replay, ReplayMetadata}; use serde::{de::DeserializeOwned, Serialize}; pub use sessions::builder::SessionBuilder; pub use sessions::config::{ @@ -86,6 +87,7 @@ pub use sessions::event_drain::EventDrain; pub use sessions::p2p_session::P2PSession; pub use sessions::p2p_spectator_session::SpectatorSession; pub use sessions::player_registry::PlayerRegistry; +pub use sessions::replay_session::ReplaySession; pub use sessions::session_trait::Session; pub use sessions::sync_health::SyncHealth; pub use sessions::sync_test_session::SyncTestSession; @@ -184,6 +186,8 @@ pub mod frame_info; pub mod hash; #[doc(hidden)] pub mod input_queue; +/// Replay recording and playback for deterministic match replays. +pub mod replay; /// Internal run-length encoding module for network compression. /// /// Provides RLE encoding/decoding that replaces the `bitfield-rle` crate dependency. @@ -222,6 +226,8 @@ pub mod sessions { pub mod p2p_spectator_session; #[doc(hidden)] pub mod player_registry; + /// Replay playback session for deterministic match replay. + pub mod replay_session; #[doc(hidden)] pub mod session_trait; #[doc(hidden)] @@ -1561,6 +1567,17 @@ where /// Milliseconds elapsed since synchronization started. elapsed_ms: u128, }, + /// A replay checksum mismatch was detected during validation playback. + /// This indicates a non-determinism bug: the same inputs produced different + /// game states between the original recording and the current replay. + ReplayDesync { + /// The frame where the checksum mismatch was detected. + frame: Frame, + /// The checksum from the original recording. + expected_checksum: u128, + /// The checksum computed during replay playback. 
+ actual_checksum: u128, + }, } impl std::fmt::Display for FortressEvent @@ -1610,6 +1627,17 @@ where Self::SyncTimeout { addr, elapsed_ms } => { write!(f, "SyncTimeout(addr={}, elapsed={}ms)", addr, elapsed_ms) }, + Self::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => write!( + f, + "ReplayDesync(frame={}, expected={:#x}, actual={:#x})", + frame.as_i32(), + expected_checksum, + actual_checksum + ), } } } @@ -1730,7 +1758,7 @@ impl std::fmt::Display for FortressRequest { /// # use serde::{Deserialize, Serialize}; /// # use std::net::SocketAddr; /// # -/// # #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +/// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] /// # struct MyInput(u8); /// # /// # #[derive(Clone, Default)] @@ -1790,7 +1818,7 @@ impl std::fmt::Display for FortressRequest { /// # use serde::{Deserialize, Serialize}; /// # use std::net::SocketAddr; /// # -/// # #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +/// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] /// # struct MyInput(u8); /// # #[derive(Clone, Default)] /// # struct MyState { frame: i32 } @@ -1863,6 +1891,13 @@ macro_rules! handle_requests { /// This trait bundles the generic types needed for a session. Implement this on /// a marker struct to configure your session types. /// +/// # Thread Safety +/// +/// When the `sync-send` feature is enabled, `Config` requires `Send + Sync` +/// and its associated types gain additional `Send + Sync` bounds. Without the +/// feature, these bounds are absent, allowing single-threaded or `!Send` usage +/// (e.g., WASM). +/// /// # Example /// /// ``` @@ -1871,7 +1906,7 @@ macro_rules! 
handle_requests { /// use std::net::SocketAddr; /// /// // Your game's input type -/// #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +/// #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] /// struct GameInput { /// buttons: u8, /// stick_x: i8, @@ -1907,7 +1942,7 @@ pub trait Config: 'static + Send + Sync { /// /// The implementation of [Default] is used for representing "no input" for /// a player, including when a player is disconnected. - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned + Send + Sync; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned + Send + Sync; /// The save state type for the session. type State: Clone + Send + Sync; @@ -1941,7 +1976,7 @@ pub trait Config: 'static { /// /// The implementation of [Default] is used for representing "no input" for /// a player, including when a player is disconnected. - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; /// The save state type for the session. 
type State; @@ -2341,6 +2376,41 @@ mod tests { assert!(display.contains("elapsed=10000ms")); } + #[test] + fn fortress_event_display_replay_desync() { + let event: FortressEvent = FortressEvent::ReplayDesync { + frame: Frame::new(42), + expected_checksum: 0xAAAA, + actual_checksum: 0xBBBB, + }; + let display = event.to_string(); + assert!(display.starts_with("ReplayDesync(")); + assert!(display.contains("frame=42")); + assert!(display.contains("expected=0xaaaa")); + assert!(display.contains("actual=0xbbbb")); + } + + #[test] + fn fortress_event_replay_desync_fields() { + let event: FortressEvent = FortressEvent::ReplayDesync { + frame: Frame::new(100), + expected_checksum: 0x1234_5678, + actual_checksum: 0xDEAD_BEEF, + }; + if let FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } = event + { + assert_eq!(frame, Frame::new(100)); + assert_eq!(expected_checksum, 0x1234_5678); + assert_eq!(actual_checksum, 0xDEAD_BEEF); + } else { + panic!("Expected ReplayDesync"); + } + } + // ========================================== // SessionState Display Tests // ========================================== diff --git a/src/network/protocol/event.rs b/src/network/protocol/event.rs index 687a834e..a802b490 100644 --- a/src/network/protocol/event.rs +++ b/src/network/protocol/event.rs @@ -14,7 +14,7 @@ use crate::{Config, PlayerHandle}; /// /// This type is re-exported in [`__internal`](crate::__internal) for testing and fuzzing. /// It is not part of the stable public API. -#[derive(Debug, Clone, PartialEq)] +#[derive(Debug, Clone, PartialEq, Eq)] pub enum Event where T: Config, @@ -99,7 +99,7 @@ mod tests { use std::net::SocketAddr; /// A minimal test config for testing Event. 
- #[derive(Debug, Clone, Copy, PartialEq, Default, serde::Serialize, serde::Deserialize)] + #[derive(Debug, Clone, Copy, PartialEq, Eq, Default, serde::Serialize, serde::Deserialize)] struct TestInput(u32); #[derive(Debug, Clone, Default)] @@ -557,7 +557,7 @@ mod kani_proofs { /// Minimal test configuration for Kani proofs. #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { value: u8, } diff --git a/src/network/protocol/input_bytes.rs b/src/network/protocol/input_bytes.rs index 47cd273e..5fddef0e 100644 --- a/src/network/protocol/input_bytes.rs +++ b/src/network/protocol/input_bytes.rs @@ -182,7 +182,7 @@ mod tests { // Test configuration #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: u32, } @@ -502,7 +502,7 @@ mod tests { // ========================================== #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct ComplexInput { x: i32, y: i32, diff --git a/src/network/protocol/mod.rs b/src/network/protocol/mod.rs index 9c3bacdc..6bf8f11d 100644 --- a/src/network/protocol/mod.rs +++ b/src/network/protocol/mod.rs @@ -981,7 +981,7 @@ mod tests { // Test configuration #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: u32, } @@ -2702,7 +2702,7 @@ mod property_tests { // ======================================================================== #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { inp: 
u32, } diff --git a/src/prelude.rs b/src/prelude.rs index c7045964..bfa4bcf7 100644 --- a/src/prelude.rs +++ b/src/prelude.rs @@ -13,7 +13,7 @@ //! //! The prelude includes: //! -//! - **Session types**: [`P2PSession`], [`SpectatorSession`], [`SyncTestSession`], [`SessionBuilder`] +//! - **Session types**: [`P2PSession`], [`SpectatorSession`], [`SyncTestSession`], [`ReplaySession`], [`SessionBuilder`] //! - **Core traits**: [`Config`], [`NonBlockingSocket`], [`Session`] //! - **Socket implementations**: [`UdpNonBlockingSocket`] //! - **Fundamental types**: [`Frame`], [`PlayerHandle`], [`PlayerType`], [`NULL_FRAME`] @@ -33,7 +33,7 @@ //! use std::net::SocketAddr; //! //! // Define your input type -//! #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +//! #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] //! struct MyInput { //! buttons: u8, //! } @@ -59,8 +59,12 @@ pub use crate::sessions::builder::SessionBuilder; pub use crate::sessions::p2p_session::P2PSession; pub use crate::sessions::p2p_spectator_session::SpectatorSession; +pub use crate::sessions::replay_session::ReplaySession; pub use crate::sessions::sync_test_session::SyncTestSession; +// Replay types +pub use crate::replay::{Replay, ReplayMetadata}; + // Core traits pub use crate::{Config, NonBlockingSocket, Session}; @@ -97,5 +101,10 @@ pub use crate::RequestVec; // Network monitoring pub use crate::NetworkStats; +// Replay types are re-exported above with session types + +// Session telemetry +pub use crate::telemetry::{CollectingTelemetry, SessionTelemetry, TelemetryEvent}; + // Common configuration types pub use crate::sessions::config::{ProtocolConfig, SyncConfig}; diff --git a/src/replay.rs b/src/replay.rs new file mode 100644 index 00000000..6d7093a1 --- /dev/null +++ b/src/replay.rs @@ -0,0 +1,752 @@ +//! Replay recording and playback for deterministic match replays. +//! +//! This module provides types for recording game inputs during a P2P session +//! 
and playing them back deterministically. A [`Replay`] captures all confirmed +//! inputs per frame, enabling exact reproduction of a match. +//! +//! # Recording +//! +//! Enable recording on a [`SessionBuilder`] with [`with_recording`], then +//! extract the replay after the session ends with [`P2PSession::into_replay`]. +//! +//! # Playback +//! +//! Create a [`ReplaySession`] from a [`Replay`] to play back the recorded +//! inputs frame by frame. +//! +//! # Serialization +//! +//! Replays can be serialized to and from bytes using [`Replay::to_bytes`] and +//! [`Replay::from_bytes`], which use the same deterministic bincode codec as +//! network messages. +//! +//! # Example +//! +//! ``` +//! use fortress_rollback::replay::{Replay, ReplayMetadata}; +//! use serde::{Deserialize, Serialize}; +//! +//! // Replays are parameterized on your Config's Input type +//! #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] +//! struct MyInput { buttons: u8 } +//! +//! let replay = Replay:: { +//! num_players: 2, +//! frames: vec![vec![MyInput { buttons: 0 }; 2]; 10], +//! checksums: vec![None; 10], +//! metadata: ReplayMetadata { +//! library_version: env!("CARGO_PKG_VERSION").to_string(), +//! num_players: 2, +//! total_frames: 10, +//! skipped_frames: 0, +//! }, +//! }; +//! +//! // Serialize roundtrip +//! let bytes = replay.to_bytes()?; +//! let restored = Replay::::from_bytes(&bytes)?; +//! assert_eq!(restored.num_players, 2); +//! assert_eq!(restored.frames.len(), 10); +//! # Ok::<(), fortress_rollback::network::codec::CodecError>(()) +//! ``` +//! +//! [`SessionBuilder`]: crate::SessionBuilder +//! [`with_recording`]: crate::SessionBuilder::with_recording +//! [`P2PSession::into_replay`]: crate::P2PSession::into_replay +//! 
[`ReplaySession`]: crate::sessions::replay_session::ReplaySession + +use std::fmt; + +use serde::{de::DeserializeOwned, Deserialize, Serialize}; + +use crate::error::InvalidRequestKind; +use crate::network::codec::{self, CodecResult}; +use crate::FortressResult; + +/// A recorded match that can be played back deterministically. +/// +/// Contains all confirmed inputs per frame along with optional checksums +/// for validation. The inner `Vec` of each frame entry contains one input +/// per player, ordered by player handle index. +/// +/// # Type Parameter +/// +/// `I` is the input type, which must satisfy the same bounds as +/// [`Config::Input`]: `Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned`. +/// When the `sync-send` feature is enabled, `Send + Sync` bounds are additionally required. +/// +/// # Example +/// +/// ``` +/// use fortress_rollback::replay::{Replay, ReplayMetadata}; +/// use serde::{Deserialize, Serialize}; +/// +/// #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] +/// struct GameInput { direction: u8 } +/// +/// let replay = Replay:: { +/// num_players: 2, +/// frames: vec![vec![GameInput::default(); 2]; 60], +/// checksums: vec![None; 60], +/// metadata: ReplayMetadata { +/// library_version: env!("CARGO_PKG_VERSION").to_string(), +/// num_players: 2, +/// total_frames: 60, +/// skipped_frames: 0, +/// }, +/// }; +/// assert_eq!(replay.total_frames(), 60); +/// ``` +/// +/// [`Config::Input`]: crate::Config::Input +#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)] // derive-bounds:ok(Eq via Config::Input) +pub struct Replay { + /// The number of players in this recorded match. + pub num_players: usize, + /// Confirmed inputs per frame. Each inner `Vec` has one entry per player. + pub frames: Vec>, + /// Optional per-frame checksums for desync detection during playback. 
+ /// + /// When recording is enabled on a P2P session, checksums are captured + /// from the saved game state at each confirmed frame. During validation + /// playback (via [`ReplaySession::new_with_validation`]), these checksums + /// are compared against freshly computed checksums to detect + /// non-determinism. + /// + /// [`ReplaySession::new_with_validation`]: crate::sessions::replay_session::ReplaySession::new_with_validation + pub checksums: Vec>, + /// Metadata about the replay. + pub metadata: ReplayMetadata, +} + +impl Replay +where + I: Serialize + DeserializeOwned, +{ + /// Serializes this replay to bytes using the deterministic bincode codec. + /// + /// # Errors + /// + /// Returns a [`CodecError`] if serialization fails. + /// + /// # Example + /// + /// ``` + /// use fortress_rollback::replay::{Replay, ReplayMetadata}; + /// + /// let replay = Replay:: { + /// num_players: 2, + /// frames: vec![vec![0u8; 2]; 5], + /// checksums: vec![None; 5], + /// metadata: ReplayMetadata { + /// library_version: env!("CARGO_PKG_VERSION").to_string(), + /// num_players: 2, + /// total_frames: 5, + /// skipped_frames: 0, + /// }, + /// }; + /// let bytes = replay.to_bytes()?; + /// assert!(!bytes.is_empty()); + /// # Ok::<(), fortress_rollback::network::codec::CodecError>(()) + /// ``` + /// + /// [`CodecError`]: crate::network::codec::CodecError + pub fn to_bytes(&self) -> CodecResult> { + codec::encode(self) + } + + /// Deserializes a replay from bytes using the deterministic bincode codec. + /// + /// # Errors + /// + /// Returns a [`CodecError`] if deserialization fails. 
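`to_bytes` and `from_bytes` form a roundtrip invariant over a deterministic encoding. As a language-neutral illustration only (Python's `struct` here, not the crate's bincode codec; the byte layout is invented for this sketch), a fixed little-endian layout makes that property directly testable:

```python
import struct


def encode_frames(frames: list[list[int]]) -> bytes:
    # Invented fixed layout: u32 frame count, then per frame a u8 input
    # count followed by one u8 per player input. Deterministic by
    # construction: same frames always yield the same bytes.
    out = struct.pack("<I", len(frames))
    for inputs in frames:
        out += struct.pack("<B", len(inputs)) + bytes(inputs)
    return out


def decode_frames(data: bytes) -> list[list[int]]:
    count = struct.unpack_from("<I", data, 0)[0]
    offset, frames = 4, []
    for _ in range(count):
        n = data[offset]
        frames.append(list(data[offset + 1 : offset + 1 + n]))
        offset += 1 + n
    return frames
```

The roundtrip `decode(encode(x)) == x` is exactly the shape of the serialization doctests above.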
+ /// + /// # Example + /// + /// ``` + /// use fortress_rollback::replay::{Replay, ReplayMetadata}; + /// + /// let replay = Replay:: { + /// num_players: 2, + /// frames: vec![vec![0u8; 2]; 5], + /// checksums: vec![None; 5], + /// metadata: ReplayMetadata { + /// library_version: env!("CARGO_PKG_VERSION").to_string(), + /// num_players: 2, + /// total_frames: 5, + /// skipped_frames: 0, + /// }, + /// }; + /// let bytes = replay.to_bytes()?; + /// let restored = Replay::::from_bytes(&bytes)?; + /// assert_eq!(restored.num_players, 2); + /// # Ok::<(), fortress_rollback::network::codec::CodecError>(()) + /// ``` + /// + /// [`CodecError`]: crate::network::codec::CodecError + pub fn from_bytes(bytes: &[u8]) -> CodecResult { + codec::decode_value(bytes) + } +} + +impl Replay { + /// Returns the total number of recorded frames. + /// + /// # Example + /// + /// ``` + /// use fortress_rollback::replay::{Replay, ReplayMetadata}; + /// + /// let replay = Replay:: { + /// num_players: 1, + /// frames: vec![vec![0u8]; 42], + /// checksums: vec![None; 42], + /// metadata: ReplayMetadata { + /// library_version: String::new(), + /// num_players: 1, + /// total_frames: 42, + /// skipped_frames: 0, + /// }, + /// }; + /// assert_eq!(replay.total_frames(), 42); + /// ``` + #[must_use] + pub fn total_frames(&self) -> usize { + self.frames.len() + } + + /// Validates the internal consistency of this replay. + /// + /// Checks that: + /// - `frames.len() == checksums.len()` + /// - All frames have exactly `num_players` inputs + /// - `metadata.num_players == num_players` + /// - `metadata.total_frames == frames.len()` + /// + /// # Errors + /// + /// Returns [`InvalidRequestKind::Custom`] if any consistency check fails. 
+ /// + /// # Example + /// + /// ``` + /// use fortress_rollback::replay::{Replay, ReplayMetadata}; + /// + /// let replay = Replay:: { + /// num_players: 2, + /// frames: vec![vec![0u8, 1u8]; 5], + /// checksums: vec![None; 5], + /// metadata: ReplayMetadata { + /// library_version: String::new(), + /// num_players: 2, + /// total_frames: 5, + /// skipped_frames: 0, + /// }, + /// }; + /// replay.validate()?; + /// # Ok::<(), fortress_rollback::FortressError>(()) + /// ``` + pub fn validate(&self) -> FortressResult<()> { + if self.frames.len() != self.checksums.len() { + return Err(InvalidRequestKind::Custom( + "replay validation failed: frames.len() != checksums.len()", + ) + .into()); + } + for frame in &self.frames { + if frame.len() != self.num_players { + return Err(InvalidRequestKind::Custom( + "replay validation failed: frame does not have exactly num_players inputs", + ) + .into()); + } + } + if self.metadata.num_players != self.num_players { + return Err(InvalidRequestKind::Custom( + "replay validation failed: metadata.num_players != num_players", + ) + .into()); + } + if self.metadata.total_frames != self.frames.len() { + return Err(InvalidRequestKind::Custom( + "replay validation failed: metadata.total_frames != frames.len()", + ) + .into()); + } + Ok(()) + } +} + +impl fmt::Display for Replay { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + write!( + f, + "Replay({} players, {} frames, v{})", + self.num_players, + self.frames.len(), + self.metadata.library_version, + ) + } +} + +impl fmt::Display for ReplayMetadata { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + if self.skipped_frames > 0 { + write!( + f, + "ReplayMetadata({} players, {} frames, {} skipped, v{})", + self.num_players, self.total_frames, self.skipped_frames, self.library_version, + ) + } else { + write!( + f, + "ReplayMetadata({} players, {} frames, v{})", + self.num_players, self.total_frames, self.library_version, + ) + } + } +} + +/// Metadata about a 
recorded replay. +/// +/// Contains information about the library version, player count, and +/// total frames for compatibility checks and display purposes. +/// +/// # Example +/// +/// ``` +/// use fortress_rollback::replay::ReplayMetadata; +/// +/// let meta = ReplayMetadata { +/// library_version: env!("CARGO_PKG_VERSION").to_string(), +/// num_players: 2, +/// total_frames: 300, +/// skipped_frames: 0, +/// }; +/// assert_eq!(meta.num_players, 2); +/// ``` +#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)] +pub struct ReplayMetadata { + /// The version of the fortress-rollback library used to record this replay. + pub library_version: String, + /// The number of players in the recorded match. + /// + /// **Note:** This must be consistent with [`Replay::num_players`]. The + /// [`Replay::validate`] method checks this invariant. + pub num_players: usize, + /// The total number of frames in the replay. + /// + /// **Note:** This must be consistent with `Replay::frames.len()`. The + /// [`Replay::validate`] method checks this invariant. This field is + /// useful when metadata is serialized independently without loading + /// all frame data. + pub total_frames: usize, + /// The number of frames that were skipped during recording due to + /// input retrieval failures. + /// + /// When a frame's confirmed inputs cannot be retrieved (e.g., because + /// the frame was already discarded from the input queue), default + /// placeholder inputs are recorded instead and this counter is + /// incremented. A non-zero value indicates the replay has gaps where + /// the real inputs were unavailable. + /// + /// Defaults to `0` when deserializing replays that were recorded before + /// this field was added. + #[serde(default)] + pub skipped_frames: usize, +} + +/// Accumulates confirmed inputs during a P2P session for replay recording. +/// +/// This is an internal type used by [`P2PSession`] when recording is enabled. 
+/// It tracks confirmed inputs frame by frame and can produce a [`Replay`] +/// when the session ends. +/// +/// [`P2PSession`]: crate::P2PSession +#[derive(Clone, Debug)] +pub(crate) struct ReplayRecorder { + num_players: usize, + frames: Vec>, + checksums: Vec>, + skipped_frames: usize, +} + +impl ReplayRecorder { + /// Creates a new recorder for the given number of players. + pub(crate) fn new(num_players: usize) -> Self { + Self { + num_players, + frames: Vec::new(), + checksums: Vec::new(), + skipped_frames: 0, + } + } + + /// Records a single frame's confirmed inputs. + pub(crate) fn record_frame(&mut self, inputs: Vec, checksum: Option) { + self.frames.push(inputs); + self.checksums.push(checksum); + } + + /// Records a skipped frame with default placeholder inputs and no checksum. + /// + /// This maintains frame index alignment in the replay when the real + /// inputs for a frame could not be retrieved. The `skipped_frames` + /// counter is incremented so consumers can detect recording gaps. + pub(crate) fn record_skipped_frame(&mut self) + where + I: Default + Clone, + { + self.frames.push(vec![I::default(); self.num_players]); + self.checksums.push(None); + self.skipped_frames = self.skipped_frames.saturating_add(1); + } + + /// Returns the number of frames recorded so far. + #[cfg(test)] + pub(crate) fn recorded_frames(&self) -> usize { + self.frames.len() + } + + /// Returns the number of skipped frames recorded so far. + #[cfg(test)] + pub(crate) fn skipped_frames(&self) -> usize { + self.skipped_frames + } + + /// Consumes this recorder and produces a [`Replay`]. 
+ pub(crate) fn into_replay(self) -> Replay { + let total_frames = self.frames.len(); + Replay { + num_players: self.num_players, + frames: self.frames, + checksums: self.checksums, + metadata: ReplayMetadata { + library_version: env!("CARGO_PKG_VERSION").to_string(), + num_players: self.num_players, + total_frames, + skipped_frames: self.skipped_frames, + }, + } + } +} + +#[cfg(test)] +#[allow( + clippy::panic, + clippy::unwrap_used, + clippy::expect_used, + clippy::indexing_slicing +)] +mod tests { + use super::*; + + #[test] + fn replay_construction_basic() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2], vec![3, 4]], + checksums: vec![None, Some(42)], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 2, + skipped_frames: 0, + }, + }; + assert_eq!(replay.num_players, 2); + assert_eq!(replay.total_frames(), 2); + assert_eq!(replay.frames[0], vec![1, 2]); + assert_eq!(replay.checksums[1], Some(42)); + } + + #[test] + fn replay_serialization_roundtrip() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![10, 20], vec![30, 40], vec![50, 60]], + checksums: vec![None, Some(123), None], + metadata: ReplayMetadata { + library_version: "0.7.0".to_string(), + num_players: 2, + total_frames: 3, + skipped_frames: 0, + }, + }; + + let bytes = replay.to_bytes().unwrap(); + let restored = Replay::::from_bytes(&bytes).unwrap(); + + assert_eq!(restored.num_players, replay.num_players); + assert_eq!(restored.frames, replay.frames); + assert_eq!(restored.checksums, replay.checksums); + assert_eq!( + restored.metadata.library_version, + replay.metadata.library_version + ); + assert_eq!(restored.metadata.total_frames, replay.metadata.total_frames); + } + + #[test] + fn replay_serialization_empty() { + let replay = Replay:: { + num_players: 1, + frames: vec![], + checksums: vec![], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 1, + total_frames: 0, + 
skipped_frames: 0, + }, + }; + + let bytes = replay.to_bytes().unwrap(); + let restored = Replay::::from_bytes(&bytes).unwrap(); + assert_eq!(restored.total_frames(), 0); + assert!(restored.frames.is_empty()); + } + + #[test] + fn replay_from_invalid_bytes_fails() { + let result = Replay::::from_bytes(&[0xFF, 0xFF, 0xFF]); + assert!(result.is_err()); + } + + #[test] + fn replay_metadata_fields() { + let meta = ReplayMetadata { + library_version: "1.2.3".to_string(), + num_players: 4, + total_frames: 1000, + skipped_frames: 0, + }; + assert_eq!(meta.library_version, "1.2.3"); + assert_eq!(meta.num_players, 4); + assert_eq!(meta.total_frames, 1000); + } + + #[test] + fn replay_recorder_basic() { + let mut recorder = ReplayRecorder::::new(2); + assert_eq!(recorder.recorded_frames(), 0); + + recorder.record_frame(vec![1, 2], None); + recorder.record_frame(vec![3, 4], Some(99)); + assert_eq!(recorder.recorded_frames(), 2); + + let replay = recorder.into_replay(); + assert_eq!(replay.num_players, 2); + assert_eq!(replay.total_frames(), 2); + assert_eq!(replay.frames[0], vec![1, 2]); + assert_eq!(replay.frames[1], vec![3, 4]); + assert_eq!(replay.checksums[0], None); + assert_eq!(replay.checksums[1], Some(99)); + assert_eq!(replay.metadata.num_players, 2); + assert_eq!(replay.metadata.total_frames, 2); + } + + #[test] + fn replay_recorder_empty_into_replay() { + let recorder = ReplayRecorder::::new(3); + let replay = recorder.into_replay(); + assert_eq!(replay.num_players, 3); + assert_eq!(replay.total_frames(), 0); + assert!(replay.frames.is_empty()); + } + + #[test] + fn replay_clone() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2]], + checksums: vec![None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 1, + skipped_frames: 0, + }, + }; + let cloned = replay.clone(); + assert_eq!(cloned.num_players, replay.num_players); + assert_eq!(cloned.frames, replay.frames); + } + + #[test] + fn 
replay_serialization_deterministic() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![10, 20], vec![30, 40]], + checksums: vec![None, Some(42)], + metadata: ReplayMetadata { + library_version: "0.7.0".to_string(), + num_players: 2, + total_frames: 2, + skipped_frames: 0, + }, + }; + + let bytes1 = replay.to_bytes().unwrap(); + let bytes2 = replay.to_bytes().unwrap(); + assert_eq!(bytes1, bytes2, "Replay serialization must be deterministic"); + } + + #[test] + fn validate_valid_replay_succeeds() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2], vec![3, 4]], + checksums: vec![None, None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 2, + skipped_frames: 0, + }, + }; + replay.validate().unwrap(); + } + + #[test] + fn validate_empty_replay_succeeds() { + let replay = Replay:: { + num_players: 1, + frames: vec![], + checksums: vec![], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 1, + total_frames: 0, + skipped_frames: 0, + }, + }; + replay.validate().unwrap(); + } + + #[test] + fn validate_frames_checksums_mismatch_fails() { + let replay = Replay:: { + num_players: 1, + frames: vec![vec![1], vec![2]], + checksums: vec![None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 1, + total_frames: 2, + skipped_frames: 0, + }, + }; + assert!(replay.validate().is_err()); + } + + #[test] + fn validate_wrong_num_inputs_per_frame_fails() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2], vec![3]], // second frame has only 1 input + checksums: vec![None, None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 2, + skipped_frames: 0, + }, + }; + assert!(replay.validate().is_err()); + } + + #[test] + fn validate_metadata_num_players_mismatch_fails() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2]], + checksums: 
vec![None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 3, // mismatch + total_frames: 1, + skipped_frames: 0, + }, + }; + assert!(replay.validate().is_err()); + } + + #[test] + fn validate_metadata_total_frames_mismatch_fails() { + let replay = Replay:: { + num_players: 1, + frames: vec![vec![1]], + checksums: vec![None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 1, + total_frames: 99, // mismatch + skipped_frames: 0, + }, + }; + assert!(replay.validate().is_err()); + } + + #[test] + fn replay_partial_eq() { + let replay1 = Replay:: { + num_players: 2, + frames: vec![vec![1, 2]], + checksums: vec![None], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 1, + skipped_frames: 0, + }, + }; + let replay2 = replay1.clone(); + assert_eq!(replay1, replay2); + } + + #[test] + fn replay_display() { + let replay = Replay:: { + num_players: 2, + frames: vec![vec![1, 2]; 10], + checksums: vec![None; 10], + metadata: ReplayMetadata { + library_version: "0.7.0".to_string(), + num_players: 2, + total_frames: 10, + skipped_frames: 0, + }, + }; + let display = format!("{}", replay); + assert!(display.contains("2 players")); + assert!(display.contains("10 frames")); + assert!(display.contains("0.7.0")); + } + + #[test] + fn replay_metadata_display() { + let meta = ReplayMetadata { + library_version: "1.0.0".to_string(), + num_players: 4, + total_frames: 100, + skipped_frames: 0, + }; + let display = format!("{}", meta); + assert!(display.contains("4 players")); + assert!(display.contains("100 frames")); + assert!(display.contains("1.0.0")); + } + + #[test] + fn replay_metadata_partial_eq() { + let meta1 = ReplayMetadata { + library_version: "test".to_string(), + num_players: 2, + total_frames: 10, + skipped_frames: 0, + }; + let meta2 = meta1.clone(); + assert_eq!(meta1, meta2); + } +} diff --git a/src/sessions/builder.rs b/src/sessions/builder.rs 
index 66c896e0..25204d98 100644
--- a/src/sessions/builder.rs
+++ b/src/sessions/builder.rs
@@ -6,8 +6,10 @@ use web_time::Duration;
 use crate::{
     error::{InvalidRequestKind, SerializationErrorKind},
     network::protocol::UdpProtocol,
+    replay::Replay,
     sessions::player_registry::PlayerRegistry,
-    telemetry::ViolationObserver,
+    sessions::replay_session::ReplaySession,
+    telemetry::{SessionTelemetry, ViolationObserver},
     time_sync::TimeSyncConfig,
     Config, DesyncDetection, FortressError, NonBlockingSocket, P2PSession, PlayerHandle,
     PlayerType, SpectatorSession, SyncTestSession,
@@ -101,6 +103,10 @@ where
     input_queue_config: InputQueueConfig,
     /// Maximum number of events to queue before oldest are dropped.
     event_queue_size: usize,
+    /// Whether to enable replay recording during P2P sessions.
+    recording: bool,
+    /// Optional telemetry observer for session performance events.
+    telemetry: Option<Arc<dyn SessionTelemetry>>,
 }
 
 impl<T: Config> std::fmt::Debug for SessionBuilder<T> {
@@ -128,6 +134,8 @@ impl<T: Config> std::fmt::Debug for SessionBuilder<T> {
             time_sync_config,
             input_queue_config,
             event_queue_size,
+            recording,
+            telemetry,
         } = self;
 
         f.debug_struct("SessionBuilder")
@@ -145,12 +153,14 @@ impl<T: Config> std::fmt::Debug for SessionBuilder<T> {
             .field("max_frames_behind", max_frames_behind)
             .field("catchup_speed", catchup_speed)
             .field("has_violation_observer", &violation_observer.is_some())
+            .field("has_telemetry", &telemetry.is_some())
             .field("sync_config", sync_config)
             .field("protocol_config", protocol_config)
             .field("spectator_config", spectator_config)
             .field("time_sync_config", time_sync_config)
             .field("input_queue_config", input_queue_config)
             .field("event_queue_size", event_queue_size)
+            .field("recording", recording)
             .finish()
     }
 }
@@ -185,6 +195,8 @@ impl<T: Config> SessionBuilder<T> {
             time_sync_config: TimeSyncConfig::default(),
             input_queue_config: InputQueueConfig::default(),
             event_queue_size: DEFAULT_EVENT_QUEUE_SIZE,
+            recording: false,
+            telemetry: None,
         }
     }
@@ -678,6 +690,36 @@ impl<T: Config> SessionBuilder<T> {
         Ok(self)
     }
 
+    /// Enables or disables replay recording during a P2P session.
+    ///
+    /// When recording is enabled, the [`P2PSession`] will capture all confirmed
+    /// inputs as they are processed. After the session ends, call
+    /// [`P2PSession::into_replay`] to extract the recorded [`Replay`].
+    ///
+    /// Recording is disabled by default.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::prelude::*;
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Debug)]
+    /// # struct TestConfig;
+    /// # impl Config for TestConfig {
+    /// #     type Input = u8;
+    /// #     type State = u8;
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let builder = SessionBuilder::<TestConfig>::new()
+    ///     .with_recording(true);
+    /// ```
+    ///
+    /// [`Replay`]: crate::replay::Replay
+    pub fn with_recording(mut self, enabled: bool) -> Self {
+        self.recording = enabled;
+        self
+    }
+
     /// Sets the FPS this session is used with. This influences estimations for frame synchronization between sessions.
     /// # Errors
     /// - Returns a [`FortressError`] if the fps is 0
@@ -764,6 +806,36 @@
         self
     }
 
+    /// Attaches a telemetry observer to receive session performance events.
+    ///
+    /// The telemetry observer will receive callbacks for rollbacks, prediction
+    /// misses, frame advances, and network statistics during P2P sessions.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// use fortress_rollback::prelude::*;
+    /// use fortress_rollback::telemetry::{CollectingTelemetry, SessionTelemetry};
+    /// use std::sync::Arc;
+    ///
+    /// # struct MyConfig;
+    /// # impl Config for MyConfig {
+    /// #     type Input = u8;
+    /// #     type State = ();
+    /// #     type Address = std::net::SocketAddr;
+    /// # }
+    /// let telemetry = Arc::new(CollectingTelemetry::new());
+    /// let builder = SessionBuilder::<MyConfig>::new()
+    ///     .with_telemetry(telemetry.clone());
+    ///
+    /// // After session operations, inspect telemetry events
+    /// // assert!(telemetry.events().is_empty());
+    /// ```
+    pub fn with_telemetry(mut self, telemetry: Arc<dyn SessionTelemetry>) -> Self {
+        self.telemetry = Some(telemetry);
+        self
+    }
+
     // =========================================================================
     // Session Presets
     // =========================================================================
@@ -965,6 +1037,8 @@ impl<T: Config> SessionBuilder<T> {
             self.protocol_config,
             self.input_queue_config.queue_length,
             self.event_queue_size,
+            self.recording,
+            self.telemetry,
         ))
     }
@@ -1040,6 +1114,116 @@ impl<T: Config> SessionBuilder<T> {
         ))
     }
 
+    /// Creates a replay playback session from a recorded [`Replay`].
+    ///
+    /// The returned [`ReplaySession`] will play back the recorded inputs
+    /// frame by frame when [`advance_frame`](crate::Session::advance_frame)
+    /// is called. No network, save/load, or local input is needed.
+    ///
+    /// The builder is consumed but most configuration is ignored since
+    /// replay playback does not require networking or synchronization.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::prelude::*;
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Debug)]
+    /// # struct TestConfig;
+    /// # impl Config for TestConfig {
+    /// #     type Input = u8;
+    /// #     type State = u8;
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay::<u8> {
+    ///     num_players: 2,
+    ///     frames: vec![vec![0, 0]; 10],
+    ///     checksums: vec![None; 10],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: env!("CARGO_PKG_VERSION").to_string(),
+    ///         num_players: 2,
+    ///         total_frames: 10,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let session = SessionBuilder::<TestConfig>::new()
+    ///     .start_replay_session(replay)?;
+    /// assert!(!session.is_complete());
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    ///
+    /// [`Replay`]: crate::replay::Replay
+    /// [`ReplaySession`]: crate::sessions::replay_session::ReplaySession
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if the replay fails internal consistency validation
+    /// (see [`Replay::validate`]).
+    pub fn start_replay_session(
+        self,
+        replay: Replay<T::Input>,
+    ) -> crate::FortressResult<ReplaySession<T>> {
+        ReplaySession::new(replay)
+    }
+
+    /// Creates a replay playback session with checksum validation enabled.
+    ///
+    /// When validation is enabled, the session emits [`FortressRequest::SaveGameState`]
+    /// requests before each [`FortressRequest::AdvanceFrame`], allowing the application
+    /// to compute checksums. These checksums are compared against the checksums stored
+    /// in the replay to detect non-determinism.
+    ///
+    /// If a mismatch is detected, a [`FortressEvent::ReplayDesync`] event is emitted
+    /// with the frame number and both checksums.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::prelude::*;
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Debug)]
+    /// # struct TestConfig;
+    /// # impl Config for TestConfig {
+    /// #     type Input = u8;
+    /// #     type State = u8;
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay::<u8> {
+    ///     num_players: 2,
+    ///     frames: vec![vec![0, 0]; 10],
+    ///     checksums: vec![Some(0x1234); 10],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: env!("CARGO_PKG_VERSION").to_string(),
+    ///         num_players: 2,
+    ///         total_frames: 10,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let session = SessionBuilder::<TestConfig>::new()
+    ///     .start_replay_session_with_validation(replay)?;
+    /// assert!(!session.is_complete());
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    ///
+    /// [`Replay`]: crate::replay::Replay
+    /// [`ReplaySession`]: crate::sessions::replay_session::ReplaySession
+    /// [`FortressRequest::SaveGameState`]: crate::FortressRequest::SaveGameState
+    /// [`FortressRequest::AdvanceFrame`]: crate::FortressRequest::AdvanceFrame
+    /// [`FortressEvent::ReplayDesync`]: crate::FortressEvent::ReplayDesync
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if the replay fails internal consistency validation
+    /// (see [`Replay::validate`]).
+    pub fn start_replay_session_with_validation(
+        self,
+        replay: Replay<T::Input>,
+    ) -> crate::FortressResult<ReplaySession<T>> {
+        ReplaySession::new_with_validation(replay)
+    }
+
     fn create_endpoint(
         &self,
         handles: Vec<PlayerHandle>,
@@ -1079,7 +1263,7 @@ mod tests {
     use std::net::SocketAddr;
 
     #[repr(C)]
-    #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)]
+    #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
     struct TestInput {
         inp: u8,
     }
@@ -1521,4 +1705,42 @@
         assert_eq!(builder.max_prediction, 6);
         assert_eq!(builder.desync_detection, DesyncDetection::Off);
     }
+
+    #[test]
+    fn builder_start_replay_session_with_validation() {
+        use crate::replay::{Replay, ReplayMetadata};
+
+        let replay = Replay::<TestInput> {
+            num_players: 2,
+            frames: vec![
+                vec![TestInput { inp: 0 }, TestInput { inp: 1 }],
+                vec![TestInput { inp: 2 }, TestInput { inp: 3 }],
+            ],
+            checksums: vec![Some(0x1234), Some(0x5678)],
+            metadata: ReplayMetadata {
+                library_version: "test".to_string(),
+                num_players: 2,
+                total_frames: 2,
+                skipped_frames: 0,
+            },
+        };
+
+        let mut session = SessionBuilder::<TestConfig>::new()
+            .start_replay_session_with_validation(replay)
+            .unwrap();
+
+        assert!(!session.is_complete());
+
+        // Validation mode should emit SaveGameState before AdvanceFrame
+        let requests = session.advance_frame().unwrap();
+        assert_eq!(requests.len(), 2);
+        assert!(matches!(
+            &requests[0],
+            crate::FortressRequest::SaveGameState { .. }
+        ));
+        assert!(matches!(
+            &requests[1],
+            crate::FortressRequest::AdvanceFrame { .. }
} + )); + } } diff --git a/src/sessions/p2p_session.rs b/src/sessions/p2p_session.rs index adc4b23d..3a1fc05f 100644 --- a/src/sessions/p2p_session.rs +++ b/src/sessions/p2p_session.rs @@ -2,13 +2,15 @@ use crate::error::{FortressError, InternalErrorKind, InvalidRequestKind}; use crate::frame_info::PlayerInput; use crate::network::messages::ConnectionStatus; use crate::network::network_stats::NetworkStats; +use crate::replay::{Replay, ReplayRecorder}; use crate::sessions::config::{ProtocolConfig, SaveMode}; use crate::sessions::player_registry::PlayerRegistry; use crate::sessions::session_trait::Session; use crate::sessions::sync_health::SyncHealth; use crate::sync_layer::SyncLayer; use crate::telemetry::{ - InvariantChecker, InvariantViolation, ViolationKind, ViolationObserver, ViolationSeverity, + InvariantChecker, InvariantViolation, SessionTelemetry, ViolationKind, ViolationObserver, + ViolationSeverity, }; use crate::DesyncDetection; use crate::HandleVec; @@ -108,10 +110,16 @@ where last_verified_frame: Option, /// Optional observer for specification violations. violation_observer: Option>, + /// Optional telemetry observer for session performance events. + telemetry: Option>, /// Protocol configuration for network behavior. protocol_config: ProtocolConfig, /// Maximum number of events to queue before oldest are dropped. max_event_queue_size: usize, + /// Optional replay recorder for capturing confirmed inputs. + recording: Option>, + /// The last frame recorded to the replay recorder. 
+ last_recorded_frame: Frame, } impl P2PSession { @@ -133,6 +141,8 @@ impl P2PSession { protocol_config: ProtocolConfig, queue_length: usize, event_queue_size: usize, + recording: bool, + telemetry: Option>, ) -> Self { // local connection status let local_connect_status = vec![ConnectionStatus::default(); num_players]; @@ -196,8 +206,11 @@ impl P2PSession { last_sent_checksum_frame: Frame::NULL, last_verified_frame: None, violation_observer, + telemetry, protocol_config, max_event_queue_size: event_queue_size, + recording: recording.then(|| ReplayRecorder::new(num_players)), + last_recorded_frame: Frame::NULL, } } @@ -301,6 +314,14 @@ impl P2PSession { .check_simulation_consistency(self.disconnect_frame); // if we have an incorrect frame, then we need to rollback if first_incorrect != Frame::NULL { + if let Some(telemetry) = &self.telemetry { + for (player, frame) in self + .sync_layer + .players_with_incorrect_predictions(self.disconnect_frame) + { + telemetry.on_prediction_miss(player, frame); + } + } self.adjust_gamestate(first_incorrect, confirmed_frame, &mut requests)?; self.disconnect_frame = Frame::NULL; } @@ -322,6 +343,9 @@ impl P2PSession { // send confirmed inputs to spectators before throwing them away self.send_confirmed_inputs_to_spectators(confirmed_frame)?; + // record confirmed inputs to the replay recorder before they are discarded + self.record_confirmed_inputs(confirmed_frame); + // set the last confirmed frame and discard all saved inputs before that frame self.sync_layer .set_last_confirmed_frame(confirmed_frame, self.save_mode); @@ -416,6 +440,10 @@ impl P2PSession { // clear the local inputs after advancing the frame to allow new inputs to be ingested self.local_inputs.clear(); requests.push(FortressRequest::AdvanceFrame { inputs }); + + if let Some(telemetry) = &self.telemetry { + telemetry.on_frame_advance(self.sync_layer.current_frame()); + } } else { debug!( "Prediction Threshold reached. 
Skipping on frame {}", @@ -469,6 +497,19 @@ impl P2PSession { self.handle_event(event, handles, addr); } + // emit network stats telemetry for each running remote endpoint + if let Some(telemetry) = &self.telemetry { + for endpoint in self.player_reg.remotes.values() { + if endpoint.is_running() { + if let Ok(stats) = endpoint.network_stats() { + for &handle in endpoint.handles().iter() { + telemetry.on_network_stats(handle, &stats); + } + } + } + } + } + // send all queued packets for endpoint in self.player_reg.remotes.values_mut() { endpoint.send_all_messages(&mut self.socket); @@ -760,6 +801,142 @@ impl P2PSession { self.player_reg.num_spectators() } + /// Returns `true` if replay recording is enabled for this session. + /// + /// Recording is enabled via [`SessionBuilder::with_recording`]. + /// + /// # Example + /// + /// ```ignore + /// if session.is_recording() { + /// println!("Recording inputs for replay"); + /// } + /// ``` + /// + /// [`SessionBuilder::with_recording`]: crate::SessionBuilder::with_recording + #[must_use] + pub fn is_recording(&self) -> bool { + self.recording.is_some() + } + + /// Consumes this session and returns the recorded [`Replay`], if recording + /// was enabled. + /// + /// Returns `Ok(Replay)` if recording was enabled via + /// [`SessionBuilder::with_recording`], or an error if recording was not + /// enabled. + /// + /// # Errors + /// + /// Returns [`InvalidRequestKind::NotSupported`] if recording was not enabled. 
+ /// + /// # Example + /// + /// ```ignore + /// let replay = session.into_replay()?; + /// let bytes = replay.to_bytes()?; + /// std::fs::write("match.replay", bytes)?; + /// ``` + /// + /// [`SessionBuilder::with_recording`]: crate::SessionBuilder::with_recording + pub fn into_replay(self) -> FortressResult> { + self.recording + .map(ReplayRecorder::into_replay) + .ok_or_else(|| { + InvalidRequestKind::NotSupported { + operation: "into_replay (recording not enabled)", + } + .into() + }) + } + + /// Extracts the recorded [`Replay`] without consuming the session. + /// + /// After calling this, the session continues but recording is disabled + /// (the recorder has been taken). + /// + /// # Errors + /// + /// Returns [`InvalidRequestKind::NotSupported`] if recording was not enabled + /// or has already been taken. + /// + /// # Example + /// + /// ```ignore + /// let replay = session.take_replay()?; + /// let bytes = replay.to_bytes()?; + /// // Session continues without recording + /// ``` + pub fn take_replay(&mut self) -> FortressResult> { + self.recording + .take() + .map(ReplayRecorder::into_replay) + .ok_or_else(|| { + InvalidRequestKind::NotSupported { + operation: "take_replay (recording not enabled or already taken)", + } + .into() + }) + } + + /// Records confirmed inputs up to the given frame into the replay recorder. + /// + /// When a frame's inputs cannot be retrieved (e.g., because the frame was + /// already discarded from the input queue), default placeholder inputs are + /// recorded to maintain frame index alignment, and the recorder's + /// `skipped_frames` counter is incremented. Recording continues with + /// subsequent frames rather than stopping at the first failure. + fn record_confirmed_inputs(&mut self, confirmed_frame: Frame) { + if self.recording.is_none() { + return; + } + + // Collect inputs first, then record them, to avoid overlapping borrows. + // Entries are tagged as either real inputs or skipped placeholders. 
+ let mut frames_to_record: Vec<(Frame, Option>)> = Vec::new(); + let mut frame_to_record = self.last_recorded_frame.saturating_next(); + while frame_to_record <= confirmed_frame { + match self.confirmed_inputs_for_frame(frame_to_record) { + Ok(inputs) => { + frames_to_record.push((frame_to_record, Some(inputs))); + }, + Err(err) => { + // If we can't get inputs for this frame, record a placeholder + // and continue collecting subsequent frames. This maintains + // frame index alignment in the replay. + report_violation!( + ViolationSeverity::Warning, + ViolationKind::InputQueue, + "record_confirmed_inputs: failed to get inputs for frame {} (skipping): {}", + frame_to_record, + err + ); + frames_to_record.push((frame_to_record, None)); + }, + } + frame_to_record = frame_to_record.saturating_next(); + } + + // Now record all collected frames + if let Some(recorder) = self.recording.as_mut() { + for (frame, maybe_inputs) in frames_to_record { + match maybe_inputs { + Some(inputs) => { + let checksum = self + .sync_layer + .saved_state_by_frame(frame) + .and_then(|cell| cell.checksum()); + recorder.record_frame(inputs, checksum); + }, + None => { + recorder.record_skipped_frame(); + }, + } + self.last_recorded_frame = frame; + } + } + } + /// Returns an iterator over local player handles. /// /// This is a zero-allocation alternative to [`local_player_handles`]. @@ -1130,6 +1307,20 @@ impl P2PSession { self.violation_observer.as_ref() } + /// Returns a reference to the telemetry observer, if one is attached. + /// + /// In tests, the typical pattern is to keep a separate + /// Arc<[CollectingTelemetry]> clone and call methods on it directly, + /// rather than going through this accessor. This method is primarily useful + /// when you need to confirm that a telemetry observer is attached or to pass + /// the trait object to other code. 
+    ///
+    /// [CollectingTelemetry]: crate::telemetry::CollectingTelemetry
+    #[must_use]
+    pub fn telemetry(&self) -> Option<&Arc<dyn TelemetryObserver>> {
+        self.telemetry.as_ref()
+    }
+
     /// Returns the synchronization health status for a specific remote peer.
     ///
     /// This is the **primary API for checking if a session is synchronized** before
@@ -1420,6 +1611,12 @@ impl<T: Config> P2PSession<T> {
 
         let count = current_frame - frame_to_load;
 
+        if let Ok(depth) = usize::try_from(count) {
+            if let Some(telemetry) = &self.telemetry {
+                telemetry.on_rollback(depth, frame_to_load);
+            }
+        }
+
         // request to load that frame
         debug!(
             "Pushing request to load frame {} (current frame {})",
@@ -1914,6 +2111,8 @@ impl<T: Config> fmt::Debug for P2PSession<T> {
             .field("current_frame", &self.sync_layer.current_frame())
             .field("frames_ahead", &self.frames_ahead)
             .field("desync_detection", &self.desync_detection)
+            .field("is_recording", &self.recording.is_some())
+            .field("has_telemetry", &self.telemetry.is_some())
             .finish_non_exhaustive()
     }
 }
@@ -2935,4 +3134,109 @@ mod tests {
         assert_eq!(session.num_spectators(), 1);
         assert!(!session.spectator_handles().is_empty());
     }
+
+    // ==========================================
+    // Recording Tests
+    // ==========================================
+
+    fn create_local_only_session_with_recording() -> P2PSession<TestConfig> {
+        SessionBuilder::new()
+            .with_num_players(1)
+            .unwrap()
+            .add_player(PlayerType::Local, PlayerHandle::new(0))
+            .expect("Failed to add player")
+            .with_recording(true)
+            .start_p2p_session(DummySocket)
+            .expect("Failed to create session")
+    }
+
+    #[test]
+    fn is_recording_true_when_enabled() {
+        let session = create_local_only_session_with_recording();
+        assert!(session.is_recording());
+    }
+
+    #[test]
+    fn is_recording_false_by_default() {
+        let session = create_local_only_session();
+        assert!(!session.is_recording());
+    }
+
+    #[test]
+    fn into_replay_without_recording_returns_error() {
+        let session = create_local_only_session();
+        let result = session.into_replay();
+        assert!(result.is_err());
+    }
+
+    #[test]
+    fn take_replay_without_recording_returns_error() {
+        let mut session = create_local_only_session();
+        let result = session.take_replay();
+        assert!(result.is_err());
+    }
+
+    #[test]
+    fn record_confirmed_inputs_advances_past_failed_frame() {
+        // A fresh session with recording enabled: confirmed_frame() is Frame::NULL,
+        // so requesting inputs for Frame(0) will fail. The bug was that
+        // last_recorded_frame was never advanced on error, causing infinite retries.
+        let mut session = create_local_only_session_with_recording();
+        assert_eq!(session.last_recorded_frame, Frame::NULL);
+
+        // Attempt to record up to Frame(5). All frames should fail since no inputs
+        // have been confirmed, but last_recorded_frame must advance past ALL
+        // failed frames (continue, not break).
+        let target_frame = Frame::new(5);
+        session.record_confirmed_inputs(target_frame);
+
+        // last_recorded_frame should have advanced to the target frame,
+        // because continue (not break) processes every frame in the range.
+        assert_eq!(
+            session.last_recorded_frame, target_frame,
+            "last_recorded_frame should advance to the target frame, was {:?}",
+            session.last_recorded_frame
+        );
+
+        // The recorder should have placeholder entries for all 6 frames
+        // (Frame(0) through Frame(5)), maintaining frame index alignment.
+        let recorder = session.recording.as_ref().unwrap();
+        assert_eq!(
+            recorder.recorded_frames(),
+            6,
+            "recorder should have placeholder entries for all attempted frames"
+        );
+
+        // All frames should be marked as skipped since none had real inputs.
+        assert_eq!(
+            recorder.skipped_frames(),
+            6,
+            "all 6 frames should be counted as skipped"
+        );
+
+        // Calling again should not re-attempt the same frames (no infinite loop).
+        let previous = session.last_recorded_frame;
+        session.record_confirmed_inputs(target_frame);
+        assert_eq!(
+            session.last_recorded_frame, previous,
+            "last_recorded_frame should not change when called again with the same target"
+        );
+
+        // Verify the replay produced has correct metadata and frame alignment.
+        let replay = session.into_replay().unwrap();
+        assert_eq!(replay.frames.len(), 6);
+        assert_eq!(replay.checksums.len(), 6);
+        assert_eq!(replay.metadata.skipped_frames, 6);
+        // All frames should have default (0) inputs since they were placeholders.
+        for frame_inputs in &replay.frames {
+            assert_eq!(frame_inputs.len(), 1); // 1 player
+            assert_eq!(frame_inputs[0], u8::default());
+        }
+        // All checksums should be None for skipped frames.
+        for checksum in &replay.checksums {
+            assert_eq!(*checksum, None);
+        }
+        // Replay should pass validation since frame alignment is maintained.
+        replay.validate().unwrap();
+    }
 }
diff --git a/src/sessions/replay_session.rs b/src/sessions/replay_session.rs
new file mode 100644
index 00000000..3e250913
--- /dev/null
+++ b/src/sessions/replay_session.rs
@@ -0,0 +1,1270 @@
+//! Replay playback session for deterministic match replay.
+//!
+//! [`ReplaySession`] plays back a recorded [`Replay`] frame by frame,
+//! returning the confirmed inputs from each frame without requiring
+//! network communication, save/load, or local input.
+//!
+//! # Example
+//!
+//! ```
+//! use fortress_rollback::replay::{Replay, ReplayMetadata};
+//! use fortress_rollback::sessions::replay_session::ReplaySession;
+//! use fortress_rollback::Session;
+//! use serde::{Deserialize, Serialize};
+//! use std::net::SocketAddr;
+//!
+//! #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+//! struct MyInput { buttons: u8 }
+//!
+//! #[derive(Debug)]
+//! struct ReplayConfig;
+//! impl fortress_rollback::Config for ReplayConfig {
+//!     type Input = MyInput;
+//!     type State = Vec<u8>;
+//!     type Address = SocketAddr;
+//! }
+//!
+//! let replay = Replay {
+//!     num_players: 2,
+//!     frames: vec![
+//!         vec![MyInput { buttons: 1 }, MyInput { buttons: 2 }],
+//!         vec![MyInput { buttons: 3 }, MyInput { buttons: 4 }],
+//!     ],
+//!     checksums: vec![None; 2],
+//!     metadata: ReplayMetadata {
+//!         library_version: env!("CARGO_PKG_VERSION").to_string(),
+//!         num_players: 2,
+//!         total_frames: 2,
+//!         skipped_frames: 0,
+//!     },
+//! };
+//!
+//! let mut session = ReplaySession::<ReplayConfig>::new(replay)?;
+//! assert!(!session.is_complete());
+//!
+//! // Advance through each frame
+//! let requests = session.advance_frame()?;
+//! assert_eq!(requests.len(), 1); // One AdvanceFrame request
+//! assert!(!session.is_complete());
+//!
+//! let requests = session.advance_frame()?;
+//! assert_eq!(requests.len(), 1);
+//! assert!(session.is_complete());
+//! # Ok::<(), fortress_rollback::FortressError>(())
+//! ```
+//!
+//! [`Replay`]: crate::replay::Replay
+//! [`ReplaySession`]: crate::sessions::replay_session::ReplaySession
+
+use std::collections::VecDeque;
+use std::fmt;
+
+use crate::replay::Replay;
+use crate::sessions::session_trait::Session;
+use crate::sync_layer::GameStateCell;
+use crate::{
+    Config, EventDrain, FortressError, FortressEvent, FortressRequest, FortressResult, Frame,
+    InputStatus, InputVec, InvalidRequestKind, PlayerHandle, RequestVec, SessionState,
+};
+
+/// A session that plays back a recorded [`Replay`] deterministically.
+///
+/// This session type reads pre-recorded inputs from a [`Replay`] and
+/// returns them as [`FortressRequest::AdvanceFrame`] requests, one frame
+/// at a time. It does not require network communication, save/load
+/// operations, or local input.
+///
+/// # Not Supported
+///
+/// Since replay sessions play back pre-recorded data:
+/// - [`add_local_input`](Session::add_local_input) returns a "not supported" error
+/// - [`local_player_handle_required`](Session::local_player_handle_required) returns a "not supported" error
+/// - [`poll_remote_clients`](Session::poll_remote_clients) is a no-op
+///
+/// # Example
+///
+/// ```
+/// use fortress_rollback::replay::{Replay, ReplayMetadata};
+/// use fortress_rollback::sessions::replay_session::ReplaySession;
+/// use fortress_rollback::{Config, Session, Frame};
+/// use serde::{Deserialize, Serialize};
+/// use std::net::SocketAddr;
+///
+/// #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+/// struct Input(u8);
+///
+/// #[derive(Debug)]
+/// struct Cfg;
+/// impl Config for Cfg {
+///     type Input = Input;
+///     type State = ();
+///     type Address = SocketAddr;
+/// }
+///
+/// let replay = Replay {
+///     num_players: 1,
+///     frames: vec![vec![Input(42)]],
+///     checksums: vec![None],
+///     metadata: ReplayMetadata {
+///         library_version: String::new(),
+///         num_players: 1,
+///         total_frames: 1,
+///         skipped_frames: 0,
+///     },
+/// };
+///
+/// let mut session = ReplaySession::<Cfg>::new(replay)?;
+/// assert_eq!(session.current_frame(), Frame::NULL);
+/// assert_eq!(session.total_frames(), 1);
+///
+/// let requests = session.advance_frame()?;
+/// assert_eq!(session.current_frame(), Frame::new(0));
+/// assert!(session.is_complete());
+/// # Ok::<(), fortress_rollback::FortressError>(())
+/// ```
+///
+/// [`Replay`]: crate::replay::Replay
+pub struct ReplaySession<T>
+where
+    T: Config,
+{
+    /// The replay data being played back.
+    replay: Replay<T::Input>,
+    /// The current frame index. Starts at NULL (-1) and increments on advance.
+    current_frame: Frame,
+    /// Event queue for desync detection and other events.
+    event_queue: VecDeque<FortressEvent<T>>,
+    /// Whether checksum validation is enabled.
+    validate_checksums: bool,
+    /// Pending validation cell from the previous frame's `SaveGameState` request.
+    /// Stored as `(frame, cell)` so we can compare the checksum after the user
+    /// has filled the cell.
+    pending_validation: Option<(Frame, GameStateCell<T::State>)>,
+}
+
+impl<T: Config> ReplaySession<T> {
+    /// Creates a new [`ReplaySession`] from a recorded [`Replay`].
+    ///
+    /// The session starts at [`Frame::NULL`] and will advance through each
+    /// recorded frame when [`advance_frame`](Session::advance_frame) is called.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// use fortress_rollback::{Config, Frame};
+    /// use serde::{Deserialize, Serialize};
+    /// use std::net::SocketAddr;
+    ///
+    /// #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// struct Input(u8);
+    ///
+    /// #[derive(Debug)]
+    /// struct Cfg;
+    /// impl Config for Cfg {
+    ///     type Input = Input;
+    ///     type State = ();
+    ///     type Address = SocketAddr;
+    /// }
+    ///
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]],
+    ///     checksums: vec![None],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    ///
+    /// let session = ReplaySession::<Cfg>::new(replay)?;
+    /// assert_eq!(session.current_frame(), Frame::NULL);
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if the replay fails internal consistency validation
+    /// (see [`Replay::validate`]).
+    pub fn new(replay: Replay<T::Input>) -> FortressResult<Self> {
+        replay.validate()?;
+        Ok(Self {
+            replay,
+            current_frame: Frame::NULL,
+            event_queue: VecDeque::new(),
+            validate_checksums: false,
+            pending_validation: None,
+        })
+    }
+
+    /// Creates a new [`ReplaySession`] with checksum validation enabled.
+    ///
+    /// When validation is enabled, the session emits [`FortressRequest::SaveGameState`]
+    /// requests before each [`FortressRequest::AdvanceFrame`], allowing the application
+    /// to compute checksums. These checksums are compared against the checksums stored
+    /// in the replay to detect non-determinism.
+    ///
+    /// If a mismatch is detected, a [`FortressEvent::ReplayDesync`] event is emitted
+    /// with the frame number and both checksums.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// use fortress_rollback::{Config, Frame, Session, FortressRequest};
+    /// use serde::{Deserialize, Serialize};
+    /// use std::net::SocketAddr;
+    ///
+    /// #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// struct Input(u8);
+    ///
+    /// #[derive(Debug)]
+    /// struct Cfg;
+    /// impl Config for Cfg {
+    ///     type Input = Input;
+    ///     type State = ();
+    ///     type Address = SocketAddr;
+    /// }
+    ///
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]],
+    ///     checksums: vec![Some(0xABCD)],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    ///
+    /// let mut session = ReplaySession::<Cfg>::new_with_validation(replay)?;
+    /// let requests = session.advance_frame()?;
+    /// // With validation, SaveGameState is emitted before AdvanceFrame
+    /// assert_eq!(requests.len(), 2);
+    /// assert!(matches!(requests[0], FortressRequest::SaveGameState { .. }));
+    /// assert!(matches!(requests[1], FortressRequest::AdvanceFrame { .. }));
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    ///
+    /// # Errors
+    ///
+    /// Returns an error if the replay fails internal consistency validation
+    /// (see [`Replay::validate`]).
+    pub fn new_with_validation(replay: Replay<T::Input>) -> FortressResult<Self> {
+        replay.validate()?;
+        Ok(Self {
+            replay,
+            current_frame: Frame::NULL,
+            event_queue: VecDeque::new(),
+            validate_checksums: true,
+            pending_validation: None,
+        })
+    }
+
+    /// Checks and resolves any pending validation from the previous frame.
+    ///
+    /// If a [`FortressRequest::SaveGameState`] was issued on the previous frame,
+    /// this compares the checksum stored in the cell by the application against
+    /// the replay's recorded checksum. On mismatch, a [`FortressEvent::ReplayDesync`]
+    /// event is enqueued.
+    fn check_pending_validation(&mut self) {
+        if let Some((prev_frame, cell)) = self.pending_validation.take() {
+            let prev_index = prev_frame.try_as_usize().ok();
+            let replay_checksum = prev_index
+                .and_then(|idx| self.replay.checksums.get(idx).copied())
+                .flatten();
+            let actual_checksum = cell.checksum();
+
+            if let (Some(expected), Some(actual)) = (replay_checksum, actual_checksum) {
+                if expected != actual {
+                    self.event_queue.push_back(FortressEvent::ReplayDesync {
+                        frame: prev_frame,
+                        expected_checksum: expected,
+                        actual_checksum: actual,
+                    });
+                }
+            }
+        }
+    }
+
+    /// Returns `true` when a pending validation can be resolved immediately.
+    ///
+    /// This method is used by [`events`](Self::events) to make final-frame
+    /// desyncs observable in the common `while !is_complete()` loop pattern
+    /// without requiring an additional failing [`advance_frame`](Self::advance_frame)
+    /// call.
+    fn pending_validation_ready_to_check(&self) -> bool {
+        let Some((prev_frame, cell)) = self.pending_validation.as_ref() else {
+            return false;
+        };
+
+        let prev_index = prev_frame.try_as_usize().ok();
+        let replay_checksum = prev_index
+            .and_then(|idx| self.replay.checksums.get(idx).copied())
+            .flatten();
+
+        replay_checksum.is_none() || cell.checksum().is_some()
+    }
+
+    /// Returns `true` if checksum validation mode is enabled.
+    ///
+    /// When validating, [`advance_frame`](Session::advance_frame) emits
+    /// [`FortressRequest::SaveGameState`] requests so the application can
+    /// provide checksums for comparison against the replay recording.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::Config;
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]],
+    ///     checksums: vec![Some(0x1234)],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let normal = ReplaySession::<Cfg>::new(replay.clone())?;
+    /// assert!(!normal.is_validating());
+    ///
+    /// let validating = ReplaySession::<Cfg>::new_with_validation(replay)?;
+    /// assert!(validating.is_validating());
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use]
+    pub fn is_validating(&self) -> bool {
+        self.validate_checksums
+    }
+
+    /// Returns the current frame of the replay session.
+    ///
+    /// Starts at [`Frame::NULL`] before the first [`advance_frame`](Session::advance_frame)
+    /// call. After the first advance, it will be `Frame::new(0)`.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::{Config, Frame, Session};
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]],
+    ///     checksums: vec![None],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let mut session = ReplaySession::<Cfg>::new(replay)?;
+    /// assert_eq!(session.current_frame(), Frame::NULL);
+    /// let _ = session.advance_frame()?;
+    /// assert_eq!(session.current_frame(), Frame::new(0));
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use]
+    pub fn current_frame(&self) -> Frame {
+        self.current_frame
+    }
+
+    /// Returns the total number of frames in the replay.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::Config;
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]; 100],
+    ///     checksums: vec![None; 100],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 100,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let session = ReplaySession::<Cfg>::new(replay)?;
+    /// assert_eq!(session.total_frames(), 100);
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use]
+    pub fn total_frames(&self) -> usize {
+        self.replay.total_frames()
+    }
+
+    /// Returns `true` if all frames in the replay have been played back.
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::{Config, Session};
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(0)]],
+    ///     checksums: vec![None],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let mut session = ReplaySession::<Cfg>::new(replay)?;
+    /// assert!(!session.is_complete());
+    /// let _ = session.advance_frame()?;
+    /// assert!(session.is_complete());
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use]
+    pub fn is_complete(&self) -> bool {
+        // current_frame starts at NULL (-1). After advancing through all frames,
+        // current_frame will be total_frames - 1 (0-indexed).
+        let total = self.replay.total_frames();
+        if total == 0 {
+            return true;
+        }
+        // current_frame is the last frame we advanced to (0-indexed).
+        // If it equals total_frames - 1, we have played all frames.
+        self.current_frame.as_i32() >= 0
+            && self
+                .current_frame
+                .try_as_usize()
+                .is_ok_and(|f| f + 1 >= total)
+    }
+
+    /// Returns a reference to the underlying [`Replay`].
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::Config;
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 2,
+    ///     frames: vec![],
+    ///     checksums: vec![],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 2,
+    ///         total_frames: 0,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let session = ReplaySession::<Cfg>::new(replay)?;
+    /// assert_eq!(session.replay().num_players, 2);
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use]
+    pub fn replay(&self) -> &Replay<T::Input> {
+        &self.replay
+    }
+
+    /// Returns the local player handle.
+    ///
+    /// Replay sessions do not have a local player, so this always returns a
+    /// "not supported" error.
+    ///
+    /// # Errors
+    ///
+    /// Always returns [`InvalidRequestKind::NotSupported`].
+    #[must_use = "returns the local player handle which should be used"]
+    pub fn local_player_handle_required(&self) -> FortressResult<PlayerHandle> {
+        Err(InvalidRequestKind::NotSupported {
+            operation: "local_player_handle_required",
+        }
+        .into())
+    }
+
+    /// Adds local input for the given player.
+    ///
+    /// Replay sessions play back pre-recorded data, so this always returns a
+    /// "not supported" error.
+    ///
+    /// # Errors
+    ///
+    /// Always returns [`InvalidRequestKind::NotSupported`].
+ #[must_use = "error should be handled"] + pub fn add_local_input( + &mut self, + _player_handle: PlayerHandle, + _input: T::Input, + ) -> FortressResult<()> { + Err(InvalidRequestKind::NotSupported { + operation: "add_local_input", + } + .into()) + } + + /// Returns all events that happened since last queried for events. + /// + /// When replay validation is enabled and playback has completed, this method + /// also flushes any final pending checksum validation that is ready. This makes + /// last-frame desyncs observable for the common `while !is_complete()` loop + /// pattern without requiring an additional failing `advance_frame()` call. + #[must_use = "events should be handled to react to session state changes"] + pub fn events(&mut self) -> EventDrain<'_, T> { + if self.is_complete() && self.pending_validation_ready_to_check() { + self.check_pending_validation(); + } + EventDrain::from_drain(self.event_queue.drain(..)) + } + + /// Returns the current session state. + /// + /// Always returns [`SessionState::Running`] since replay sessions + /// do not require synchronization. + #[must_use] + pub fn current_state(&self) -> SessionState { + SessionState::Running + } + + /// Advances the replay by one frame, returning the recorded inputs. + /// + /// Returns a single [`FortressRequest::AdvanceFrame`] containing the + /// confirmed inputs for the next frame. Returns an error if there are + /// no more frames to play back. + /// + /// # Errors + /// + /// Returns [`FortressError::InvalidFrameStructured`] if the replay + /// has been fully played back. 
+    ///
+    /// # Example
+    ///
+    /// ```
+    /// # use fortress_rollback::replay::{Replay, ReplayMetadata};
+    /// # use fortress_rollback::sessions::replay_session::ReplaySession;
+    /// # use fortress_rollback::{Config, Session, FortressRequest};
+    /// # use serde::{Deserialize, Serialize};
+    /// # use std::net::SocketAddr;
+    /// # #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)]
+    /// # struct Input(u8);
+    /// # #[derive(Debug)]
+    /// # struct Cfg;
+    /// # impl Config for Cfg {
+    /// #     type Input = Input;
+    /// #     type State = ();
+    /// #     type Address = SocketAddr;
+    /// # }
+    /// let replay = Replay {
+    ///     num_players: 1,
+    ///     frames: vec![vec![Input(42)]],
+    ///     checksums: vec![None],
+    ///     metadata: ReplayMetadata {
+    ///         library_version: String::new(),
+    ///         num_players: 1,
+    ///         total_frames: 1,
+    ///         skipped_frames: 0,
+    ///     },
+    /// };
+    /// let mut session = ReplaySession::<Cfg>::new(replay)?;
+    /// let requests = session.advance_frame()?;
+    /// assert_eq!(requests.len(), 1);
+    /// # Ok::<(), fortress_rollback::FortressError>(())
+    /// ```
+    #[must_use = "FortressRequests must be processed to advance the game state"]
+    pub fn advance_frame(&mut self) -> FortressResult<RequestVec<T>> {
+        // Always check pending validation from the previous frame first,
+        // even if the replay is exhausted. This ensures the last frame's
+        // checksum is validated when the caller either makes the final
+        // advance_frame() call (which returns an error after validation runs)
+        // or drains events() after completion.
+ self.check_pending_validation(); + + let next_frame = self.current_frame.next()?; + let frame_index = next_frame.try_as_usize()?; + + let frame_inputs = + self.replay + .frames + .get(frame_index) + .ok_or(FortressError::InvalidFrameStructured { + frame: next_frame, + reason: crate::InvalidFrameReason::ReplayExhausted { + last_frame: self.current_frame, + }, + })?; + + let mut inputs = InputVec::with_capacity(frame_inputs.len()); + for input in frame_inputs { + inputs.push((*input, InputStatus::Confirmed)); + } + + self.current_frame = next_frame; + + let mut requests = RequestVec::new(); + + if self.validate_checksums { + let cell = GameStateCell::::default(); + requests.push(FortressRequest::SaveGameState { + cell: cell.clone(), + frame: next_frame, + }); + self.pending_validation = Some((next_frame, cell)); + } + + requests.push(FortressRequest::AdvanceFrame { inputs }); + Ok(requests) + } +} + +impl Session for ReplaySession { + fn advance_frame(&mut self) -> FortressResult> { + Self::advance_frame(self) + } + + fn local_player_handle_required(&self) -> FortressResult { + Self::local_player_handle_required(self) + } + + fn add_local_input( + &mut self, + player_handle: PlayerHandle, + input: T::Input, + ) -> FortressResult<()> { + Self::add_local_input(self, player_handle, input) + } + + fn events(&mut self) -> EventDrain<'_, T> { + Self::events(self) + } + + fn current_state(&self) -> SessionState { + Self::current_state(self) + } +} + +impl fmt::Debug for ReplaySession { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + f.debug_struct("ReplaySession") + .field("current_frame", &self.current_frame) + .field("total_frames", &self.replay.total_frames()) + .field("is_complete", &self.is_complete()) + .field("num_players", &self.replay.num_players) + .field("validate_checksums", &self.validate_checksums) + .field( + "pending_validation_frame", + &self.pending_validation.as_ref().map(|(frame, _)| *frame), + ) + .field( + "pending_validation_ready", + 
&self.pending_validation_ready_to_check(), + ) + .finish_non_exhaustive() + } +} + +#[cfg(test)] +#[allow( + clippy::panic, + clippy::unwrap_used, + clippy::expect_used, + clippy::indexing_slicing +)] +mod tests { + use super::*; + use crate::replay::ReplayMetadata; + use std::net::SocketAddr; + + #[derive(Debug, Clone, Copy, PartialEq, Eq)] + struct TestConfig; + + impl Config for TestConfig { + type Input = u8; + type State = Vec; + type Address = SocketAddr; + } + + fn make_replay(num_frames: usize, num_players: usize) -> Replay { + let frames: Vec> = (0..num_frames) + .map(|f| { + (0..num_players) + .map(|p| (f * num_players + p) as u8) + .collect() + }) + .collect(); + Replay { + num_players, + frames, + checksums: vec![None; num_frames], + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players, + total_frames: num_frames, + skipped_frames: 0, + }, + } + } + + #[test] + fn new_session_starts_at_null_frame() { + let session = ReplaySession::::new(make_replay(5, 2)).unwrap(); + assert_eq!(session.current_frame(), Frame::NULL); + assert!(!session.is_complete()); + } + + #[test] + fn advance_frame_returns_correct_inputs() { + let mut session = ReplaySession::::new(make_replay(3, 2)).unwrap(); + + // Frame 0 + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 1); + match &requests[0] { + FortressRequest::AdvanceFrame { inputs } => { + assert_eq!(inputs.len(), 2); + assert_eq!(inputs[0], (0, InputStatus::Confirmed)); + assert_eq!(inputs[1], (1, InputStatus::Confirmed)); + }, + _ => panic!("Expected AdvanceFrame request"), + } + assert_eq!(session.current_frame(), Frame::new(0)); + + // Frame 1 + let requests = session.advance_frame().unwrap(); + match &requests[0] { + FortressRequest::AdvanceFrame { inputs } => { + assert_eq!(inputs[0], (2, InputStatus::Confirmed)); + assert_eq!(inputs[1], (3, InputStatus::Confirmed)); + }, + _ => panic!("Expected AdvanceFrame request"), + } + assert_eq!(session.current_frame(), 
Frame::new(1)); + } + + #[test] + fn advance_past_end_returns_replay_exhausted() { + let mut session = ReplaySession::::new(make_replay(1, 1)).unwrap(); + session.advance_frame().unwrap(); + assert!(session.is_complete()); + + let result = session.advance_frame(); + match result { + Err(FortressError::InvalidFrameStructured { frame, reason }) => { + assert_eq!(frame, Frame::new(1)); + assert!( + matches!(reason, crate::InvalidFrameReason::ReplayExhausted { last_frame } if last_frame == Frame::new(0)), + "Expected ReplayExhausted, got {reason:?}" + ); + }, + other => panic!("Expected InvalidFrameStructured with ReplayExhausted, got {other:?}"), + } + } + + #[test] + fn is_complete_empty_replay() { + let session = ReplaySession::::new(make_replay(0, 1)).unwrap(); + assert!(session.is_complete()); + } + + #[test] + fn is_complete_after_all_frames() { + let mut session = ReplaySession::::new(make_replay(3, 1)).unwrap(); + for _ in 0..3 { + assert!(!session.is_complete()); + session.advance_frame().unwrap(); + } + assert!(session.is_complete()); + } + + #[test] + fn total_frames_matches_replay() { + let session = ReplaySession::::new(make_replay(42, 2)).unwrap(); + assert_eq!(session.total_frames(), 42); + } + + #[test] + fn local_player_handle_required_not_supported() { + let session = ReplaySession::::new(make_replay(1, 1)).unwrap(); + let result = session.local_player_handle_required(); + assert!(result.is_err()); + } + + #[test] + fn add_local_input_not_supported() { + let mut session = ReplaySession::::new(make_replay(1, 1)).unwrap(); + let result = session.add_local_input(PlayerHandle::new(0), 42); + assert!(result.is_err()); + } + + #[test] + fn events_returns_empty_drain() { + let mut session = ReplaySession::::new(make_replay(1, 1)).unwrap(); + assert!(session.events().next().is_none()); + } + + #[test] + fn current_state_always_running() { + let session = ReplaySession::::new(make_replay(1, 1)).unwrap(); + assert_eq!(session.current_state(), 
SessionState::Running); + } + + #[test] + fn replay_accessor() { + let session = ReplaySession::::new(make_replay(5, 3)).unwrap(); + assert_eq!(session.replay().num_players, 3); + assert_eq!(session.replay().total_frames(), 5); + } + + #[test] + fn debug_format() { + let session = ReplaySession::::new(make_replay(10, 2)).unwrap(); + let debug_str = format!("{:?}", session); + assert!(debug_str.contains("ReplaySession")); + assert!(debug_str.contains("total_frames")); + } + + #[test] + fn session_trait_advance_frame() { + let mut session: Box> = + Box::new(ReplaySession::::new(make_replay(2, 1)).unwrap()); + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 1); + } + + #[test] + fn full_playback_single_player() { + let num_frames = 10; + let mut session = ReplaySession::::new(make_replay(num_frames, 1)).unwrap(); + + for expected_frame in 0..num_frames { + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 1); + match &requests[0] { + FortressRequest::AdvanceFrame { inputs } => { + assert_eq!(inputs.len(), 1); + assert_eq!(inputs[0].0, expected_frame as u8); + assert_eq!(inputs[0].1, InputStatus::Confirmed); + }, + _ => panic!("Expected AdvanceFrame"), + } + assert_eq!(session.current_frame(), Frame::new(expected_frame as i32)); + } + assert!(session.is_complete()); + } + + fn make_replay_with_checksums( + num_frames: usize, + num_players: usize, + checksums: Vec>, + ) -> Replay { + let frames: Vec> = (0..num_frames) + .map(|f| { + (0..num_players) + .map(|p| (f * num_players + p) as u8) + .collect() + }) + .collect(); + Replay { + num_players, + frames, + checksums, + metadata: ReplayMetadata { + library_version: "test".to_string(), + num_players, + total_frames: num_frames, + skipped_frames: 0, + }, + } + } + + #[test] + fn validation_mode_emits_save_game_state() { + let replay = make_replay_with_checksums(3, 1, vec![Some(100), Some(200), Some(300)]); + let mut session = 
ReplaySession::::new_with_validation(replay).unwrap(); + + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 2); + assert!( + matches!(&requests[0], FortressRequest::SaveGameState { frame, .. } if *frame == Frame::new(0)) + ); + assert!(matches!(&requests[1], FortressRequest::AdvanceFrame { .. })); + } + + #[test] + fn validation_mode_detects_checksum_mismatch() { + let replay = make_replay_with_checksums(2, 1, vec![Some(0xAAAA), Some(0xBBBB)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Frame 0: get SaveGameState, fill with wrong checksum + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0xDEAD)); + } else { + panic!("Expected SaveGameState"); + } + + // Frame 1: triggers validation of frame 0 + let _requests = session.advance_frame().unwrap(); + + // Check for desync event + let events: Vec<_> = session.events().collect(); + assert_eq!(events.len(), 1); + match &events[0] { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + assert_eq!(*frame, Frame::new(0)); + assert_eq!(*expected_checksum, 0xAAAA); + assert_eq!(*actual_checksum, 0xDEAD); + }, + other => panic!("Expected ReplayDesync, got {:?}", other), + } + } + + #[test] + fn validation_mode_no_event_on_matching_checksums() { + let replay = make_replay_with_checksums(2, 1, vec![Some(0x1234), Some(0x5678)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Frame 0: fill with matching checksum + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0x1234)); + } + + // Frame 1: triggers validation of frame 0 + let _requests = session.advance_frame().unwrap(); + + // No desync events should be emitted + assert!(session.events().next().is_none()); + } + + 
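The validation-mode tests above all follow the same drive loop: call `advance_frame()`, fulfill the `SaveGameState` request by writing a checksum into the cell, then advance again so the previous frame's checksum gets compared and any desync surfaces via `events()`. The core comparison being exercised can be modeled as a small standalone function. This is a simplified sketch, not the crate's API; the names `Desync` and `validate_checksums` are illustrative, and the `u64` checksum type is an assumption.

```rust
// Hypothetical model of replay checksum validation. A frame is flagged only
// when BOTH the replay's expected checksum and the app-reported checksum are
// present and differ; `None` on either side is skipped, mirroring the
// `validation_mode_skips_*` tests above.
#[derive(Debug, PartialEq)]
struct Desync {
    frame: usize,
    expected: u64,
    actual: u64,
}

fn validate_checksums(expected: &[Option<u64>], actual: &[Option<u64>]) -> Vec<Desync> {
    expected
        .iter()
        .zip(actual.iter())
        .enumerate()
        .filter_map(|(frame, (exp, act))| match (exp, act) {
            (Some(e), Some(a)) if e != a => Some(Desync {
                frame,
                expected: *e,
                actual: *a,
            }),
            _ => None,
        })
        .collect()
}

fn main() {
    // Mirrors `validation_mode_detects_checksum_mismatch`: frame 0 expected
    // 0xAAAA but the app computed 0xDEAD; frame 1 matches.
    let desyncs = validate_checksums(
        &[Some(0xAAAA), Some(0xBBBB)],
        &[Some(0xDEAD), Some(0xBBBB)],
    );
    assert_eq!(desyncs.len(), 1);
    assert_eq!(desyncs[0].frame, 0);

    // `None` on either side is never reported as a desync.
    assert!(validate_checksums(&[None, Some(1)], &[Some(2), None]).is_empty());
    println!("{desyncs:?}");
}
```

Note the asymmetry this encodes: a missing checksum is treated as "nothing to validate," not as a mismatch, which is why the skip tests assert that no events are emitted.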
#[test] + fn validation_mode_skips_frames_without_checksums() { + let replay = make_replay_with_checksums(2, 1, vec![None, None]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Frame 0: fill with a checksum (but replay has None) + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0xBEEF)); + } + + // Frame 1: triggers validation of frame 0, but replay checksum is None + let _requests = session.advance_frame().unwrap(); + + // No desync events -- replay has no checksum to compare against + assert!(session.events().next().is_none()); + } + + #[test] + fn non_validation_mode_no_save_requests() { + let replay = make_replay_with_checksums(3, 1, vec![Some(100), Some(200), Some(300)]); + let mut session = ReplaySession::::new(replay).unwrap(); + + for _ in 0..3 { + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 1); + assert!(matches!(&requests[0], FortressRequest::AdvanceFrame { .. 
})); + } + assert!(session.events().next().is_none()); + } + + #[test] + fn validation_mode_skips_when_actual_checksum_is_none() { + let replay = make_replay_with_checksums(2, 1, vec![Some(0x1234), Some(0x5678)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Frame 0: save without providing a checksum + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), None); + } + + // Frame 1: triggers validation of frame 0, but actual checksum is None + let _requests = session.advance_frame().unwrap(); + + // No desync events -- actual checksum is None + assert!(session.events().next().is_none()); + } + + #[test] + fn validation_mode_naive_completion_loop_surfaces_final_frame_desync_on_events_drain() { + let replay = make_replay_with_checksums(2, 1, vec![Some(0xAAAA), Some(0xBBBB)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Common usage pattern: stop when complete, then drain events. + while !session.is_complete() { + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + let checksum = if *frame == Frame::new(1) { + 0xDEAD + } else { + 0xAAAA + }; + cell.save(*frame, Some(vec![1u8]), Some(checksum)); + } else { + panic!("Expected SaveGameState"); + } + } + + let events: Vec<_> = session.events().collect(); + assert_eq!(events.len(), 1); + match &events[0] { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + assert_eq!(*frame, Frame::new(1)); + assert_eq!(*expected_checksum, 0xBBBB); + assert_eq!(*actual_checksum, 0xDEAD); + }, + other => panic!("Expected ReplayDesync, got {:?}", other), + } + + // No duplicate desync should be emitted on a later exhausted advance. 
+ let result = session.advance_frame(); + assert!(result.is_err()); + assert!(session.events().next().is_none()); + } + + #[test] + fn validation_mode_naive_completion_loop_flushes_last_frame_across_lengths() { + for num_frames in [1usize, 2, 4, 8] { + let mut checksums = vec![Some(0x1000); num_frames]; + checksums[num_frames - 1] = Some(0xCAFE); + + let replay = make_replay_with_checksums(num_frames, 1, checksums); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + while !session.is_complete() { + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + let checksum = if *frame == Frame::new((num_frames - 1) as i32) { + 0xBAD + } else { + 0x1000 + }; + cell.save(*frame, Some(vec![1u8]), Some(checksum)); + } else { + panic!("Expected SaveGameState"); + } + } + + let events: Vec<_> = session.events().collect(); + assert_eq!(events.len(), 1, "num_frames={num_frames}"); + match &events[0] { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + assert_eq!(*frame, Frame::new((num_frames - 1) as i32)); + assert_eq!(*expected_checksum, 0xCAFE); + assert_eq!(*actual_checksum, 0xBAD); + }, + other => panic!("Expected ReplayDesync, got {:?}", other), + } + + // Draining again should not duplicate events. + assert!(session.events().next().is_none()); + } + } + + #[test] + fn validation_mode_events_before_checksum_write_does_not_drop_pending_validation() { + let replay = make_replay_with_checksums(1, 1, vec![Some(0xCAFE)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + let requests = session.advance_frame().unwrap(); + assert!(session.is_complete()); + + // Calling events before the app writes checksum should not consume pending validation. 
+ assert!(session.events().next().is_none()); + + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0xBAD)); + } else { + panic!("Expected SaveGameState"); + } + + let events: Vec<_> = session.events().collect(); + assert_eq!(events.len(), 1); + match &events[0] { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + assert_eq!(*frame, Frame::new(0)); + assert_eq!(*expected_checksum, 0xCAFE); + assert_eq!(*actual_checksum, 0xBAD); + }, + other => panic!("Expected ReplayDesync, got {:?}", other), + } + } + + #[test] + fn single_frame_replay_validation_detects_last_frame_desync() { + // MAJOR #4: Single-frame replay with validation. + // The last (and only) frame's checksum must be validated when + // the user calls advance_frame() again (which returns an error). + let replay = make_replay_with_checksums(1, 1, vec![Some(0xCAFE)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + assert!(!session.is_complete()); + + // Frame 0: get SaveGameState + AdvanceFrame + let requests = session.advance_frame().unwrap(); + assert_eq!(requests.len(), 2); + assert!(matches!( + &requests[0], + FortressRequest::SaveGameState { .. } + )); + assert!(matches!(&requests[1], FortressRequest::AdvanceFrame { .. })); + + // Fill with a mismatched checksum + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0xBAD)); + } + assert!(session.is_complete()); + + // Next advance_frame() should return error (no more frames), + // but first it validates the pending checksum from frame 0. 
+ let result = session.advance_frame(); + assert!(result.is_err()); + + // The desync event should be available + let events: Vec<_> = session.events().collect(); + assert_eq!(events.len(), 1); + match &events[0] { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + assert_eq!(*frame, Frame::new(0)); + assert_eq!(*expected_checksum, 0xCAFE); + assert_eq!(*actual_checksum, 0xBAD); + }, + other => panic!("Expected ReplayDesync, got {:?}", other), + } + } + + #[test] + fn single_frame_replay_validation_matching_checksum() { + // Verify no desync event when last-frame checksum matches. + let replay = make_replay_with_checksums(1, 1, vec![Some(0xCAFE)]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + let requests = session.advance_frame().unwrap(); + if let FortressRequest::SaveGameState { cell, frame } = &requests[0] { + cell.save(*frame, Some(vec![1u8]), Some(0xCAFE)); + } + + // Next call returns error but no desync event + let result = session.advance_frame(); + assert!(result.is_err()); + assert!(session.events().next().is_none()); + } + + #[test] + fn empty_replay_with_validation() { + // MAJOR #5: 0-frame replay with validation. 
+ let replay = make_replay_with_checksums(0, 1, vec![]); + let session = ReplaySession::::new_with_validation(replay).unwrap(); + assert!(session.is_complete()); + assert!(session.is_validating()); + assert_eq!(session.total_frames(), 0); + assert_eq!(session.current_frame(), Frame::NULL); + } + + #[test] + fn empty_replay_with_validation_advance_returns_error() { + let replay = make_replay_with_checksums(0, 1, vec![]); + let mut session = ReplaySession::::new_with_validation(replay).unwrap(); + + // Advancing past the empty replay should error + let result = session.advance_frame(); + assert!(result.is_err()); + // No events since there were no frames to validate + assert!(session.events().next().is_none()); + } + + #[test] + fn is_validating_returns_correct_value() { + let replay = make_replay(3, 1); + let normal = ReplaySession::::new(replay).unwrap(); + assert!(!normal.is_validating()); + + let replay = make_replay(3, 1); + let validating = ReplaySession::::new_with_validation(replay).unwrap(); + assert!(validating.is_validating()); + } +} diff --git a/src/sessions/session_trait.rs b/src/sessions/session_trait.rs index 9b2fd374..04882349 100644 --- a/src/sessions/session_trait.rs +++ b/src/sessions/session_trait.rs @@ -5,24 +5,24 @@ use crate::{ /// A unified interface for all Fortress Rollback session types. /// /// The `Session` trait provides a common API surface that all session types -/// ([`P2PSession`], [`SpectatorSession`], [`SyncTestSession`]) implement. +/// ([`P2PSession`], [`SpectatorSession`], [`SyncTestSession`], [`ReplaySession`]) implement. /// This enables writing generic code that works with any session type, /// such as a game loop that doesn't care whether it's running a local -/// sync test or a networked P2P match. +/// sync test, a networked P2P match, or a replay playback. /// /// # Method Override Table /// /// Not all session types override every method. 
Methods not overridden use /// sensible defaults (e.g., returning a "not supported" error or a no-op). /// -/// | Method | [`P2PSession`] | [`SpectatorSession`] | [`SyncTestSession`] | -/// |--------|:-:|:-:|:-:| -/// | [`advance_frame`](Session::advance_frame) | ✅ Override | ✅ Override | ✅ Override | -/// | [`local_player_handle_required`](Session::local_player_handle_required) | ✅ Override | ✅ Override (error) | ✅ Override | -/// | [`add_local_input`](Session::add_local_input) | ✅ Override | ✅ Override (error) | ✅ Override | -/// | [`events`](Session::events) | ✅ Override | ✅ Override | ✅ Override | -/// | [`current_state`](Session::current_state) | ✅ Override | ✅ Override | ❌ Default (`Running`) | -/// | [`poll_remote_clients`](Session::poll_remote_clients) | ✅ Override | ✅ Override | ❌ Default (no-op) | +/// | Method | [`P2PSession`] | [`SpectatorSession`] | [`SyncTestSession`] | [`ReplaySession`] | +/// |--------|:-:|:-:|:-:|:-:| +/// | [`advance_frame`](Session::advance_frame) | Override | Override | Override | Override (emits `SaveGameState` in validation mode) | +/// | [`local_player_handle_required`](Session::local_player_handle_required) | Override | Override (error) | Override | Override (error) | +/// | [`add_local_input`](Session::add_local_input) | Override | Override (error) | Override | Override (error) | +/// | [`events`](Session::events) | Override | Override | Override | Override | +/// | [`current_state`](Session::current_state) | Override | Override | Default (`Running`) | Override (`Running`) | +/// | [`poll_remote_clients`](Session::poll_remote_clients) | Override | Override | Default (no-op) | Default (no-op) | /// /// # Example /// @@ -43,6 +43,7 @@ use crate::{ /// [`P2PSession`]: crate::P2PSession /// [`SpectatorSession`]: crate::SpectatorSession /// [`SyncTestSession`]: crate::SyncTestSession +/// [`ReplaySession`]: crate::ReplaySession pub trait Session { /// Advances the session by one frame, returning any requests the /// application 
must fulfill (save state, load state, advance game). diff --git a/src/sync_layer/mod.rs b/src/sync_layer/mod.rs index 632077ae..3c90e606 100644 --- a/src/sync_layer/mod.rs +++ b/src/sync_layer/mod.rs @@ -544,6 +544,26 @@ impl SyncLayer { first_incorrect } + /// Returns the player handles that have incorrect predictions at or before `disconnect_frame`. + /// Used by telemetry to identify which players' inputs were mispredicted. + pub(crate) fn players_with_incorrect_predictions( + &self, + disconnect_frame: Frame, + ) -> Vec<(PlayerHandle, Frame)> { + let mut result = Vec::new(); + for handle in 0..self.num_players { + if let Some(queue) = self.input_queues.get(handle) { + let incorrect = queue.first_incorrect_frame(); + if !incorrect.is_null() + && (disconnect_frame.is_null() || incorrect <= disconnect_frame) + { + result.push((PlayerHandle::new(handle), incorrect)); + } + } + } + result + } + /// Returns a gamestate through given frame pub(crate) fn saved_state_by_frame(&self, frame: Frame) -> Option> { let cell = self.saved_states.get_cell(frame).ok()?; @@ -701,7 +721,7 @@ mod sync_layer_tests { use std::net::SocketAddr; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct TestInput { inp: u8, } @@ -1837,6 +1857,78 @@ mod sync_layer_tests { assert!(sync_layer.current_frame().as_i32() >= 0); } } + + #[test] + fn test_players_with_incorrect_predictions_includes_boundary_frame() { + // Regression test: misprediction exactly at disconnect_frame must be included. + // Previously the filter used `<` instead of `<=`, excluding the boundary. 
+ let mut sync_layer = SyncLayer::::new(2, 8); + + let mut connect_status = vec![ConnectionStatus::default(), ConnectionStatus::default()]; + + // Add confirmed inputs for both players at frame 0 + let input0_p0 = PlayerInput::new(Frame::new(0), TestInput { inp: 5 }); + let input0_p1 = PlayerInput::new(Frame::new(0), TestInput { inp: 5 }); + sync_layer.add_remote_input(PlayerHandle::new(0), input0_p0); + sync_layer.add_remote_input(PlayerHandle::new(1), input0_p1); + connect_status[0].last_frame = Frame::new(0); + connect_status[1].last_frame = Frame::new(0); + + // Advance to frame 1 + sync_layer.advance_frame(); + + // Request synchronized inputs at frame 1. + // Player 0 has no input for frame 1 yet, so it predicts (RepeatLastConfirmed = 5). + // Player 1 also predicts. + let _inputs = sync_layer + .synchronized_inputs(&connect_status) + .expect("synchronized inputs should succeed"); + + // Now add the ACTUAL input for player 1 at frame 1 with a DIFFERENT value, + // triggering a misprediction at frame 1. + let actual_p1 = PlayerInput::new(Frame::new(1), TestInput { inp: 99 }); + sync_layer.add_remote_input(PlayerHandle::new(1), actual_p1); + + // Also add matching input for player 0 (no misprediction) + let actual_p0 = PlayerInput::new(Frame::new(1), TestInput { inp: 5 }); + sync_layer.add_remote_input(PlayerHandle::new(0), actual_p0); + + // Boundary case: disconnect_frame == misprediction frame (frame 1). + // The doc says "at or before", so this must be included. 
+ let result = sync_layer.players_with_incorrect_predictions(Frame::new(1)); + assert_eq!( + result.len(), + 1, + "misprediction at disconnect_frame must be included" + ); + assert_eq!(result[0].0, PlayerHandle::new(1)); + assert_eq!(result[0].1, Frame::new(1)); + + // disconnect_frame strictly after the misprediction includes it + let result_after = sync_layer.players_with_incorrect_predictions(Frame::new(2)); + assert_eq!( + result_after.len(), + 1, + "misprediction before disconnect_frame must be included" + ); + assert_eq!(result_after[0].0, PlayerHandle::new(1)); + assert_eq!(result_after[0].1, Frame::new(1)); + + // Sanity: frame BEFORE the misprediction excludes it + let result_before = sync_layer.players_with_incorrect_predictions(Frame::new(0)); + assert!( + result_before.is_empty(), + "misprediction after disconnect_frame must be excluded" + ); + + // Sanity: null disconnect_frame includes everything + let result_null = sync_layer.players_with_incorrect_predictions(Frame::NULL); + assert_eq!( + result_null.len(), + 1, + "null disconnect_frame should include all mispredictions" + ); + } } // ################### @@ -1880,7 +1972,7 @@ mod kani_sync_layer_proofs { use std::net::SocketAddr; #[repr(C)] - #[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] + #[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct TestInput { inp: u8, } diff --git a/src/telemetry.rs b/src/telemetry.rs index 099ddde1..eaefd49d 100644 --- a/src/telemetry.rs +++ b/src/telemetry.rs @@ -21,6 +21,7 @@ //! assert!(observer.violations().is_empty(), "unexpected violations"); //! ``` +use crate::network::network_stats::NetworkStats; use crate::sync::Mutex; use crate::{Frame, PlayerHandle}; use std::collections::BTreeMap; @@ -1411,6 +1412,413 @@ macro_rules! try_check_invariants { }}; } +// ========================================== +// Session Telemetry +// ========================================== + +/// Observer for session performance telemetry. 
+/// +/// Follows the same pattern as [`ViolationObserver`] — implement this trait +/// and pass it via [`SessionBuilder::with_telemetry()`] to receive structured +/// performance data during a P2P session. +/// +/// All methods have default no-op implementations. Override only the events +/// you care about. +/// +/// # Example +/// +/// ``` +/// use fortress_rollback::telemetry::SessionTelemetry; +/// use fortress_rollback::{Frame, PlayerHandle}; +/// use fortress_rollback::NetworkStats; +/// +/// struct MyTelemetry; +/// +/// impl SessionTelemetry for MyTelemetry { +/// fn on_rollback(&self, depth: usize, frame: Frame) { +/// println!("Rollback of {depth} frames at {frame}"); +/// } +/// } +/// ``` +/// +/// [`SessionBuilder::with_telemetry()`]: crate::sessions::builder::SessionBuilder::with_telemetry +#[cfg(feature = "sync-send")] +pub trait SessionTelemetry: Send + Sync { + /// Called when a rollback occurs. + /// + /// `depth` is the number of frames rolled back, `frame` is the frame + /// that was loaded (the target of the rollback). + fn on_rollback(&self, depth: usize, frame: Frame) { + let _ = (depth, frame); + } + + /// Called when a predicted input turns out to be wrong for a player. + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { + let _ = (player, frame); + } + + /// Called periodically with network statistics for a peer. + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { + let _ = (player, stats); + } + + /// Called each time the session advances a frame. + fn on_frame_advance(&self, frame: Frame) { + let _ = frame; + } +} + +/// Observer for session performance telemetry. +/// +/// Follows the same pattern as [`ViolationObserver`] — implement this trait +/// and pass it via [`SessionBuilder::with_telemetry()`] to receive structured +/// performance data during a P2P session. +/// +/// All methods have default no-op implementations. Override only the events +/// you care about. 
+/// +/// [`SessionBuilder::with_telemetry()`]: crate::sessions::builder::SessionBuilder::with_telemetry +#[cfg(not(feature = "sync-send"))] +pub trait SessionTelemetry { + /// Called when a rollback occurs. + /// + /// `depth` is the number of frames rolled back, `frame` is the frame + /// that was loaded (the target of the rollback). + fn on_rollback(&self, depth: usize, frame: Frame) { + let _ = (depth, frame); + } + + /// Called when a predicted input turns out to be wrong for a player. + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { + let _ = (player, frame); + } + + /// Called periodically with network statistics for a peer. + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { + let _ = (player, stats); + } + + /// Called each time the session advances a frame. + fn on_frame_advance(&self, frame: Frame) { + let _ = frame; + } +} + +/// Structured telemetry event for collecting and inspecting. +/// +/// Each variant corresponds to a method on [`SessionTelemetry`], capturing +/// all the arguments for later inspection. +/// +/// # Example +/// +/// ``` +/// use fortress_rollback::telemetry::{TelemetryEvent, CollectingTelemetry, SessionTelemetry}; +/// use fortress_rollback::Frame; +/// +/// let telemetry = CollectingTelemetry::new(); +/// telemetry.on_frame_advance(Frame::new(10)); +/// +/// let events = telemetry.events(); +/// assert_eq!(events.len(), 1); +/// ``` +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum TelemetryEvent { + /// A rollback occurred. + Rollback { + /// Number of frames rolled back. + depth: usize, + /// The frame that was loaded. + frame: Frame, + }, + /// A predicted input was incorrect. + PredictionMiss { + /// The player whose prediction was wrong. + player: PlayerHandle, + /// The frame at which the misprediction occurred. + frame: Frame, + }, + /// Network statistics received for a peer. + NetworkStatsUpdate { + /// The player the stats are for. 
+ player: PlayerHandle, + /// The network statistics snapshot. + stats: NetworkStats, + }, + /// A frame was advanced. + FrameAdvance { + /// The frame that was just advanced to. + frame: Frame, + }, +} + +impl std::fmt::Display for TelemetryEvent { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + Self::Rollback { depth, frame } => { + write!(f, "Rollback({depth} frames to {frame})") + }, + Self::PredictionMiss { player, frame } => { + write!(f, "PredictionMiss(player={player}, frame={frame})") + }, + Self::NetworkStatsUpdate { player, stats } => { + write!(f, "NetworkStatsUpdate(player={player}, {stats})") + }, + Self::FrameAdvance { frame } => write!(f, "FrameAdvance({frame})"), + } + } +} + +/// Built-in telemetry observer that collects events for testing. +/// +/// This observer stores all telemetry events in a thread-safe vector, +/// allowing tests to assert on events that occurred during a session. +/// +/// # Example +/// +/// ``` +/// use fortress_rollback::telemetry::{CollectingTelemetry, SessionTelemetry}; +/// use fortress_rollback::Frame; +/// +/// let telemetry = CollectingTelemetry::new(); +/// +/// // Simulate telemetry events +/// telemetry.on_rollback(3, Frame::new(10)); +/// telemetry.on_frame_advance(Frame::new(13)); +/// +/// assert_eq!(telemetry.len(), 2); +/// assert_eq!(telemetry.rollbacks().len(), 1); +/// ``` +#[derive(Debug, Default)] +pub struct CollectingTelemetry { + events: Mutex<Vec<TelemetryEvent>>, +} + +impl CollectingTelemetry { + /// Creates a new collecting telemetry observer with an empty event list. + #[must_use] + pub fn new() -> Self { + Self { + events: Mutex::new(Vec::new()), + } + } + + /// Returns a copy of all collected events. + #[cfg(not(loom))] + #[must_use] + pub fn events(&self) -> Vec<TelemetryEvent> { + self.events.lock().clone() + } + + /// Returns a copy of all collected events (loom version).
+ #[cfg(loom)] + #[must_use] + pub fn events(&self) -> Vec<TelemetryEvent> { + self.events.lock().unwrap().clone() + } + + /// Returns the number of collected events. + #[cfg(not(loom))] + #[must_use] + pub fn len(&self) -> usize { + self.events.lock().len() + } + + /// Returns the number of collected events (loom version). + #[cfg(loom)] + #[must_use] + pub fn len(&self) -> usize { + self.events.lock().unwrap().len() + } + + /// Returns true if no events have been collected. + #[cfg(not(loom))] + #[must_use] + pub fn is_empty(&self) -> bool { + self.events.lock().is_empty() + } + + /// Returns true if no events have been collected (loom version). + #[cfg(loom)] + #[must_use] + pub fn is_empty(&self) -> bool { + self.events.lock().unwrap().is_empty() + } + + /// Returns all rollback events. + #[cfg(not(loom))] + #[must_use] + pub fn rollbacks(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .iter() + .filter(|e| matches!(e, TelemetryEvent::Rollback { .. })) + .copied() + .collect() + } + + /// Returns all rollback events (loom version). + #[cfg(loom)] + #[must_use] + pub fn rollbacks(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .unwrap() + .iter() + .filter(|e| matches!(e, TelemetryEvent::Rollback { .. })) + .copied() + .collect() + } + + /// Returns all prediction miss events. + #[cfg(not(loom))] + #[must_use] + pub fn prediction_misses(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .iter() + .filter(|e| matches!(e, TelemetryEvent::PredictionMiss { .. })) + .copied() + .collect() + } + + /// Returns all prediction miss events (loom version). + #[cfg(loom)] + #[must_use] + pub fn prediction_misses(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .unwrap() + .iter() + .filter(|e| matches!(e, TelemetryEvent::PredictionMiss { .. })) + .copied() + .collect() + } + + /// Returns all network stats update events. + #[cfg(not(loom))] + #[must_use] + pub fn network_stats_updates(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .iter() + .filter(|e| matches!(e, TelemetryEvent::NetworkStatsUpdate { ..
})) + .copied() + .collect() + } + + /// Returns all network stats update events (loom version). + #[cfg(loom)] + #[must_use] + pub fn network_stats_updates(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .unwrap() + .iter() + .filter(|e| matches!(e, TelemetryEvent::NetworkStatsUpdate { .. })) + .copied() + .collect() + } + + /// Returns all frame advance events. + #[cfg(not(loom))] + #[must_use] + pub fn frame_advances(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .iter() + .filter(|e| matches!(e, TelemetryEvent::FrameAdvance { .. })) + .copied() + .collect() + } + + /// Returns all frame advance events (loom version). + #[cfg(loom)] + #[must_use] + pub fn frame_advances(&self) -> Vec<TelemetryEvent> { + self.events + .lock() + .unwrap() + .iter() + .filter(|e| matches!(e, TelemetryEvent::FrameAdvance { .. })) + .copied() + .collect() + } + + /// Clears all collected events. + #[cfg(not(loom))] + pub fn clear(&self) { + self.events.lock().clear(); + } + + /// Clears all collected events (loom version). + #[cfg(loom)] + pub fn clear(&self) { + self.events.lock().unwrap().clear(); + } +} + +#[cfg(not(loom))] +impl SessionTelemetry for CollectingTelemetry { + fn on_rollback(&self, depth: usize, frame: Frame) { + self.events + .lock() + .push(TelemetryEvent::Rollback { depth, frame }); + } + + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { + self.events + .lock() + .push(TelemetryEvent::PredictionMiss { player, frame }); + } + + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { + self.events.lock().push(TelemetryEvent::NetworkStatsUpdate { + player, + stats: *stats, + }); + } + + fn on_frame_advance(&self, frame: Frame) { + self.events + .lock() + .push(TelemetryEvent::FrameAdvance { frame }); + } +} + +#[cfg(loom)] +impl SessionTelemetry for CollectingTelemetry { + fn on_rollback(&self, depth: usize, frame: Frame) { + self.events + .lock() + .unwrap() + .push(TelemetryEvent::Rollback { depth, frame }); + } + + fn on_prediction_miss(&self, player:
PlayerHandle, frame: Frame) { + self.events + .lock() + .unwrap() + .push(TelemetryEvent::PredictionMiss { player, frame }); + } + + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { + self.events + .lock() + .unwrap() + .push(TelemetryEvent::NetworkStatsUpdate { + player, + stats: *stats, + }); + } + + fn on_frame_advance(&self, frame: Frame) { + self.events + .lock() + .unwrap() + .push(TelemetryEvent::FrameAdvance { frame }); + } +} + #[cfg(test)] #[allow( clippy::panic, @@ -2424,4 +2832,164 @@ mod tests { assert_eq!(TracingObserver::format_frame(Some(Frame::new(100))), "100"); assert_eq!(TracingObserver::format_frame(Some(Frame::new(-5))), "-5"); } + + // ========================================== + // SessionTelemetry Tests + // ========================================== + + #[test] + fn collecting_telemetry_records_events() { + let telemetry = CollectingTelemetry::new(); + + telemetry.on_rollback(3, Frame::new(10)); + telemetry.on_prediction_miss(PlayerHandle::new(1), Frame::new(5)); + telemetry.on_frame_advance(Frame::new(13)); + telemetry.on_network_stats(PlayerHandle::new(0), &NetworkStats::default()); + + let events = telemetry.events(); + assert_eq!(events.len(), 4); + assert!(matches!( + events[0], + TelemetryEvent::Rollback { + depth: 3, + frame + } if frame == Frame::new(10) + )); + assert!(matches!( + events[1], + TelemetryEvent::PredictionMiss { player, frame } + if player == PlayerHandle::new(1) && frame == Frame::new(5) + )); + assert!( + matches!(events[2], TelemetryEvent::FrameAdvance { frame } if frame == Frame::new(13)) + ); + assert!(matches!( + events[3], + TelemetryEvent::NetworkStatsUpdate { player, .. 
} + if player == PlayerHandle::new(0) + )); + } + + #[test] + fn collecting_telemetry_rollbacks_filter() { + let telemetry = CollectingTelemetry::new(); + + telemetry.on_rollback(2, Frame::new(5)); + telemetry.on_frame_advance(Frame::new(7)); + telemetry.on_rollback(1, Frame::new(8)); + telemetry.on_prediction_miss(PlayerHandle::new(0), Frame::new(5)); + + let rollbacks = telemetry.rollbacks(); + assert_eq!(rollbacks.len(), 2); + assert!(matches!( + rollbacks[0], + TelemetryEvent::Rollback { depth: 2, .. } + )); + assert!(matches!( + rollbacks[1], + TelemetryEvent::Rollback { depth: 1, .. } + )); + } + + #[test] + fn collecting_telemetry_prediction_misses_filter() { + let telemetry = CollectingTelemetry::new(); + + telemetry.on_rollback(2, Frame::new(5)); + telemetry.on_prediction_miss(PlayerHandle::new(0), Frame::new(3)); + telemetry.on_frame_advance(Frame::new(7)); + telemetry.on_prediction_miss(PlayerHandle::new(1), Frame::new(4)); + + let misses = telemetry.prediction_misses(); + assert_eq!(misses.len(), 2); + assert!(matches!(misses[0], TelemetryEvent::PredictionMiss { .. })); + assert!(matches!(misses[1], TelemetryEvent::PredictionMiss { .. 
})); + } + + #[test] + fn collecting_telemetry_clear_removes_all() { + let telemetry = CollectingTelemetry::new(); + + telemetry.on_rollback(1, Frame::new(1)); + telemetry.on_frame_advance(Frame::new(2)); + assert_eq!(telemetry.len(), 2); + assert!(!telemetry.is_empty()); + + telemetry.clear(); + assert_eq!(telemetry.len(), 0); + assert!(telemetry.is_empty()); + assert!(telemetry.events().is_empty()); + } + + #[test] + fn session_telemetry_default_methods_are_noop() { + // A blank implementation should compile and not panic + struct NoOpTelemetry; + impl SessionTelemetry for NoOpTelemetry {} + + let t = NoOpTelemetry; + t.on_rollback(5, Frame::new(10)); + t.on_prediction_miss(PlayerHandle::new(0), Frame::new(3)); + t.on_network_stats(PlayerHandle::new(1), &NetworkStats::default()); + t.on_frame_advance(Frame::new(42)); + // If we get here without panicking, the test passes + } + + #[test] + fn collecting_telemetry_new_is_empty() { + let telemetry = CollectingTelemetry::new(); + assert!(telemetry.is_empty()); + assert_eq!(telemetry.len(), 0); + assert!(telemetry.events().is_empty()); + assert!(telemetry.rollbacks().is_empty()); + assert!(telemetry.prediction_misses().is_empty()); + } + + #[test] + fn telemetry_event_display_all_variants() { + let rollback = TelemetryEvent::Rollback { + depth: 5, + frame: Frame::new(100), + }; + assert!(format!("{rollback}").contains('5')); + assert!(format!("{rollback}").contains("100")); + + let miss = TelemetryEvent::PredictionMiss { + player: PlayerHandle::new(1), + frame: Frame::new(42), + }; + let miss_str = format!("{miss}"); + assert!(miss_str.contains("42")); + + let advance = TelemetryEvent::FrameAdvance { + frame: Frame::new(200), + }; + assert!(format!("{advance}").contains("200")); + + let stats = TelemetryEvent::NetworkStatsUpdate { + player: PlayerHandle::new(0), + stats: NetworkStats::default(), + }; + let stats_str = format!("{stats}"); + assert!(stats_str.contains("NetworkStatsUpdate")); + } + + #[test] + fn 
collecting_telemetry_network_stats_and_frame_advance_filters() { + let telemetry = CollectingTelemetry::new(); + + // Add mixed events + telemetry.on_rollback(3, Frame::new(10)); + telemetry.on_frame_advance(Frame::new(11)); + telemetry.on_network_stats(PlayerHandle::new(0), &NetworkStats::default()); + telemetry.on_frame_advance(Frame::new(12)); + telemetry.on_prediction_miss(PlayerHandle::new(1), Frame::new(5)); + telemetry.on_network_stats(PlayerHandle::new(1), &NetworkStats::default()); + + assert_eq!(telemetry.len(), 6); + assert_eq!(telemetry.network_stats_updates().len(), 2); + assert_eq!(telemetry.frame_advances().len(), 2); + assert_eq!(telemetry.rollbacks().len(), 1); + assert_eq!(telemetry.prediction_misses().len(), 1); + } } diff --git a/tests/common/stubs.rs b/tests/common/stubs.rs index d5c2f39e..4ebdd348 100644 --- a/tests/common/stubs.rs +++ b/tests/common/stubs.rs @@ -28,7 +28,7 @@ pub struct GameStub { } #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] pub struct StubInput { pub inp: u32, } diff --git a/tests/common/stubs_enum.rs b/tests/common/stubs_enum.rs index dbb208c0..e193fae0 100644 --- a/tests/common/stubs_enum.rs +++ b/tests/common/stubs_enum.rs @@ -28,7 +28,7 @@ pub struct GameStubEnum { #[allow(dead_code)] #[repr(u8)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] pub enum EnumInput { #[default] Val1, diff --git a/tests/network-peer/src/main.rs b/tests/network-peer/src/main.rs index 927641e7..7843e0e4 100644 --- a/tests/network-peer/src/main.rs +++ b/tests/network-peer/src/main.rs @@ -266,7 +266,7 @@ fn compute_confirmed_game_value_with_diagnostics< } #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct TestInput { value: u32, } diff --git 
a/tests/sessions/macro_tests.rs b/tests/sessions/macro_tests.rs index 25890969..72302a01 100644 --- a/tests/sessions/macro_tests.rs +++ b/tests/sessions/macro_tests.rs @@ -20,7 +20,7 @@ use std::net::SocketAddr; // Test config for the macro tests struct MacroTestConfig; -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct MacroTestInput(u8); #[derive(Clone, Default, Debug, PartialEq)] diff --git a/tests/verification/invariants.rs b/tests/verification/invariants.rs index 04080b34..eb4189ff 100644 --- a/tests/verification/invariants.rs +++ b/tests/verification/invariants.rs @@ -47,7 +47,7 @@ use std::net::SocketAddr; // ============================================================================ #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { value: u8, } diff --git a/tests/verification/property.rs b/tests/verification/property.rs index c98d72ed..8b69ff24 100644 --- a/tests/verification/property.rs +++ b/tests/verification/property.rs @@ -43,7 +43,7 @@ use std::net::SocketAddr; // ============================================================================ #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize, Debug)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize, Debug)] struct TestInput { value: u8, } diff --git a/wiki/Architecture.md b/wiki/Architecture.md index 773cee24..a6c20a8c 100644 --- a/wiki/Architecture.md +++ b/wiki/Architecture.md @@ -1099,7 +1099,7 @@ Compile-time parameterization bundles all type requirements: ```rust // Default (without `sync-send` feature): pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; type State; type Address: 
Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Debug; } diff --git a/wiki/Determinism-Model.md b/wiki/Determinism-Model.md index 162ca00e..99be6dbc 100644 --- a/wiki/Determinism-Model.md +++ b/wiki/Determinism-Model.md @@ -140,7 +140,7 @@ fn spawn_enemy(state: &mut GameState, rng: &mut SeededRng) { ```rust // Must implement these traits pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; // ... } diff --git a/wiki/Fortress-vs-GGRS.md b/wiki/Fortress-vs-GGRS.md index efe4e283..454e92ad 100644 --- a/wiki/Fortress-vs-GGRS.md +++ b/wiki/Fortress-vs-GGRS.md @@ -329,6 +329,7 @@ pub enum InvalidFrameReason { NotConfirmed { confirmed_frame: Frame }, NullOrNegative, MissingState, + ReplayExhausted { last_frame: Frame }, Custom(&'static str), } ``` diff --git a/wiki/Home.md b/wiki/Home.md index 668b99c2..c5482861 100644 --- a/wiki/Home.md +++ b/wiki/Home.md @@ -48,7 +48,7 @@ use serde::{Deserialize, Serialize}; use std::net::SocketAddr; // Define your input and state types -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct MyInput { buttons: u8 } #[derive(Clone, Serialize, Deserialize)] diff --git a/wiki/Migration.md b/wiki/Migration.md index 0e603db2..dae420c4 100644 --- a/wiki/Migration.md +++ b/wiki/Migration.md @@ -151,6 +151,35 @@ struct MyAddress { } ``` +## Input Trait Bounds (Breaking Change) + +`Config::Input` now requires `Eq` in addition to `PartialEq`. This ensures reflexive +equality for deterministic rollback; non-reflexive types (e.g., `f32`, `f64`) would cause +phantom prediction misses because `NaN != NaN` can make the engine treat identical inputs +as different, triggering unnecessary rollbacks. 
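The non-reflexivity point is easy to verify in isolation, and it suggests one migration path for analog inputs. The sketch below is plain Rust with no library types; `StickAxis` is a hypothetical fixed-point newtype shown as an illustrative workaround, not a Fortress Rollback API:

```rust
// Illustrative workaround: store analog axes as fixed-point integers,
// which satisfy `Eq` and stay bit-exact across the network.
#[derive(Copy, Clone, PartialEq, Eq, Default, Debug)]
struct StickAxis(i16); // -32768..=32767 maps onto -1.0..=1.0

impl StickAxis {
    fn from_f32(v: f32) -> Self {
        StickAxis((v.clamp(-1.0, 1.0) * 32767.0) as i16)
    }
}

fn main() {
    // NaN breaks reflexive equality: a byte-identical input compares
    // unequal to itself, which the engine would read as a changed input.
    let nan = f32::NAN;
    assert_ne!(nan, nan);

    // The fixed-point newtype is reflexive and deterministic.
    assert_eq!(StickAxis::from_f32(0.5), StickAxis::from_f32(0.5));
}
```

Quantizing at the input boundary also keeps the serialized input compact, which matters for per-frame network transmission.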
+ +Most custom input types only need an extra derive: + +```rust +// Before +#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +struct MyInput { + buttons: u8, + stick_x: i8, +} + +// After +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] +struct MyInput { + buttons: u8, + stick_x: i8, +} +``` + +> **Note:** All primitive integer types (`u8`, `i8`, `u16`, `i16`, `u32`, `i32`, `u64`, +> `i64`, `u128`, `i128`, `usize`, `isize`) and `bool` already implement `Eq`, so input +> structs composed entirely of these types only need the added derive. + ## Features The `sync-send` feature flag remains compatible. Fortress Rollback adds several new features: diff --git a/wiki/Overview.md b/wiki/Overview.md index 03dd873f..83c943d6 100644 --- a/wiki/Overview.md +++ b/wiki/Overview.md @@ -6,9 +6,9 @@ This directory contains formal specification documents for the Fortress Rollback ## Documents -| Document | Description | -| ----------------------------------------- | ------------------------------------------------------ | -| [formal-spec.md](Formal-Specification) | Core formal specifications using TLA+ notation | +| Document | Description | +| -------------------------------------------- | ------------------------------------------------------ | +| [formal-spec.md](Formal-Specification) | Core formal specifications using TLA+ notation | | [api-contracts.md](API-Contracts) | API preconditions, postconditions, and invariants | | [determinism-model.md](Determinism-Model) | Determinism requirements and verification | | [spec-divergences.md](Spec-Divergences) | Documented differences between spec and implementation | diff --git a/wiki/Replay.md b/wiki/Replay.md new file mode 100644 index 00000000..0c18e25e --- /dev/null +++ b/wiki/Replay.md @@ -0,0 +1,306 @@ + + +

+ Fortress Rollback

+ +# Match Replay System + +Record P2P matches and play them back deterministically. Use replays for match review, determinism verification, streaming, and testing. + +## Table of Contents + +1. [How It Works](#how-it-works) +2. [Quick Start -- Recording](#quick-start----recording) +3. [Quick Start -- Playback](#quick-start----playback) +4. [Validation Mode](#validation-mode) + - [How Validation Works](#how-validation-works) + - [Validation Playback Loop](#validation-playback-loop) +5. [API Reference](#api-reference) +6. [Use Cases](#use-cases) +7. [Common Patterns](#common-patterns) + - [into_replay vs take_replay](#into_replay-vs-take_replay) + - [Replay Browser with Metadata](#replay-browser-with-metadata) + +--- + +## How It Works + +```mermaid +stateDiagram-v2 + direction LR + + state "P2P Session" as p2p { + [*] --> Recording : with_recording(true) + Recording --> Recording : confirmed inputs + checksums per frame + Recording --> ReplayReady : session ends + } + + state "Serialization" as ser { + ReplayReady --> Bytes : to_bytes() + Bytes --> File : save + } + + state "Playback" as play { + File --> LoadedBytes : load + LoadedBytes --> Replay : from_bytes() + Replay --> ReplaySession : new(replay) + ReplaySession --> FrameByFrame : advance_frame() + FrameByFrame --> Complete : is_complete() + } +``` + +--- + +## Quick Start -- Recording + +```rust +use fortress_rollback::{SessionBuilder, Config, Session, PlayerType, PlayerHandle}; +use std::net::SocketAddr; + +// 1. Enable recording on the session builder +// MyConfig: your Config impl (see user-guide.md) +let mut session = SessionBuilder::<MyConfig>::new() + .with_num_players(2)? + .add_player(PlayerType::Local, PlayerHandle::new(0))? + .add_player(PlayerType::Remote("10.0.0.2:7000".parse()?), PlayerHandle::new(1))? + .with_recording(true) // <-- enable replay recording + .start_p2p_session(socket)?; + +// 2. Run your game loop (see user-guide.md for the full loop) +// ... + +// 3. Extract the replay when the session ends +let replay = session.into_replay()?; + +// 4.
Serialize and save to disk +let bytes = replay.to_bytes()?; +std::fs::write("match.replay", &bytes)?; +``` + +> **into_replay vs take_replay** +> +> Use `into_replay()` when the session is finished -- it consumes the session. +> Use `take_replay()` to extract the replay mid-session without consuming it (e.g., for auto-save). +> +--- + +## Quick Start -- Playback + +```rust +use fortress_rollback::replay::Replay; +use fortress_rollback::sessions::replay_session::ReplaySession; +use fortress_rollback::{Config, Session, FortressRequest}; + +// 1. Load and deserialize +// MyConfig: your Config impl (see user-guide.md) +let bytes = std::fs::read("match.replay")?; +let replay = Replay::<MyConfig>::from_bytes(&bytes)?; + +// 2. Create a ReplaySession +let mut session = ReplaySession::<MyConfig>::new(replay)?; + +// 3. Play back frame by frame +while !session.is_complete() { + let requests = session.advance_frame()?; + + for request in requests { + match request { + FortressRequest::AdvanceFrame { inputs } => { + // 4. Apply each player's input to your game state + for (input, _status) in &inputs { + game_state.apply_input(*input); + } + game_state.advance(); + } + _ => {} // No other requests in standard playback + } + } +} +``` + +--- + +## Validation Mode + +`ReplaySession::new_with_validation(replay)` enables **checksum comparison** during playback. This detects non-determinism by comparing freshly computed checksums against the ones recorded during the original match. + +### How Validation Works + +```mermaid +sequenceDiagram + participant App as Application + participant RS as ReplaySession + + loop Each Frame + App->>RS: advance_frame() + RS-->>App: SaveGameState { cell, frame } + App->>App: Compute game state checksum + App->>App: Store checksum in cell + RS-->>App: AdvanceFrame { inputs } + App->>App: Apply inputs, advance state + Note over RS: Next frame: compare<br/>stored checksum vs<br/>recorded checksum + end + + alt Checksum mismatch + RS-->>App: FortressEvent::ReplayDesync + end +``` + +> **Desync Detection** +> +> When a mismatch is found, the session emits `FortressEvent::ReplayDesync { frame, expected_checksum, actual_checksum }`. This pinpoints the exact frame where non-determinism occurred. +> +### Validation Playback Loop + +```rust +use fortress_rollback::replay::Replay; +use fortress_rollback::sessions::replay_session::ReplaySession; +use fortress_rollback::{Config, Session, FortressRequest, FortressEvent}; + +// MyConfig: your Config impl (see user-guide.md) +let replay = Replay::<MyConfig>::from_bytes(&bytes)?; +let mut session = ReplaySession::<MyConfig>::new_with_validation(replay)?; + +while !session.is_complete() { + let requests = session.advance_frame()?; + + for request in requests { + match request { + FortressRequest::SaveGameState { cell, frame } => { + // Compute and store your game state checksum + let checksum = game_state.compute_checksum(); + cell.save(frame, Some(game_state.clone()), Some(checksum)); + } + FortressRequest::AdvanceFrame { inputs } => { + for (input, _status) in &inputs { + game_state.apply_input(*input); + } + game_state.advance(); + } + _ => {} + } + } + + // Check for desync events + for event in session.events() { + match event { + FortressEvent::ReplayDesync { + frame, + expected_checksum, + actual_checksum, + } => { + eprintln!( + "DESYNC at frame {}: expected {:#x}, got {:#x}", + frame.as_i32(), + expected_checksum, + actual_checksum + ); + } + _ => {} + } + } +} +``` + +--- + +## API Reference + +### Replay\<C\> + +| Item | Description | +|------|-------------| +| `num_players` | Number of players in the recorded match | +| `frames` | `Vec<Vec<C::Input>>` -- inputs per frame, one entry per player | +| `checksums` | `Vec<Option<u128>>` -- per-frame checksums for validation | +| `metadata` | `ReplayMetadata` -- version, player count, frame count | +| `to_bytes()` | Serialize to bytes (deterministic bincode codec) | +| `from_bytes(&[u8])` | Deserialize from bytes | +| `total_frames()` | Number of
recorded frames | +| `validate()` | Check internal consistency (frames/checksums/metadata) | + +### ReplaySession\<C\> + +| Method | Description | +|--------|-------------| +| `new(replay)` | Standard playback (validates replay on construction) | +| `new_with_validation(replay)` | Playback with per-frame checksum validation | +| `advance_frame()` | Advance one frame, returns requests with recorded inputs. Returns an error if the replay is exhausted (all frames played). | +| `is_complete()` | `true` when all frames have been played | +| `current_frame()` | Current frame number (`Frame::NULL` before first advance) | +| `total_frames()` | Total frames in the replay | +| `is_validating()` | `true` if checksum validation mode is enabled | +| `replay()` | Reference to the underlying `Replay` | +| `events()` | Drain pending events (e.g., `ReplayDesync`) | + +### SessionBuilder Methods + +| Method | Description | +|--------|-------------| +| `with_recording(bool)` | Enable input recording on a P2P session | +| `start_replay_session(replay)` | Create a standard playback session | +| `start_replay_session_with_validation(replay)` | Create a validating playback session | + +### P2PSession Methods + +| Method | Description | +|--------|-------------| +| `is_recording()` | Check if recording is enabled | +| `into_replay()` | Consume the session and extract the `Replay` | +| `take_replay()` | Extract the `Replay` without consuming the session | + +--- + +## Use Cases + +- **Match review** -- Let players rewatch completed matches frame by frame +- **Determinism testing** -- Use validation mode to catch non-determinism bugs across builds or platforms +- **Streaming / broadcasting** -- Send compact replay data to spectators for synchronized playback +- **Bug reproduction** -- Attach replay files to bug reports for exact reproduction +- **Anti-cheat verification** -- Re-simulate matches server-side to verify client-reported results + +--- + +## Common Patterns + +### into_replay vs take_replay + +| Method | Consumes session? | Use when... | +|--------|:-:|---| +| `into_replay()` | Yes | Match is over, session no longer needed | +| `take_replay()` | No | Mid-match auto-save, or extracting replay while session continues | + +> **Note** +> +> `take_replay()` consumes the recorded data. A second call returns an error because the recording has already been taken. +> +### Replay Browser with Metadata + +```rust +use fortress_rollback::replay::{Replay, ReplayMetadata}; + +// MyConfig: your Config impl (see user-guide.md) + +// Store replays with searchable metadata +struct ReplayEntry { + path: std::path::PathBuf, + metadata: ReplayMetadata, +} + +// Load replays from a directory and extract metadata for display +fn list_replays(dir: &std::path::Path) -> Vec<ReplayEntry> { + std::fs::read_dir(dir) + .into_iter() + .flatten() + .filter_map(|entry| { + let path = entry.ok()?.path(); + let bytes = std::fs::read(&path).ok()?; + let replay = Replay::<MyConfig>::from_bytes(&bytes).ok()?; + Some(ReplayEntry { path, metadata: replay.metadata }) + }) + .collect() +} +``` + +--- + +> **Breaking Change** +> +> `FortressEvent::ReplayDesync` is a new enum variant. Since `FortressEvent` is **not** `#[non_exhaustive]`, exhaustive `match` statements must be updated to handle this variant. Add a `FortressEvent::ReplayDesync { .. } => { .. }` arm to all existing matches. +> diff --git a/wiki/Telemetry.md b/wiki/Telemetry.md new file mode 100644 index 00000000..552f10be --- /dev/null +++ b/wiki/Telemetry.md @@ -0,0 +1,340 @@ + +

+ Fortress Rollback

+ +# Session Telemetry + +Monitor P2P session performance with structured telemetry events. Track rollbacks, prediction misses, frame advances, and network stats in real time. + +## Table of Contents + +1. [Architecture](#architecture) +2. [Quick Start](#quick-start) +3. [`SessionTelemetry` Trait](#sessiontelemetry-trait) +4. [`TelemetryEvent` Enum](#telemetryevent-enum) +5. [`CollectingTelemetry` (Built-in)](#collectingtelemetry-built-in) +6. [Custom Telemetry Observer](#custom-telemetry-observer) +7. [Spec Violation Observability](#spec-violation-observability) + - [`ViolationObserver` Trait](#violationobserver-trait) + - [`SpecViolation` Struct](#specviolation-struct) + - [`CollectingObserver`](#collectingobserver) + - [`TracingObserver`](#tracingobserver) + - [`ViolationKind` Variants](#violationkind-variants) + - [`ViolationSeverity` Levels](#violationseverity-levels) +8. [Event Flow](#event-flow) +9. [Use Cases](#use-cases) +10. [Integration Tips](#integration-tips) +11. [See Also](#see-also) + +--- + +## Architecture + +```mermaid +graph TD + P2P["P2PSession"] -->|calls| ST["SessionTelemetry trait"] + + ST --> R["on_rollback"] + ST --> PM["on_prediction_miss"] + ST --> NS["on_network_stats"] + ST --> FA["on_frame_advance"] + + subgraph Built-in Implementations + CT["CollectingTelemetry<br/>testing"] + CU["Custom impl<br/>your own"] + end + + ST -.->|impl| CT + ST -.->|impl| CU +``` + +--- + +## Quick Start + +```rust +use fortress_rollback::telemetry::{CollectingTelemetry, SessionTelemetry}; +use fortress_rollback::SessionBuilder; +use std::sync::Arc; + +// 1. Create a telemetry observer +let telemetry = Arc::new(CollectingTelemetry::new()); + +// 2. Pass to session builder +// MyConfig: your Config impl (see user-guide.md) +let builder = SessionBuilder::<MyConfig>::new() + .with_telemetry(telemetry.clone()); + +// 3. After running the session, inspect events +let rollbacks = telemetry.rollbacks(); +let misses = telemetry.prediction_misses(); +println!("Rollbacks: {}, Prediction misses: {}", rollbacks.len(), misses.len()); +``` + +--- + +## `SessionTelemetry` Trait + +```rust +// With `sync-send` feature enabled: +pub trait SessionTelemetry: Send + Sync { + fn on_rollback(&self, depth: usize, frame: Frame) { /* no-op */ } + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { /* no-op */ } + fn on_network_stats(&self, player: PlayerHandle, stats: &NetworkStats) { /* no-op */ } + fn on_frame_advance(&self, frame: Frame) { /* no-op */ } +} + +// Without `sync-send` feature: +pub trait SessionTelemetry { + // same methods, no Send + Sync bounds +} +``` + +> **Note** +> +> All methods have default no-op implementations. Override only what you need. The `Send + Sync` supertraits are only required when the `sync-send` feature is enabled. +> +| Method | Parameters | When Called | +|--------|-----------|-------------| +| `on_rollback` | `depth: usize`, `frame: Frame` | State was rolled back | +| `on_prediction_miss` | `player: PlayerHandle`, `frame: Frame` | Predicted input was wrong | +| `on_network_stats` | `player: PlayerHandle`, `stats: &NetworkStats` | Network stats polled | +| `on_frame_advance` | `frame: Frame` | Frame advanced | + +--- + +## `TelemetryEvent` Enum + +Each variant captures the arguments from its corresponding trait method.
+ +| Variant | Fields | When | +|---------|--------|------| +| `Rollback` | `depth: usize`, `frame: Frame` | State was rolled back | +| `PredictionMiss` | `player: PlayerHandle`, `frame: Frame` | Predicted input was wrong | +| `NetworkStatsUpdate` | `player: PlayerHandle`, `stats: NetworkStats` | Network stats polled | +| `FrameAdvance` | `frame: Frame` | Frame advanced | + +--- + +## `CollectingTelemetry` (Built-in) + +Thread-safe observer that accumulates all events for later inspection. + +| Method | Returns | +|--------|---------| +| `new()` | Empty collector | +| `events()` | `Vec<TelemetryEvent>` -- all events | +| `rollbacks()` | `Vec<TelemetryEvent>` -- filtered rollback events | +| `prediction_misses()` | `Vec<TelemetryEvent>` -- filtered prediction misses | +| `network_stats_updates()` | `Vec<TelemetryEvent>` -- filtered network stats | +| `frame_advances()` | `Vec<TelemetryEvent>` -- filtered frame advances | +| `len()` | `usize` -- event count | +| `is_empty()` | `bool` -- no events collected? | +| `clear()` | Clear all collected events | + +--- + +## Custom Telemetry Observer + +Implement `SessionTelemetry` for your own metrics system: + +```rust +use fortress_rollback::telemetry::SessionTelemetry; +use fortress_rollback::{Frame, PlayerHandle}; +use fortress_rollback::NetworkStats; +use std::sync::atomic::{AtomicUsize, Ordering}; + +struct MetricsTelemetry { + rollback_count: AtomicUsize, + prediction_miss_count: AtomicUsize, +} + +impl SessionTelemetry for MetricsTelemetry { + fn on_rollback(&self, depth: usize, _frame: Frame) { + self.rollback_count.fetch_add(1, Ordering::Relaxed); + tracing::info!(depth, "rollback occurred"); + } + + fn on_prediction_miss(&self, player: PlayerHandle, frame: Frame) { + self.prediction_miss_count.fetch_add(1, Ordering::Relaxed); + tracing::debug!(%player, %frame, "prediction miss"); + } +} +``` + +--- + +## Spec Violation Observability + +The telemetry module also provides a structured pipeline for specification violations -- internal invariant failures detected at runtime.
+ +```mermaid +graph TD + LIB["Library internals"] -->|report_violation!| VO["ViolationObserver trait"] + + subgraph Implementations + TO["TracingObserver<br/>default, logs via tracing"] + CO["CollectingObserver<br/>testing"] + MO["CompositeObserver<br/>multiple observers"] + end + + VO -.->|impl| TO + VO -.->|impl| CO + VO -.->|impl| MO +``` + +### `ViolationObserver` Trait + +```rust +// With `sync-send` feature enabled: +pub trait ViolationObserver: Send + Sync { + fn on_violation(&self, violation: &SpecViolation); +} + +// Without `sync-send` feature: +pub trait ViolationObserver { + // same method, no Send + Sync bounds +} +``` + +### `SpecViolation` Struct + +Each violation carries structured context: + +| Field | Type | +|-------|------| +| `severity` | `ViolationSeverity` | +| `kind` | `ViolationKind` | +| `message` | `String` | +| `location` | `&'static str` | +| `frame` | `Option<Frame>` | +| `context` | `BTreeMap<String, String>` | + +**Builder methods:** + +| Method | Description | +|--------|-------------| +| `new(severity, kind, message, location)` | Create a new violation | +| `with_frame(frame)` | Attach a frame reference | +| `with_context(key, value)` | Add a key-value context entry | +| `to_json()` | `Option<String>` -- JSON string (requires `json` feature) | +| `to_json_pretty()` | `Option<String>` -- pretty JSON string (requires `json` feature) | + +### `CollectingObserver` + +Thread-safe observer that accumulates all violations for later inspection. + +| Method | Returns | +|--------|---------| +| `new()` | Empty collector | +| `violations()` | `Vec<SpecViolation>` -- all collected violations | +| `len()` | Number of violations | +| `is_empty()` | No violations collected? | +| `has_violation(kind)` | Any violation of this kind? | +| `has_severity(severity)` | Any violation at this severity? | +| `violations_of_kind(kind)` | Filtered by kind | +| `violations_at_severity(min)` | Filtered by minimum severity | +| `clear()` | Remove all collected violations | + +### `TracingObserver` + +Default observer that maps severity levels to tracing log levels: `Warning` → `tracing::warn!`, `Error`/`Critical` → `tracing::error!`. All fields are emitted as structured tracing fields.
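As a standalone model of that severity-to-level rule (an assumed simplification, not the library's actual `TracingObserver` code), the mapping reduces to a single match:

```rust
// Simplified stand-in for the mapping TracingObserver applies; the real
// observer invokes tracing::warn!/tracing::error! with structured fields.
#[derive(Copy, Clone, Debug)]
enum ViolationSeverity {
    Warning,
    Error,
    Critical,
}

fn log_level(severity: ViolationSeverity) -> &'static str {
    match severity {
        ViolationSeverity::Warning => "warn",
        ViolationSeverity::Error | ViolationSeverity::Critical => "error",
    }
}

fn main() {
    assert_eq!(log_level(ViolationSeverity::Warning), "warn");
    assert_eq!(log_level(ViolationSeverity::Error), "error");
    assert_eq!(log_level(ViolationSeverity::Critical), "error");
}
```

Collapsing `Error` and `Critical` onto one log level keeps log volume predictable; the `severity` field itself is still emitted, so the two remain distinguishable downstream.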
### Plugging In + +```rust +use fortress_rollback::telemetry::CollectingObserver; +use fortress_rollback::SessionBuilder; +use std::sync::Arc; + +let observer = Arc::new(CollectingObserver::new()); +// MyConfig: your Config impl (see user-guide.md) +let builder = SessionBuilder::<MyConfig>::new() + .with_violation_observer(observer.clone()); + +// After session operations +assert!(observer.violations().is_empty(), "unexpected violations"); +``` + +### `ViolationKind` Variants + +| Variant | Description | +|---------|-------------| +| `FrameSync` | Frame synchronization invariant violated | +| `InputQueue` | Input queue invariant violated | +| `StateManagement` | State save/load invariant violated | +| `NetworkProtocol` | Network protocol invariant violated | +| `ChecksumMismatch` | Checksum or desync detection issue | +| `Configuration` | Configuration constraint violated | +| `InternalError` | Internal logic error (library bug) | +| `Invariant` | Runtime invariant check failed | +| `Synchronization` | Sync protocol issues | +| `ArithmeticOverflow` | Arithmetic overflow detected | + +### `ViolationSeverity` Levels + +| Level | Meaning | +|-------|---------| +| `Warning` | Unexpected but recoverable -- operation continued with fallback | +| `Error` | Serious issue -- operation may have degraded behavior | +| `Critical` | Critical invariant broken -- state may be corrupted | + +--- + +## Event Flow + +```mermaid +sequenceDiagram + participant Game as Game Loop + participant Session as P2PSession + participant Telemetry as SessionTelemetry + + Game->>Session: advance_frame() + Session->>Telemetry: on_frame_advance(frame) + + Note over Session: Remote input arrives late + Session->>Telemetry: on_prediction_miss(player, frame) + Session->>Telemetry: on_rollback(depth, target_frame) + + Note over Session: Re-simulate frames + loop For each re-simulated frame + Session->>Telemetry: on_frame_advance(frame) + end + + Game->>Session: network_stats(player) + Session->>Telemetry:
on_network_stats(player, stats) +``` + +--- + +## Use Cases + +- **Performance monitoring** -- Track rollback frequency and prediction accuracy over time +- **Network quality dashboards** -- Aggregate `NetworkStatsUpdate` events per peer +- **Automated testing assertions** -- Use `CollectingTelemetry` to assert rollback counts, prediction accuracy +- **Debug overlays** -- Display rollback count, ping, and frame advantage in a HUD + +--- + +## Integration Tips + +> **Performance** +> +> Keep observer callbacks fast -- they run inline during frame processing. Offload heavy work (file I/O, network sends) to a background thread. +> +> **Tip** +> +> Use `Arc<CollectingTelemetry>` for testing, a custom `SessionTelemetry` impl for production. +> +> **Safety** +> +> Both `SessionTelemetry` and `ViolationObserver` require `Send + Sync` when the `sync-send` feature is enabled. The `sync-send` feature is not a default feature and must be explicitly opted into. All built-in implementations are thread-safe regardless of feature flags. +> +--- + +## See Also + +- [User Guide](User-Guide) -- integrating Fortress Rollback into your game +- [Architecture Overview](Architecture) -- system design and module structure diff --git a/wiki/User-Guide.md b/wiki/User-Guide.md index 1dc0449d..068b6faf 100644 --- a/wiki/User-Guide.md +++ b/wiki/User-Guide.md @@ -67,7 +67,7 @@ use serde::{Deserialize, Serialize}; use std::net::SocketAddr; // 1.
Define your input type -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] struct MyInput { buttons: u8, } @@ -164,7 +164,7 @@ use std::net::SocketAddr; // Your input type - sent over the network #[repr(C)] -#[derive(Copy, Clone, PartialEq, Default, Serialize, Deserialize)] +#[derive(Copy, Clone, PartialEq, Eq, Default, Serialize, Deserialize)] pub struct GameInput { pub buttons: u8, pub stick_x: i8, @@ -193,7 +193,7 @@ impl Config for GameConfig { Your input type must: -- Be `Copy + Clone + PartialEq` +- Be `Copy + Clone + PartialEq + Eq` - Implement `Default` (used for disconnected players) - Implement `Serialize + Deserialize` (for network transmission) @@ -1888,7 +1888,7 @@ fortress-rollback = { version = "0.6", features = ["sync-send"] } ```rust pub trait Config: 'static { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned; type State; type Address: Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Debug; } @@ -1898,7 +1898,7 @@ pub trait Config: 'static { ```rust pub trait Config: 'static + Send + Sync { - type Input: Copy + Clone + PartialEq + Default + Serialize + DeserializeOwned + Send + Sync; + type Input: Copy + Clone + PartialEq + Eq + Default + Serialize + DeserializeOwned + Send + Sync; type State: Clone + Send + Sync; type Address: Clone + PartialEq + Eq + PartialOrd + Ord + Hash + Send + Sync + Debug; } diff --git a/wiki/_Sidebar.md b/wiki/_Sidebar.md index 93b352a5..25275119 100644 --- a/wiki/_Sidebar.md +++ b/wiki/_Sidebar.md @@ -5,6 +5,8 @@ ## Documentation - [User Guide](User-Guide) +- [Match Replay](Replay) +- [Session Telemetry](Telemetry) - [Architecture](Architecture) - [Migration](Migration)