Here's a short openpilot bug clip with a dash of Big Altima Energy:
replicate-prediction-3iy65fzbw3bygdqvuid3duganu.mp4
Capture and develop clips of openpilot from comma.ai's Comma Connect.
The clipper can produce clips of:
- comma.ai openpilot UI (including desired path, lane lines, modes, etc.)
- Route metadata is branded into the clip for debugging and reporting, including the route ID, platform, git remote, branch, commit, dirty state, and a running route timer. Useful for posting clips in the comma.ai Discord's #driving-feedback and/or #openpilot-experience channels, Reddit, Facebook, or anywhere else that takes video. Very useful for making outstanding bug reports as well as giving feedback on good behavior.
- `ui-alt`, a telemetry-present alternate UI render family with explicit compositions:
  - `device` keeps one main camera view and adds telemetry alongside it
  - `stacked_forward_over_wide` shows the forward/road view above the wide view
  - `stacked_wide_over_forward` shows the wide view above the forward/road view
- `driver-debug`, a driver camera replay/debug layout. Replays the driver camera without the normal mirror effect, draws a coarse driver-face box estimate, and adds a large telemetry footer with driver monitoring state, awareness, distraction, pose/model values, and route/git metadata. Useful for debugging DM behavior and building better DM bug reports.
- Forward, Wide, and Driver Camera with no UI
- Concatenate, cut, and convert the raw, low-compatibility, separated HEVC files into one fairly compatible HEVC MP4 or a highly compatible H.264 MP4 for easy sharing.
- 360 Video
- Rendered from Wide and Driver Camera. Uploadable to YouTube, viewable in VLC, loadable in 360 video editing software such as Insta360 Studio or even the Insta360 mobile app, and accepted by any video players or web services that take 360 videos.
- Forward Upon Wide and 360 Forward Upon Wide
- Forward video is automatically projected onto the wide video using logged camera calibration. Not perfect, but much better aligned than the old manual overlay.
- 360 Forward Upon Wide scales and renders the final result at a higher resolution to assist in reframing the 360 video to a normal video if that's what you want.
All clip types have a configurable target file size, since platforms like Discord limit file upload sizes.
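Under the hood, a size target like that boils down to picking a video bitrate from the clip duration. A minimal sketch, assuming a flat overhead reservation for audio and container metadata (the 5% figure is illustrative, not the clipper's actual number):

```python
def target_bitrate_kbps(target_mb: float, duration_s: float, overhead: float = 0.05) -> int:
    """Rough video bitrate needed to land a clip under a size budget.

    overhead reserves a fraction of the budget for audio/container
    metadata; the 5% default is an illustrative assumption.
    """
    budget_bits = target_mb * 8_000_000 * (1.0 - overhead)
    return int(budget_bits / duration_s / 1000)

# e.g. a 9 MB Discord-friendly budget for a 60-second clip
print(target_bitrate_kbps(9, 60))  # prints 1140
```

Longer selections therefore get visibly lower bitrates at the same size target, which is another reason to keep clips in the 20-second-to-a-minute range.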
The clipper is deployed on Replicate:
https://replicate.com/nelsonjchen/op-replay-clipper
Replicate is an ultra-low-cost, pay-as-you-go compute platform for running software jobs. Replicate is a great way to run this clipper: it's fast, easy to use, and you don't need to install anything on your computer or deploy anything yourself. Just enter the required information into the form, and Replicate will generate a clip. Expect to pay about $0.01 per clip, and you won't even need to enter payment details until you've reached a generously large level of usage.
On Replicate and cog predict, the route input is now URL-only. The clip timing comes from the connect.comma.ai URL itself, so there are no separate startSeconds or lengthSeconds inputs anymore.
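Those URLs split cleanly on path segments. As a rough illustration (the field names here are my own, not the clipper's actual parser):

```python
from urllib.parse import urlparse

def parse_connect_url(url: str) -> dict:
    """Split a connect.comma.ai clip URL into its path components.

    Works for both the old millisecond-timestamp form and the newer
    route-relative-seconds form; the field names are illustrative.
    """
    dongle_id, route_id, start, end = urlparse(url).path.strip("/").split("/")
    return {
        "dongle_id": dongle_id,
        "route_id": route_id,
        "start": int(start),  # ms since epoch (old form) or seconds into the route (new form)
        "end": int(end),
    }

print(parse_connect_url(
    "https://connect.comma.ai/fe18f736cb0d7813/000001bb--4c0c0efba9/21/90"
))
```

The last two path segments are exactly the start and end times that connect updates in the address bar as you drag the selection.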
Warning
comma devices should not be used as primary dashcams for numerous reasons!
They are still great as a backup dashcam, for running openpilot, and for other purposes, though.
- Route - A drive recorded by openpilot. Generally from Ignition On to Ignition Off.
- A comma.ai device that can upload to comma connect.
- A free GitHub account to log into Replicate with.
- A comma lite or prime subscription.
- Clipping was a comma connect prime-only feature but was removed for refurbishment. This is a free and open source tool to do the same.
We assume you've already paired your device and have access to the device with your comma connect account.
- Visit comma connect and select a route.
- Scrub to the time you want to clip.
- Select the portion of the route you want to clip. Here's a video of what that UI looks like:
- See how I drag and select a portion.
- You can see me make a mistake, but pressing the left arrow (←) in the top-left corner lets me re-expand and try to trim again.
- The clipper has a maximum length of 5 minutes. Try to select a portion that's less than that, and aim for 20 seconds to a minute, as everybody else has short attention spans.
- Video: scrubdown.mp4
- Once satisfied with the selected portion, prepare the route and files for rendering.
- Make sure all files are uploaded. Select "Upload All" under the "Files" dropdown if you haven't already, and make sure it says uploaded. You may need to wait, and your device may need to be on for a while, for all files to upload.
- Make sure the route has "Public access" under "More info" turned on. You can turn this off after you're done making clips.
- Copy the URL in the address bar of your browser to your clipboard. This is not the segment ID underneath the More Info button. In the case above, I've copied an old URL of "https://connect.comma.ai/fe18f736cb0d7813/1698203405863/1698203460702" to my clipboard.
- Note: comma has changed the URL format since this step/guide was originally written. Current URLs are like "https://connect.comma.ai/fe18f736cb0d7813/000001bb--4c0c0efba9/21/90". It has a dongle ID, a new route designator format, and the time is relative to the route itself.
- When you were adjusting the selected portion of the route in a previous step, it was changing those last two numbers in the browser address bar URL, which are the start time and end time, respectively.
- The "Share This Route" button, if present, will work too. Choose "Copy to clipboard" or similar.
- Visit https://replicate.com/nelsonjchen/op-replay-clipper
- Under `route`, paste the URL you copied in the previous step.
- Tweak any settings you like.
- Press `Run`.
- Wait for the clip to render. It may take a few minutes.
- Once done, you can download the clip. If you want, turn off "Public access" on the route after you're done.
- Here's a generated clip with the `wide` rendering type with no UI: cog-clip.1.mp4
- If you have issues downloading the clip with the "Download" button in Replicate's UI, click the vertical ellipsis button (or whatever your browser shows for the video) in the lower-right corner of the video and download via that. This is a strange issue in Replicate's UI that this clipper can't do anything about.
- You can reupload this file to Discord. Be aware of Discord's file size limits: Discord free users should target 9 MB file sizes when rendering to slip in under the 10 MB limit.
UI rendering works by actually running the openpilot UI on the remote server, feeding it recorded route data, and then recording the rendered output.
Unfortunately, the openpilot UI sometimes tracks state, so past data may need to be sent to get the UI into the correct state at the beginning of the clip. We need to smear the start.
Lack of or insufficient smearing can cause:
- No lead car marker (for openpilot longitudinal)
- Desired path coloring showing green when openpilot actually had the gas suppressed by gating.
Both can be important in describing what happened.
One way to describe this issue is with a movie set analogy. Let's say you are a director and you want a shot where the actor is already running. You would say "lights", roll the "camera", and then yell "ACTION!". In post, the editor would not include the clapboard, the director yelling "ACTION!", or the actor starting to run; they would splice the film where the actor is already running in stride.
The smear point is where the clipper does its cutting after "ACTION!". It's the number of seconds before the clip is meant to start. The clipper "production crew" starts recording immediately (CAMERA) once the data has started to be sent (LIGHTS), but a later "editor" cuts the intermediate clip some "smear" seconds in as the actual beginning and returns that to you.
Because of this, you may need an additional minute of video and data uploaded before the clip's start point for UI renders. If you get segments-not-uploaded errors, adjust the quick usage steps above accordingly by selecting a minute before your desired start point and uploading that data as well.
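In rough pseudocode terms, the smear shifts the data-feed window earlier while the final cut stays where you asked. The 60-second default below is an illustrative assumption, not the clipper's exact value:

```python
def smear_window(start_s: int, end_s: int, smear_s: int = 60) -> tuple[int, int, int]:
    """Return (feed_start, feed_end, cut_offset).

    Data is fed from smear_s seconds before the requested start so the UI
    can warm up its state; the final cut then drops the first cut_offset
    seconds of the intermediate recording. The 60 s default is illustrative.
    """
    feed_start = max(0, start_s - smear_s)
    cut_offset = start_s - feed_start
    return feed_start, end_s, cut_offset

print(smear_window(90, 120))  # feed from 30 s, then cut the first 60 s of the render
```

This is also why the segments before your selected start need to be uploaded: the "production crew" consumes them even though the "editor" throws them away.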
Demonstrating the speed or longitudinal behavior of openpilot with model-based longitudinal control is hard, if not nearly impossible, without this clipper. This video shows good model-based longitudinal behavior at highway speeds.
ae_driving_highway.mp4
Cars can have bugs themselves. Here's my 2020 Corolla Hatchback phantom-braking on metal strips in stop-and-go traffic, probably because of the radar. Perhaps a future openpilot that doesn't depend on radar might be the one sanity-checking the radar, instead of the other way around as it is currently. And here's another example of that in Portland.
phantom-radar-braking.mp4
funky-ramp-redux.mp4
This is a video of a bug report where openpilot's lateral handling lost the lane.
lost-line.mp4
Lane cutting?
cutty-uphills.mp4
Nav-assisted driving following the road instead of taking the side road.
2023-06-27--15-58-05--14_nav_stays_left_woooo.mp4
Copying the car in front to get around someone waiting for the left turn
interesting-assisted-right-pass.mp4
Search this README for the 360 stuff! It's pretty cool.
20241113_202627_206-00.00.00.000-00.00.13.893.mp4
- The UI replayed is comma.ai's latest stock UI on their master branch; routes from forks that differ a lot from stock may not render correctly. Your experience may and will vary. Please make sure to note when these replays are from fork data, as they may not be representative of stock behavior. The comma team really does not like being asked to debug fork code, as "it just takes too much time to be sidetracked by hidden and unclear changes".
Learn how to bookmark, preserve, and flag interesting points on a drive/route.
Preservation saves the last couple segments from being deleted on your device as well.
With the car on, within a minute after an incident when it is safe to do so:
- Tap the screen to reveal a bookmark flag button in the bottom left if it isn't there already.
- Tap that icon.
- This will result in small slivers of yellow in the timeline you can quickly hone in on.
- You should also set the route to preserve under "More info" while you're working on it. Non-comma-prime users especially need to heed this: while files aren't deleted on the device, visibility in and through comma connect sunsets after 3 days.
- While honing in on the start and end boundaries of the clip during clipper usage, the upper bound of your clip will nearly always be at that yellow sliver, so your first or early drags should top out there. Be generous with the start time before the yellow.
Tip
If you find it a hassle to reach out and touch the device, or it is too inconvenient, try installing a custom macropad like the 🦾 comma three Faux-Touch keyboard!
Use clip.py as the primary local entrypoint for cheap validation on macOS or Linux before paying for GCE runs.
Repo layout:
- repo root: user-facing entrypoints such as `clip.py`, `cog_predictor.py`, and `replicate_run.py`
- `core/`: shared runtime modules for orchestration, route inputs, downloading, integration, and bootstrap
- `renderers/`: UI and video renderer implementations
- `cog/` and `common/`: build/bootstrap helpers for Cog and image setup
BIG UI is the supported UI target.
If you want the detailed background on the repo-owned BIG UI engine, runtime patches, and the headless acceleration path, see docs/runtime-patching-and-ui-rendering.md. If you want the inventory of upstream/openpilot/Cog modifications that this repo currently depends on, see docs/upstream-modifications.md. For a milestone-oriented history of how the project got here, see CHANGELOG.md. For a concrete pre-promotion smoke checklist, see docs/prod-readiness-checklist.md.
Examples:
uv sync
uv run python clip.py ui "https://connect.comma.ai/<dongle>/<route>/<start>/<end>"
uv run python clip.py ui-alt "https://connect.comma.ai/<dongle>/<route>/<start>/<end>"
uv run python clip.py ui-alt "https://connect.comma.ai/<dongle>/<route>/<start>/<end>" --ui-alt-variant device
uv run python clip.py ui-alt "https://connect.comma.ai/<dongle>/<route>/<start>/<end>" --ui-alt-variant stacked_wide_over_forward
uv run python clip.py driver-debug "https://connect.comma.ai/<dongle>/<route>/<start>/<end>"
uv run python clip.py forward "a2a0ccea32023010|2023-07-27--13-01-19" --demo

Driver backing-video face anonymization:
uv run python clip.py driver --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_face_swap_passenger_hidden \
--passenger-redaction-style blur \
--driver-face-source-image ./assets/driver-face-donors/generic-donor-clean-shaven.jpg \
--driver-face-preset fast \
--output ./shared/driver-facefusion.mp4
uv run python clip.py driver --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_face_swap_passenger_hidden \
--passenger-redaction-style silhouette \
--driver-face-selection auto_best_match \
--driver-face-donor-bank-dir ./assets/driver-face-donors \
--driver-face-preset fast \
--output ./shared/driver-facefusion-silhouette.mp4
uv run python clip.py driver --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_face_swap_passenger_hidden \
--passenger-redaction-style black_silhouette \
--driver-face-selection auto_best_match \
--driver-face-donor-bank-dir ./assets/driver-face-donors \
--driver-face-preset fast \
--output ./shared/driver-facefusion-black-silhouette.mp4
uv run python clip.py driver --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_face_swap_passenger_hidden \
--passenger-redaction-style ir_tint \
--driver-face-selection auto_best_match \
--driver-face-donor-bank-dir ./assets/driver-face-donors \
--driver-face-preset fast \
--output ./shared/driver-facefusion-ir-tint.mp4
uv run python clip.py driver-debug --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_unchanged_passenger_hidden \
--passenger-redaction-style blur \
--driver-face-source-image ./assets/driver-face-donors/generic-donor-clean-shaven.jpg \
--driver-face-preset fast \
--output ./shared/driver-debug-facefusion.mp4
uv run python clip.py 360 --demo --length-seconds 20 \
--driver-face-anonymization facefusion \
--driver-face-profile driver_face_swap_passenger_hidden \
--passenger-redaction-style blur \
--driver-face-selection auto_best_match \
--driver-face-donor-bank-dir ./assets/driver-face-donors \
--driver-face-preset fast \
--output ./shared/driver-360-facefusion.mp4

Tiny RF-DETR-only repro:
uv sync
./scripts/smoke_rf_detr_repro.sh --backend local-cli
./scripts/smoke_rf_detr_repro.sh --backend local-cog
./cog/render_artifacts.sh
cog push --file cog-rfdetr-repro.yaml r8.im/nelsonjchen/op-replay-clipper-rfdetr-repro-beta
uv run python rf_detr_repro_run.py \
--model 'nelsonjchen/op-replay-clipper-rfdetr-repro-beta:<version>' \
--input ./shared/rf-detr-repro-inputs/tiny-clip.mp4 \
--output ./shared/rf-detr-repro-hosted-artifacts.zip

The bundled donor bank lives in `assets/driver-face-donors`. It currently keeps full light/medium/dark tone coverage for masculine donors, while the active feminine bank is intentionally limited to younger light/medium donors plus a feminine clean-shaven fallback, with additional masculine glasses/beard variants. To regenerate the checked-in bank with Runware FLUX Kontext, use:
export RUNWARE_API_KEY=...
./.cache/facefusion/.venv/bin/python tools/generate_driver_face_donor_bank.py --skip-existing

BIG UI smoke test:
uv run python clip.py ui --demo --qcam --length-seconds 2 --output ./shared/demo-big-ui-clip.mp4

UI variant matrix smoke test:
./scripts/smoke_ui_alt_matrix.sh

Exact-sync BIG UI smoke test:
make ui-exact-smoke

Driver debug smoke test:
uv run python clip.py driver-debug --demo --length-seconds 2 --output ./shared/demo-driver-debug-clip.mp4

`driver-debug` is the DM-focused openpilot render. It replays the driver camera through openpilot's driver camera dialog, keeps the camera unmirrored, draws the repo-owned face box estimate, and renders a footer with awareness, distraction, model timing, pose, and route/build metadata.
Notes:
- `clip.py` is the primary local CLI for UI and non-UI renders
- `driver-debug` is an openpilot-backed render type like `ui` and `ui-alt`, but it only needs `dcameras` and `logs`
- `driver`, `driver-debug`, `360`, and `360_forward_upon_wide` can optionally anonymize the backing driver video with `--driver-face-anonymization facefusion`
- `--driver-face-profile` controls who is swapped versus hidden: `driver_unchanged_passenger_hidden`, `driver_unchanged_passenger_face_swap`, `driver_face_swap_passenger_hidden`, and `driver_face_swap_passenger_face_swap`
- `--passenger-redaction-style` controls how hidden passengers are rendered and supports `blur`, `silhouette` (white), `black_silhouette`, and `ir_tint`
- Old `...passenger_pixelize` profile slugs are still accepted as compatibility aliases, but they now map to hidden-passenger + `blur`
- `ir_tint` is a stylized night-camera-inspired burgundy treatment, not a literal infrared reconstruction
- That anonymization path reuses the repo-owned DM face track, uses FaceFusion for swapped seats, and uses the shared RF-DETR full-body redaction path for hidden passengers before the final driver-video render
- Every anonymized output now burns a bright mode-specific banner into the driver video, for example `PASSENGER BLURRED`, `PASSENGER SILHOUETTED`, or `DRIVER SWAPPED, PASSENGER BLURRED`, so viewers can tell what was actually changed
- `--driver-face-preset fast` is the practical default for short clips, while `quality` trades more time for cleaner masking and higher-resolution swapping
- `--driver-face-selection auto_best_match` runs a short same-tone donor search against the donor bank, writes a `<output>.driver-face-selection.json` sidecar report, then uses the selected donor for the final swap
- Driver-backed anonymization also needs `logs`, because the face crop is driven by driver-monitoring telemetry rather than a fresh detector pass
- `driver-debug` uses the same hidden preroll/cut behavior as the UI renderers, so the visible clip starts after the DM state has initialized
- BIG UI renders now use a repo-owned exact-frame runner instead of the old coarse 20 Hz chunk mapping, so lane lines and path overlays stay aligned to the logged road camera frames
- The BIG UI renderer also does a hidden 1-second warmup before recording, so the visible clip starts with initialized video/UI state instead of a blank opening
- BIG UI units are auto-detected from the route's logged `IsMetric` param when present, and otherwise default to imperial
- `pyproject.toml` declares compatible dependency ranges and `uv.lock` pins the exact resolved environment
- `uv sync` bootstraps the local Python environment used by the local CLI
- On macOS it prefers a local acceleration policy for ffmpeg-based renders where available
- It clones/updates `openpilot` into `./.cache/openpilot-local` for openpilot-backed renders such as `ui`, `ui-alt`, and `driver-debug`
- `--openpilot-repo-url` lets you point local bootstrap at an SSH remote if you want to reuse Git agent forwarding or a closer mirror
- It runs `uv sync --frozen --all-extras` and builds the native modules needed by the repo-owned BIG UI exact-frame runner
- On macOS it applies the same `OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` workaround used by upstream `tools/install_python_dependencies.sh`
- `uv run pytest` runs the local refactor tests
- `./cog/render_artifacts.sh` exports `requirements-cog.txt` from `uv.lock` for Cog, so the local and Cog dependency sets stay aligned
- `cog.yaml` and `requirements-cog.txt` are generated artifacts and are intentionally not committed
Use driver_face_eval.py for local-only benchmark prep when you want clean
driver clips plus a DM-guided face-track crop for trying anonymization or
face-replacement approaches against real comma driver-camera footage.
The built-in seed set currently materializes:
- `mici-baseline`
- `tici-baseline`
- `tici-occlusion`
Outputs for each sample land under ./shared/driver-face-eval/<sample-id>/:
- `driver-source.mp4` - clean full-frame driver clip
- `face-crop.mp4` - square DM-guided crop clip resized for model input
- `face-track.json` - per-frame ROI sidecar with telemetry and crop geometry
- `evaluation.md` - scoring template for candidate methods
- `driver-debug-analysis.mp4` - optional debug/analysis render
Materialize the default seed set:
uv run python driver_face_eval.py seed-set

Include a driver-debug analysis clip for the same samples:
uv run python driver_face_eval.py seed-set --include-driver-debug

Materialize one custom sample:
uv run python driver_face_eval.py sample my-sample \
'https://connect.comma.ai/<dongle>/<route>/<start>/<end>' \
--start-seconds 90 \
--length-seconds 2

You can also run the hosted Replicate model from this repo with the Python client and a local `.env`.
- Put your API token in `.env`:
REPLICATE_API_TOKEN=...
- Sync the uv environment:
uv sync
- Run a hosted prediction and save the returned file locally:
uv run python replicate_run.py \
--url 'https://connect.comma.ai/a2a0ccea32023010/1690488131496/1690488136496' \
--render-type driver-debug \
--output ./shared/replicate-run-driver-debug.mp4

Notes:
- `replicate_run.py` uses the hosted Replicate model version, not a local Cog/container run
- pass `--model <owner>/<model>:<version>` to target a specific hosted Replicate model version during smoke tests
- the script loads `REPLICATE_API_TOKEN` from `.env` via `python-dotenv`
- it prints the remote file URL when Replicate returns one, then writes the file to the path you passed with `--output`
- the hosted helper now takes a full `connect.comma.ai` clip URL and does not expose separate `start-seconds` or `length-seconds` flags
- `.env` is ignored by git; `.env.example` is the committed placeholder
This repo now assumes stock cog 0.17.2+ for Replicate deploys.
Upstream Cog fixed the earlier raw-URL coercion regression for plain str
inputs, so hosted model versions can once again accept normal
https://connect.comma.ai/... route URLs without a custom patched runtime.
The local parser still accepts `literal:https://...` as a backwards-compatible input form, but it is no longer the recommended deploy or smoke-test path.
For the full current deploy flow, including staging pushes, production pushes, and post-promotion verification, see docs/deploying-to-replicate.md.
There is a JWT Token input field for users who do not wish to set a route to "Public access". There is a major catch, though: the JWT token is valid for 90 days and cannot be revoked in any way. Password changes from SSO account logins, like in comma connect, will not invalidate the token. Additionally, it is not granular, meaning it will give access to all routes for the user if leaked.
If you share a JWT Token with anyone, they will be able to access all your routes for 90 days with no possibility of revocation from you. This is why it's not recommended to use this feature unless you know what you're doing compared to the "Public access" method which is much easier to revoke access to.
Tokens can be obtained by visiting https://jwt.comma.ai/ and logging in with the same comma connect account. Tokens should be about 181 characters or longer.
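If you want a quick local sanity check before pasting a token anywhere, here's a minimal sketch (the 181-character threshold comes from the note above; the three-segment shape is just the standard JWT structure):

```python
def looks_like_comma_jwt(token: str) -> bool:
    """Heuristic check that a pasted string resembles a comma JWT.

    JWTs are three base64url segments joined by dots; per the note above,
    comma tokens should be about 181 characters or longer.
    """
    token = token.strip()
    parts = token.split(".")
    return len(parts) == 3 and all(parts) and len(token) >= 181

print(looks_like_comma_jwt("a.b.c"))  # far too short -> prints False
```

This only catches paste mistakes; it obviously cannot tell you whether the token is valid or who it belongs to.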
After you run something, just use your browser to "Duplicate" the tab, change the settings for the next thing, and press Run. Replicate will queue up jobs and if necessary, even scale up to run multiple jobs in parallel. Very cool!
360 videos are cool but sometimes you want a normal video pointing at a specific direction or directions from that data.
reframed-bright.mp4
With 360 videos, it is possible to reframe the footage into a normal, non-360 video pointing in a specific direction.
The best current way to do this is to use a 360 video editor like Insta360 Studio. Simply load the 360 video into the editor and reframe it toward the desired direction. A more thorough description of this functionality can be found on their site.
The Insta360 mobile apps also allow using the phone's movement and swipes for a more natural reframing. That is also described on their site.
20241113_202627_206-00.00.00.000-00.00.13.893.mp4
There may be alternative software that'll do it and I will take pull requests to add them to this README, but this is the best way I know how to do it and it is free.
The 360 Forward Upon Wide rendering option scales input videos and renders the final result in a much higher 8K resolution to assist reframing with a high resolution forward video. The normal 360 option just glues the videos together.
If you want to use 360 Forward Upon Wide, test with the non-360 Forward Upon Wide option first so you can quickly sanity-check the route's automatic camera alignment before paying for the larger 360 output.
The real MVP is @deanlee for the replay tool in the openpilot project. The level of effort to develop the replay tool is far beyond this project. This tool builds on that replay work to make clipping videos practical.
https://github.com/commaai/openpilot/blame/master/tools/replay/main.cc
Many of the FFmpeg commands are based on @ntegan1's research and documentation, including a small disclosure of some (but not all) details by @incognitojam while @incognitojam was at comma.
https://discord.com/channels/469524606043160576/819046761287909446/1068406169317675078
@morrislee provided original data suitable to try to reverse engineer 360 clips.