I'm not clear on what's actually taking the additional time during timeseries loading.
Example: processing a 6 kB GPX file with 53 points:
```
Starting gopro-dashboard version 0.132.0
ffmpeg version is 6.1.1-3ubuntu5
Using Python version 3.14.3 (main, Feb 4 2026, 09:28:29) [GCC 13.3.0]
GPX/FIT file: 2026-03-12T18:21:46+00:00 -> 2026-03-12T18:48:16+00:00
Video File Dates: 2026-03-12T18:20:10+00:00 -> 2026-03-12T18:49:07+00:00
Timer(loading timeseries - Called: 1, Total: 2.25366, Avg: 2.25366, Rate: 0.44)
Generating overlay at Dimension(x=3840, y=2160)
Timeseries has 15901 data points
Processing....
```
Passing a larger GPX file that contains those same points (16 kB total, 133 points) results in a multi-minute, high-CPU delay before video processing starts:
```
Starting gopro-dashboard version 0.132.0
ffmpeg version is 6.1.1-3ubuntu5
Using Python version 3.14.3 (main, Feb 4 2026, 09:28:29) [GCC 13.3.0]
GPX/FIT file: 2026-03-08T13:43:02+00:00 -> 2026-03-14T16:13:47+00:00
Video File Dates: 2026-03-12T18:20:10+00:00 -> 2026-03-12T18:49:07+00:00
Timer(loading timeseries - Called: 1, Total: 415.46367, Avg: 415.46367, Rate: 0.00)
Generating overlay at Dimension(x=3840, y=2160)
Timeseries has 17371 data points
Processing....
```
If I'm reading this right, it happens because the entire GPX duration is iterated in 0.1-second steps while filtering. It can be reproduced by adding a single data point a day or so before the rest of the GPX data, which causes roughly 800k extra loop iterations per extra day of span.
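As a rough sanity check on the scale (assuming a fixed 0.1 s step, which is my reading of the behaviour): the GPX span in the slow run above covers about six days, while the video itself is about 29 minutes:

```python
from datetime import datetime, timezone

# Timestamps taken from the slow run's log output above
gpx_start = datetime(2026, 3, 8, 13, 43, 2, tzinfo=timezone.utc)
gpx_end = datetime(2026, 3, 14, 16, 13, 47, tzinfo=timezone.utc)
video_start = datetime(2026, 3, 12, 18, 20, 10, tzinfo=timezone.utc)
video_end = datetime(2026, 3, 12, 18, 49, 7, tzinfo=timezone.utc)

gpx_steps = (gpx_end - gpx_start).total_seconds() / 0.1
video_steps = (video_end - video_start).total_seconds() / 0.1
# gpx_steps is ~5.27 million, vs ~17,370 for the video window alone,
# which lines up with the "17371 data points" reported in the slow run.
```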
Instead, could the GPX points be filtered against the video start/stop times before stepping through them by time? To keep interpolation working, the data point immediately before and the one immediately after the video's range could be retained, and the timestep iteration could then begin at the video start (at the appropriate position between the two surrounding data points).
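A minimal sketch of the filtering step I have in mind (a hypothetical helper, not the project's actual API; the real timeseries data structures will differ):

```python
from datetime import datetime, timedelta, timezone

def clip_points(points, video_start, video_end):
    """Trim a time-sorted list of (timestamp, data) tuples to the video
    window, keeping one extra point on each side so values at the window
    edges can still be interpolated."""
    first = 0
    for i, (ts, _) in enumerate(points):
        if ts >= video_start:
            first = max(i - 1, 0)  # keep one point before the window
            break
    last = len(points)
    for i, (ts, _) in enumerate(points):
        if ts > video_end:
            last = min(i + 1, len(points))  # keep one point after it
            break
    return points[first:last]

# Hypothetical data: two stray points well before the recording, then
# 53 points spaced 30 s apart inside the ~29-minute video window.
start = datetime(2026, 3, 12, 18, 20, 10, tzinfo=timezone.utc)
end = datetime(2026, 3, 12, 18, 49, 7, tzinfo=timezone.utc)
points = [(start - timedelta(days=2), "stray1"),
          (start - timedelta(days=1), "stray2")] + [
    (start + timedelta(seconds=30 * i), f"p{i}") for i in range(53)]

clipped = clip_points(points, start, end)
# "stray1" is dropped; "stray2" survives as the interpolation
# anchor just before the video starts.
```

With the points clipped first, the 0.1 s iteration only ever covers the video duration, regardless of how far the GPX data extends on either side.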