Run with calibrated videos #56

@Jianhao-zheng

Description

Hi, thank you for sharing this amazing work!

I want to run ViPE on a SLAM benchmark dataset that provides calibrated camera intrinsics. I made a few small modifications so it runs with known intrinsics, and I would appreciate it if you could confirm my implementation is sufficient.

1. I added the following parameters to `configs/slam/default.yaml`:

```yaml
cam:
  fx: ...
  fy: ...
  cx: ...
  cy: ...
```
2. In this line:

```python
init_processors.append(GeoCalibIntrinsicsProcessor(video_stream, camera_type=self.camera_type))
```

the added camera intrinsic parameters (`self.slam_cfg.cam`) are passed as an additional argument to `GeoCalibIntrinsicsProcessor`, and the parameter dict is stored as its attribute `self.cam_cfg`.
3. I further modified the `__call__` function here:

```python
def __call__(self, frame_idx: int, frame: VideoFrame) -> VideoFrame:
```

to:

```python
def __call__(self, frame_idx: int, frame: VideoFrame) -> VideoFrame:
    frame.intrinsics = torch.as_tensor(
        [self.cam_cfg.fx, self.cam_cfg.fy, self.cam_cfg.cx, self.cam_cfg.cy]
    ).float()
    frame.camera_type = self.camera_type
    return frame
```
4. Set the parameter `pipeline.slam.optimize_intrinsics` to `false`.
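For context, here is a minimal, self-contained sketch of the modified processor from steps 2–3. `CamConfig`, `VideoFrame`, and `FixedIntrinsicsProcessor` are simplified stand-ins, not ViPE's actual classes; the real code stores `frame.intrinsics` as a torch tensor via `torch.as_tensor([...]).float()`, while this sketch uses a plain tuple to stay dependency-free:

```python
from dataclasses import dataclass

@dataclass
class CamConfig:
    # Pinhole intrinsics: focal lengths and principal point, in pixels.
    fx: float
    fy: float
    cx: float
    cy: float

@dataclass
class VideoFrame:
    # Stand-in for ViPE's VideoFrame; only the fields touched here.
    intrinsics: tuple = None
    camera_type: str = None

class FixedIntrinsicsProcessor:
    """Hypothetical replacement for GeoCalibIntrinsicsProcessor that
    stamps every frame with fixed, pre-calibrated intrinsics instead
    of estimating them."""

    def __init__(self, cam_cfg: CamConfig, camera_type: str = "pinhole"):
        self.cam_cfg = cam_cfg
        self.camera_type = camera_type

    def __call__(self, frame_idx: int, frame: VideoFrame) -> VideoFrame:
        # In ViPE this would be a float torch tensor.
        frame.intrinsics = (
            self.cam_cfg.fx, self.cam_cfg.fy, self.cam_cfg.cx, self.cam_cfg.cy
        )
        frame.camera_type = self.camera_type
        return frame

# Usage with hypothetical example values:
proc = FixedIntrinsicsProcessor(CamConfig(fx=500.0, fy=500.0, cx=320.0, cy=240.0))
frame = proc(0, VideoFrame())
print(frame.intrinsics)  # (500.0, 500.0, 320.0, 240.0)
```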

This approach seems to work, as the output intrinsics match my config. Is this the intended and sufficient way to use fixed intrinsics, or is there a better method?
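For completeness, the config side of steps 1 and 4 combined might look like the sketch below. The values are hypothetical placeholders (use your benchmark's calibration), and the exact location of the `optimize_intrinsics` key is assumed from the `pipeline.slam.optimize_intrinsics` path above:

```yaml
cam:
  fx: 500.0   # hypothetical example values
  fy: 500.0
  cx: 320.0
  cy: 240.0
optimize_intrinsics: false   # keep the provided calibration fixed
```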

Thanks for your help!
