
Question about Training Time and Multi-GPU Scaling of R-SCoRe #13

@AbbeGo

Description


Thank you for your excellent work on R-SCoRe. I have a question regarding the training time and multi-GPU scalability.

When training the final model on the Aachen dataset, I observe that the training process takes approximately 5–6 hours on a single RTX 4090 or RTX 3090 GPU. The training time appears to be similar on both GPUs.

Additionally, when using multiple GPUs for training, I do not observe a noticeable speed improvement compared to single-GPU training.

I would like to confirm:

1. Is a 5–6 hour training time expected for the final model?
2. Is the current implementation designed to benefit from multi-GPU parallelization?
3. If multi-GPU training is supported, are there specific configurations required to achieve a speedup?
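For reference, the kind of invocation I would expect to yield a multi-GPU speedup is a standard PyTorch DDP launch along these lines (the script name and flags below are placeholders, not necessarily R-SCoRe's actual entry point):

```shell
# Hypothetical example: "train.py" and "--dataset" are placeholders,
# not the repository's real entry point or arguments.
# torchrun spawns one process per GPU; the training script must wrap
# its model in DistributedDataParallel for this to give a speedup.
torchrun --nproc_per_node=2 train.py --dataset aachen
```

If the training loop is not DDP-aware (e.g. it only uses a single process, or the bottleneck is CPU-side data preparation), a launch like this would not reduce wall-clock time, which could explain what I am seeing.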

I would appreciate any clarification or guidance on this matter.
