Hi @rcbarke, As you noted, ACAR and ARC-OTA are the recommended platforms for such commercial-grade use cases. AODT is fully simulated, whereas Sionna-RK enables real-time RF transmission. Sionna-RK’s primary focus is to provide an accelerated research platform with a flexible, software-defined stack for experimentation, rather than deterministic near-RT operation (e.g., the USRP-based RF front end already introduces jitter). In principle, a bare-metal installation of Sionna-RK is possible (the required steps can be derived from the Dockerfile installation routines). However, please note that we cannot provide support for such deployments.
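For anyone who wants to attempt such an unsupported deployment, one hypothetical starting point is to mechanically collect the RUN steps out of the Dockerfile into a checklist. The sketch below is not project tooling; the parsing rules, and the assumption that the install logic lives in shell-form RUN instructions, are simplifications:

```python
# Hypothetical helper, not part of Sionna-RK: collect the RUN commands from
# a Dockerfile (joining backslash-continued lines) as a starting point for
# a bare-metal install checklist. Shell-form RUN only; ENV/ARG/WORKDIR
# context is deliberately ignored and must be reconstructed by hand.
def dockerfile_run_steps(dockerfile_text: str) -> list[str]:
    steps: list[str] = []
    buf: list[str] = []
    in_run = False
    for raw in dockerfile_text.splitlines():
        line = raw.strip()
        if in_run:
            # Continuation of a multi-line RUN instruction.
            buf.append(line.rstrip("\\").strip())
            in_run = line.endswith("\\")
            if not in_run:
                steps.append(" ".join(buf))
                buf = []
        elif line.startswith("RUN "):
            body = line[len("RUN "):].strip()
            if body.endswith("\\"):
                buf = [body.rstrip("\\").strip()]
                in_run = True
            else:
                steps.append(body)
    return steps
```

Running the extracted steps by hand rather than as a script makes it easier to adapt paths and package names that differ outside the container.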
Hi @rcbarke, a few comments on system configuration that are relevant to your description, in case you want to try them:
Hope it helps,
Hello all,
I’ve been exploring alternative bare-metal deployment options for Sionna-RK beyond its default dockerized workflow, alongside accelerated traffic-generation approaches, with the goal of evaluating p90/p95 latency tails for 3GPP URLLC traffic profiles (e.g. V2X at ~20 B every 10 ms with 99.99% reliability targets in Rel-18).
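For context on what I mean by tail evaluation, a minimal self-contained sketch of the bookkeeping follows; the 3 ms budget and the latency model are my own illustrative assumptions, not Rel-18 figures or Sionna-RK output:

```python
# Illustrative only: synthesize a per-packet latency trace for a V2X-like
# flow (20 B every 10 ms) and report tail percentiles plus the fraction of
# packets meeting a hypothetical delay budget.
import random

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile, p in [0, 100]."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

random.seed(0)
# Base delay plus exponential jitter plus rare scheduler-induced spikes.
latencies_ms = [
    0.8 + random.expovariate(1 / 0.2) + (5.0 if random.random() < 0.001 else 0.0)
    for _ in range(10_000)
]
budget_ms = 3.0  # assumed delay budget, for illustration
p90 = percentile(latencies_ms, 90)
p95 = percentile(latencies_ms, 95)
reliability = sum(l <= budget_ms for l in latencies_ms) / len(latencies_ms)
print(f"p90={p90:.2f} ms  p95={p95:.2f} ms  reliability={reliability:.4%}")
```

The same percentile bookkeeping applies unchanged once the trace comes from real UE-namespace timestamps instead of a random model.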
In the default dockerized configuration, injecting traffic from UE namespaces or containers introduces several sources of timing variance:
These effects are especially pronounced in CPU-bound 5G simulators (e.g. baseline OAI). Sionna-RK’s shift to GPU-accelerated L1/L2 significantly improves determinism at the PHY/MAC level, but the system is still influenced by host-OS and container-level timing artifacts when measuring latency tails.
In larger operational stacks, we typically address these systems challenges with hardware-timed I/O paths. For example, NVIDIA ACAR leverages GPUDirect RDMA, CPU isolation, hugepages, VFIO/IOMMU, and PTP synchronization to stabilize slot timing and move packets directly from the GPU-resident L1/L2 to the RU via a DPU-accelerated O-RAN 7.2x fronthaul interface.
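The slot-timing side of this can be illustrated without any of that hardware: pacing against precomputed absolute deadlines (sleep most of the way, busy-wait the rest) avoids the drift of naive relative sleeps. A hedged sketch using only the standard library and a made-up 500 µs slot, not ACAR or Sionna-RK code:

```python
# Illustrative pacing loop: hold a 500 µs slot boundary by sleeping toward
# an absolute deadline and busy-waiting the final stretch, then record how
# late each slot boundary actually fired.
import time

SLOT_NS = 500_000   # assumed 500 µs slot
N_SLOTS = 200

def pace_absolute(n_slots: int, slot_ns: int) -> list[int]:
    """Return per-slot lateness in ns relative to the ideal schedule."""
    start = time.monotonic_ns()
    errors = []
    for i in range(1, n_slots + 1):
        deadline = start + i * slot_ns
        remaining = deadline - time.monotonic_ns()
        if remaining > 100_000:                # leave ~100 µs for busy-wait
            time.sleep((remaining - 100_000) / 1e9)
        while time.monotonic_ns() < deadline:  # spin out the residue
            pass
        errors.append(time.monotonic_ns() - deadline)
    return errors

errs = sorted(pace_absolute(N_SLOTS, SLOT_NS))
print(f"max lateness {errs[-1] / 1e3:.1f} µs, "
      f"p95 {errs[int(0.95 * len(errs))] / 1e3:.1f} µs")
```

On a stock kernel the p95 here is dominated by scheduler wakeup latency, which is exactly the class of artifact that CPU isolation and hardware-timed I/O are meant to remove.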
Even on DGX Spark, I don’t expect Sionna-RK to match ACAR-class determinism; however, I’m very interested in understanding its achievable upper bound for near-RT latency studies. The Spark’s onboard ConnectX-7 NIC may eventually enable tighter integration in tandem with accelerated SDRs such as USRP X410, though today it does not fully support GPUDirect RDMA.
Have alternate (non-dockerized or partially bare-metal) SRK deployment modes been explored internally for studying latency tails or near-RT effects?
I’m particularly interested in whether Sionna-RK could serve as a middle ground between purely software-defined 5G simulations and fully hardware-timed stacks like ACAR for URLLC-focused research. AODT might be more appropriate at present, though a Spark-driven SRK deployment moves closer to the necessary hardware.
Thanks in advance,
Ryan