The first thing I've done so far is to decipher the RX path of the maia-hdl toplevel module. The input can be taken either directly from the ADC or from the maia decimator. Since the satellites each transmit at their own distinct symbol rate, a generic resampler is unavoidable; for this reason I think it's better to take the input from the ADC FIFO directly. The output is a more difficult story. Maia-hdl as-is uses only two of the four AXI-HP buses of the processor. My guess, based on the Zynq documentation, is that only two DMA engines are associated with these ports, so using more than two AXI-HP interfaces doesn't increase the available bandwidth. Please let me know if I've misinterpreted anything or my assumptions are wrong somewhere. In any case, I'll start working on the HDL toplevel design with an AXI-Stream (or similar) input and an AXI-HP output. I'll also dive into the Zynq 7000 series documentation in the hope of learning more about the DMA system and the processor side of things.
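As a sanity check on whether two AXI-HP ports are enough for the sample streams discussed here, a rough per-port throughput estimate can be sketched. The PL clock and bus efficiency below are assumptions for illustration, not figures from the TRM; only the 64-bit port width is a known Zynq-7000 property.

```python
# Rough sustained-throughput estimate for one Zynq-7000 AXI_HP port.
# Clock and efficiency are assumed illustrative values, not measured.

AXI_HP_WIDTH_BITS = 64   # AXI_HP ports are 64 bits wide on Zynq-7000
FCLK_HZ = 100e6          # assumed PL-side clock for the port
BUS_EFFICIENCY = 0.8     # assumed fraction of cycles carrying data

def port_throughput_mb_s(width_bits=AXI_HP_WIDTH_BITS,
                         clk_hz=FCLK_HZ,
                         efficiency=BUS_EFFICIENCY):
    """Sustained megabytes per second for one AXI-HP port."""
    return width_bits / 8 * clk_hz * efficiency / 1e6

print(f"one AXI-HP port: ~{port_throughput_mb_s():.0f} MB/s")  # ~640 MB/s
```

Under these assumptions a single port already carries several hundred MB/s, well above what a 61.44 MSPS 8-bit I/Q stream (about 123 MB/s) needs, which is consistent with extra AXI-HP ports not being the bottleneck.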
Interesting thought, but from an architectural point of view, would it be good to put parts of an application into the firmware of a device? Think of modularity, dependencies, etc. That is a consideration you need to make. What are the pros and cons? Peter
Hi!
Tezuka works great with SatDump; it's one of the platforms that has code to leverage the extra sample rate provided by Tezuka (by going down to 8-bit values). As I've hinted in the SatDump Discord/Matrix server, I'm interested in taking this a step further and bringing Tezuka and SatDump into a "Symbiosys" (the name I coined for the project).
The idea is to put the demodulator front-end into the FPGA and the channel decoding into the firmware. SatDump lends itself very well to this concept: it has three levels of processing: baseband to soft (symbols), soft to CADU, CADU to product. Currently it's possible to run the first two steps in "server mode" and the third on a different machine. This is being used by people to build receivers in remote places and minimize traffic between the bulk storage and the stations.
The main limitation of Zynq-based SDRs is, as far as I know, the gigabit Ethernet interface. If we could reduce traffic, similar to the case above, the AD9361 could go up to 61.44 MSPS (with reports of users over-clocking it to 122 MSPS on USRP B210 clones). The two ARM cores of the Zynq 7010 or 7020 can't handle this bandwidth (most x86 machines have a hard time processing tens of megasamples per second live), but the FPGA fabric should easily be able to achieve 60 MSPS if architected carefully. The processor would then only have to deal with soft symbols, which can be packed more densely than 8-bit samples. Decoding the soft symbols into bits condenses the data even further, making it possible to break free from the gigabit bottleneck.
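The gigabit argument above can be made concrete with a back-of-envelope calculation. The usable Ethernet rate and the example symbol rate below are assumptions for illustration; only the 61.44 MSPS figure comes from the discussion itself.

```python
# Back-of-envelope link budget for the gigabit Ethernet bottleneck.
# GIGE_USABLE_BPS and the 30 Msym/s QPSK example are assumed values.

GIGE_USABLE_BPS = 0.95e9  # ~95% of 1 Gbit/s after protocol overhead (assumed)

def raw_iq_bps(msps, bits_per_component):
    """Bits per second for a complex (I + Q) sample stream."""
    return msps * 1e6 * 2 * bits_per_component

def soft_symbol_bps(msym_per_s, bits_per_symbol):
    """Bits per second for a demodulated soft-symbol stream."""
    return msym_per_s * 1e6 * bits_per_symbol

# 61.44 MSPS of 8-bit I/Q already exceeds gigabit Ethernet:
iq = raw_iq_bps(61.44, 8)           # 983.04 Mbit/s
# A hypothetical 30 Msym/s QPSK downlink, 2 soft bits of 8 bits each per symbol:
soft = soft_symbol_bps(30, 2 * 8)   # 480 Mbit/s

print(f"raw I/Q:      {iq / 1e6:.2f} Mbit/s (fits GigE: {iq < GIGE_USABLE_BPS})")
print(f"soft symbols: {soft / 1e6:.2f} Mbit/s (fits GigE: {soft < GIGE_USABLE_BPS})")
```

So the raw I/Q stream cannot cross the link, while the soft-symbol stream fits with headroom, and fully decoded bits shrink it by another factor of eight or more.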
This would allow real-time decoding of X-band signals (notorious for their high bandwidth requirements and high data rates), something that's not trivial to do with current software decoding (although there have been some developments on that front). The requirement of fast and plentiful storage to record a pass's worth of data would be eliminated, and the requirements for SDRs would be lowered (currently the cheapest option that can do enough bandwidth is the Hamgeek B210 clone at $300, whereas 7020 options can be found for $133 and 7010 options start at $100).
Modifications to SatDump would be minimal, but it has to be able to configure the FPGA with the correct demodulator, receive soft symbols from it, decode them to CADU level, and serve them with the existing server protocol.
This SatDump stub needs to be built into Tezuka, along with the drivers that configure the functions implemented in FPGA.
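As a sketch of what the processor-side soft-symbol handoff might look like, the snippet below reads packed 8-bit soft symbols from a binary stream. Everything here is hypothetical: the chunk size, the packing (one signed byte per soft bit), and the idea that the DMA buffer is exposed as a character device are all assumptions, not how Tezuka or SatDump actually work today.

```python
# Hypothetical host-side reader for packed 8-bit soft symbols.
# In a real system `stream` might be a DMA character device opened in
# binary mode; here any binary file-like object works.
import io

CHUNK = 65536  # bytes per read (arbitrary assumed value)

def read_soft_symbols(stream):
    """Read one chunk of soft symbols and unpack to signed integers.

    Each byte is one soft bit in two's complement, range -128..127,
    matching the 8-bit soft-symbol packing discussed above.
    """
    raw = stream.read(CHUNK)
    return [b - 256 if b > 127 else b for b in raw]

# Usage sketch with an in-memory stand-in for the device:
demo = io.BytesIO(bytes([0, 255, 128, 127]))
print(read_soft_symbols(demo))  # [0, -1, -128, 127]
```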
And obviously the demodulator hdl needs to be implemented and integrated into the existing maia-hdl project.
I feel proficient enough in HDL development to undertake this part of the project, but I don't know much about Yocto, PetaLinux, kernel drivers, or even the structure of Tezuka and SatDump. This is where I'll need help from both projects, which is why I'm starting this discussion.
At this phase, I'd like to set down a preliminary design: understand the overall architecture of the pieces involved, figure out where it would make sense to include my code, get some idea of the HDL-software interface, and set some performance goals. I want to keep my design as flexible as possible, so that adding new functionality (e.g. new satellite pipelines) is easy, and so that functionality can be moved into the FPGA gradually (for step-by-step testability).
For now, I'd like to hear some feedback on the project: where am I wrong, what are the possible points of failure, and which parts don't seem feasible at all?