Installation scripts for AI applications using ROCm on Linux.
> [!NOTE]
> From version 10.0, the script is distribution-independent thanks to the use of Podman.
> All you need is a correctly configured Podman and amdgpu.
> [!IMPORTANT]
> All models and applications are tested on a GPU with 24GB of VRAM.
> Some applications may not work on GPUs with less VRAM.
| Name | Info |
|---|---|
| CPU | AMD Ryzen 9 9950X3D |
| GPU | AMD Radeon 7900XTX |
| RAM | 64GB DDR5 6600MHz |
| Motherboard | Gigabyte X870 AORUS ELITE WIFI7 (BIOS F8) |
| OS | Debian 13.3 |
| Kernel | 6.12.63+deb13-amd64 |
| Name | Links | Additional information |
|---|---|---|
| KoboldCPP | https://github.com/YellowRoseCx/koboldcpp-rocm | |
| SillyTavern | https://github.com/SillyTavern/SillyTavern | |
| TabbyAPI | https://github.com/theroyallab/tabbyAPI | 1. Put the ExLlamaV2 model files into the models/example-model folder. 2. In run.sh, change example-model to the name of your model folder. |
| llama.cpp | https://github.com/ggerganov/llama.cpp | 1. Put model.gguf into the llama.cpp folder. 2. In run.sh, change the GPU offload layers and context size values to match your model. |
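The llama.cpp row above asks you to tune GPU offload layers and context size in run.sh. A minimal sketch of what such a script could look like (the variable names and default values here are assumptions, not the installer's actual run.sh; `-m`, `-ngl`, and `-c` are real llama.cpp server flags):

```shell
#!/bin/sh
# Hypothetical run.sh sketch -- the installer's real script may differ.
MODEL=model.gguf   # GGUF file placed in the llama.cpp folder
NGL=99             # GPU offload layers; lower this if you run out of VRAM
CTX=8192           # context size; keep it within your model's limit
# Print the command instead of executing it, so the sketch is safe to run
# without a built llama.cpp binary:
echo "./llama-server -m $MODEL -ngl $NGL -c $CTX"
```

Lowering `NGL` keeps some layers on the CPU, trading speed for VRAM headroom on cards with less than 24GB.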
| Name | Link | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui | Install and run WhisperSpeech web UI first. |
| Name | Links | Additional information |
|---|---|---|
| ComfyUI | https://github.com/comfyanonymous/ComfyUI https://github.com/city96/ComfyUI-GGUF | Workflow templates are in the workflows folder. The extension manager and ComfyUI-GGUF are installed by default. |
| Name | Link | Additional information |
|---|---|---|
| Qwen-Image-2512-GGUF | https://huggingface.co/Qwen/Qwen-Image-2512 https://huggingface.co/unsloth/Qwen-Image-2512-GGUF https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI https://huggingface.co/Wuli-art/Qwen-Image-2512-Turbo-LoRA-2-Steps | Uses the Q5_0 quant and the 2-step turbo LoRA. |
| Qwen-Image-2511-Edit-GGUF | https://huggingface.co/Qwen/Qwen-Image-Edit-2511 https://huggingface.co/unsloth/Qwen-Image-Edit-2511-GGUF https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI https://huggingface.co/lightx2v/Qwen-Image-Edit-2511-Lightning | Uses the Q5_0 quant and the 4-step Lightning LoRA. |
| Z-Image-Turbo | https://huggingface.co/Tongyi-MAI/Z-Image-Turbo https://huggingface.co/Comfy-Org/z_image_turbo | |
| Wan2.2-TI2V-5B | https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged | |
| ComfyUI-SUPIR | https://github.com/kijai/ComfyUI-SUPIR | |
| Name | Links | Additional information |
|---|---|---|
| ACE-Step | https://github.com/ace-step/ACE-Step | |
| HeartMuLa | https://github.com/HeartMuLa/heartlib | Folder name is heartlib. |
| Name | Links | Additional information |
|---|---|---|
| WhisperSpeech web UI | https://github.com/Mateusz-Dera/whisperspeech-webui https://github.com/collabora/WhisperSpeech | |
| F5-TTS | https://github.com/SWivid/F5-TTS | Remember to select a voice. |
| Soprano | https://github.com/ekwek1/soprano https://github.com/Mateusz-Dera/soprano-rocm | Uses my experimental fork for ROCm with vLLM. |
| Name | Links | Additional information |
|---|---|---|
| PartCrafter | https://github.com/wgsxm/PartCrafter | Adds a simple custom UI. Uses a modified version of PyTorch Cluster for ROCm: https://github.com/Mateusz-Dera/pytorch_cluster_rocm. |
| TRELLIS-AMD | https://github.com/CalebisGross/TRELLIS-AMD | GLB export takes 5-10 minutes. The mesh preview may show grey, but the actual export works correctly. |
1. Install Podman.
> [!NOTE]
> On Debian 13.3 you can use `sudo apt-get update && sudo apt-get -y install podman podman-compose qemu-system` (this should also work on Ubuntu 24.04).
2. Make sure that /dev/dri and /dev/kfd are accessible.

```shell
ls /dev/dri
ls /dev/kfd
```

> [!IMPORTANT]
> Your distribution must have amdgpu configured.
3. Make sure that your user has permissions for the video and render groups.

```shell
sudo usermod -aG video,render $USER
```

> [!IMPORTANT]
> If your user was not already in these groups, you need to reboot after this step.
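After logging back in, you can verify that the membership took effect; a small portable check using `id -nG`, which lists the current user's groups:

```shell
# Report whether the current session is in the video and render groups.
status=""
for g in video render; do
    if id -nG | tr ' ' '\n' | grep -qx "$g"; then
        status="$status $g:ok"
    else
        status="$status $g:missing"
    fi
done
echo "group check:$status"
```

If either group reports `missing` after a reboot, re-run the `usermod` command from the step above.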
4. Clone the repository.

```shell
git clone https://github.com/Mateusz-Dera/ROCm-AI-Installer.git
```

5. Run the installer.

```shell
./install.sh
```

6. Set the variables.
> [!NOTE]
> By default, the script is configured for the AMD Radeon 7900XTX.
> For other cards and architectures, edit GFX and HSA_OVERRIDE_GFX_VERSION.
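For reference, a sketch of the kind of values involved. The RDNA3 values below are the commonly used ones for the 7900 XTX (gfx1100); whether the script stores them under exactly these variable names is an assumption, and values for other cards must be looked up for your architecture.

```shell
# Example values for the default card (assumption: the script's GFX and
# HSA_OVERRIDE_GFX_VERSION follow these ROCm conventions).
GFX=gfx1100                      # RDNA3 / Radeon RX 7900 XTX target
HSA_OVERRIDE_GFX_VERSION=11.0.0  # runtime override matching gfx1100
export GFX HSA_OVERRIDE_GFX_VERSION
echo "GFX=$GFX HSA_OVERRIDE_GFX_VERSION=$HSA_OVERRIDE_GFX_VERSION"
```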
7. Create a container if you are upgrading or running the script for the first time.
8. Install the applications of your choice.
9. Go to the application folder and run:
```shell
./run.sh
```

> [!NOTE]
> Everything is configured to start from the host side (you don't need to enter the container).

To check if the container is running:

```shell
podman ps
```

If the container is not running, start it with:

```shell
podman start rocm
```

To enter the container's bash shell:

```shell
podman exec -it rocm bash
```

To stop and remove the container:

```shell
podman stop rocm
podman rm rocm
```

Or force remove (stop and remove in one command):

```shell
podman rm -f rocm
```
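The check-then-start commands above can be combined into a small convenience helper. This is a hypothetical sketch, not part of the installer, and it assumes the container is named `rocm` as in the commands above:

```shell
# Hypothetical helper: start the "rocm" container only when it is not
# already running. Defining the function does not touch Podman; the
# podman calls run only when the function is invoked.
ensure_rocm() {
    if podman ps --format '{{.Names}}' | grep -qx rocm; then
        echo "rocm already running"
    else
        podman start rocm
    fi
}
```

You could source this in your shell profile and call `ensure_rocm` before `./run.sh`.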