I’m currently getting a lot of timeout errors and delays when processing the analysis. What GPU can I add to this setup? Please advise.

  • grue@lemmy.world

    I’m glad you posted this because I need similar advice. I want a GPU for Jellyfin transcoding and for running Ollama (as a local conversation agent for Home Assistant), splitting access to a single GPU between two VMs in Proxmox (a config sketch for the GPU-sharing part is at the end of this comment).

    I would also prefer it to be AMD as a first choice or Intel as a second, because I’m still not a fan of Nvidia for their hostile attitude towards Linux and for proprietary CUDA.

    (The sad thing is that I probably could have accomplished the transcoding part with just integrated graphics, but my AMD CPU isn’t an APU.)
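
    A note on the sharing part: as far as I know, splitting one consumer GPU between two full VMs needs SR-IOV or vendor vGPU support, so the usual workaround is LXC containers that bind-mount the host’s /dev/dri instead. A minimal sketch of that approach, assuming Proxmox LXC containers and the standard DRM character-device major number 226 (the <vmid> is a placeholder):

    ```
    # /etc/pve/lxc/<vmid>.conf: add to each container that needs the GPU
    # (sketch only; check the actual device numbers on your host with `ls -l /dev/dri`)

    # Allow the container to use the DRM devices (major 226 covers the
    # card* and renderD* nodes under /dev/dri)
    lxc.cgroup2.devices.allow: c 226:* rwm

    # Bind-mount the host's /dev/dri into the container so Jellyfin (VA-API)
    # and Ollama can see the render node
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir

    # For ROCm workloads (e.g. Ollama on an AMD card), /dev/kfd also has to be
    # passed through; its major number can be checked with `ls -l /dev/kfd`.
    ```

    Jellyfin would then do VA-API transcoding through /dev/dri/renderD128 in one container while Ollama uses the same render node in the other; unlike full PCI passthrough to a VM, a render node can be opened by several clients at once.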