I want to buy a new GPU mainly for SD. The machine-learning space is moving quickly, so I want to avoid buying a brand-new card only for a fresh model or tool to come out and leave it behind the times. On the other hand, I also want to avoid needlessly spending thousands of extra dollars pretending I can get a ‘future-proof’ card.

I’m currently interested in SD and training LoRAs (etc.). From what I’ve heard, the general advice is just to go for maximum VRAM.

  • Is there any extra advice I should know about?
  • Is NVIDIA vs. AMD a critical decision for SD performance?

I’m a hobbyist, so a couple of seconds difference in generation or a few extra hours for training isn’t going to ruin my day.

Some example prices in my region, to give a sense of scale:

  • 16GB AMD: $350
  • 16GB NV: $450
  • 24GB AMD: $900
  • 24GB NV: $2000

edit: prices are for new cards; I haven’t explored the pros and cons of used GPUs

  • wewbull@feddit.uk · 2 months ago

    I’ve tried to find comparison data on performance between AMD and Nvidia, and I see lots of people saying what you’re saying, but I can never find numbers. Do you know of any?

    If a card is less than half the price, maybe I don’t mind its lower performance. It all depends on how much lower.

    Also, is the same true under Linux?

    • Cirk2@programming.dev · 2 months ago

      It’s highly dependent on the implementation.

      https://www.pugetsystems.com/labs/articles/stable-diffusion-performance-professional-gpus/

      The experience on Linux is good (use Docker, otherwise Python is dependency hell), but the basic torch-based implementations (Automatic1111, ComfyUI) have bad performance. I haven’t managed to get SHARK to run on Linux; the project is very Windows-focused and has no setup documentation besides “run the installer”.
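
      As a quick sanity check (a little sketch of mine, assuming a recent PyTorch build; not from the linked article), something like this confirms whether the torch build inside the container actually sees the card. ROCm builds of PyTorch reuse the torch.cuda API, so the same code covers AMD and Nvidia:

      ```python
      # Hypothetical sanity check: does the torch build in this container see the GPU?
      import torch

      if torch.cuda.is_available():
          props = torch.cuda.get_device_properties(0)
          backend = "ROCm/HIP" if torch.version.hip else f"CUDA {torch.version.cuda}"
          print(f"Device:  {props.name}")
          print(f"VRAM:    {props.total_memory / 1024**3:.1f} GiB")
          print(f"Backend: {backend}")
      else:
          print("No GPU visible to torch; check the container's device passthrough flags.")
      ```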

      Basically all of the VRAM trickery in torch depends on xformers, which is low-level CUDA code and therefore doesn’t work on AMD. There is an ongoing project to port it, but it’s currently too incomplete to work.
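
      To illustrate that dependency (a rough sketch of mine, not the actual code of any of those UIs): the torch-based frontends try xformers’ memory-efficient attention first and otherwise fall back to PyTorch’s built-in attention, which is what you end up with on AMD:

      ```python
      # Rough sketch (not any UI's actual code): choose an attention backend the way
      # torch-based SD frontends typically do. xformers is CUDA-only, so on AMD the
      # fallback is PyTorch's scaled_dot_product_attention (needs torch >= 2.0).
      import torch
      import torch.nn.functional as F

      try:
          import xformers.ops as xops
          HAVE_XFORMERS = torch.cuda.is_available()  # the kernels need a GPU device
      except ImportError:
          HAVE_XFORMERS = False

      def attention(q, k, v):
          """q, k, v: (batch, heads, seq_len, head_dim) tensors."""
          if HAVE_XFORMERS:
              # xformers expects (batch, seq_len, heads, head_dim)
              out = xops.memory_efficient_attention(
                  q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
              )
              return out.transpose(1, 2)
          # Portable fallback: works on CPU, CUDA and ROCm builds alike
          return F.scaled_dot_product_attention(q, k, v)

      if __name__ == "__main__":
          device = "cuda" if torch.cuda.is_available() else "cpu"
          q = k = v = torch.randn(1, 8, 4096, 64, device=device)
          print(attention(q, k, v).shape, "using xformers:", HAVE_XFORMERS)
      ```

      The fallback path still runs on AMD; what you miss out on is the memory savings that the VRAM trickery relies on.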