32 GB of VRAM for less than $1k sounds like a steal these days, and I’m sure it’s not getting cheaper any time soon.

Does anyone here use this GPU? Or any recent Arc Pros? I basically want someone to talk me out of driving to the nearest place that has it in stock and getting $1k poorer.

  • cecilkorik@piefed.ca · edited · 5 hours ago

    I’m not sure Intel has great drivers or compatibility for AI; it might be sort of janky and limiting. Even with an AMD card I’ve struggled a lot, and there are still plenty of things that only support Nvidia, full stop.

    Edit to add: If you’re willing to consider this option, you might also be interested in, say, an Nvidia P40 or a similar card. P40s have 24 GB of VRAM, and you can pick them up as cheap as can be, in the $100–$200 range. You need to 3D print or buy a fan shroud for them. They are janky and limiting in a different way: they’re old datacenter-style cards, they have no fans and no video output at all, they’re a bit slow at AI tasks, and they only run on specific Nvidia drivers. But getting that much VRAM for that little money is nice, and you can tune them down to about 125 W, so you can run a few of them if you can find enough slots or risers. AI usage does NOT require PCIe x16 bandwidth; in my experience x4 is plenty, and even x1 is probably okay with some penalty.
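    If you do try the power cap, it can be scripted as well as set with nvidia-smi. A rough sketch using the nvidia-ml-py (pynvml) bindings, untested on my end, with the ~125 W figure just being the tune-down I mentioned:

    ```python
    import pynvml  # pip install nvidia-ml-py

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        # Limits are reported and set in milliwatts.
        min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        target_mw = max(min_mw, 125_000)  # cap each card at ~125 W
        # Needs root, same as `nvidia-smi -pl 125`.
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
    pynvml.nvmlShutdown()
    ```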

    Edit again: Ah, it looks like P40s have doubled in price now too; the VRAM scavengers finally got to them. Still, if you were willing to go up to a $1k price point, they’re still viable.

    • pound_heap@lemmy.dbzer0.com (OP) · 3 hours ago

      Thanks! I was running some models on my RX 9070 XT, but only Ollama works flawlessly. I couldn’t get llama.cpp to run Gemma or the newer Qwen; maybe I’m hitting that incompatibility, but it’s probably a skill issue.
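      For context, the kind of run I mean looks roughly like this; a minimal sketch with the llama-cpp-python bindings, where the model path is a placeholder and GPU offload assumes a build with a Vulkan or ROCm/HIP backend for an AMD card:

      ```python
      from llama_cpp import Llama  # pip install llama-cpp-python

      # The default wheel is CPU-only; an AMD card like the RX 9070 XT
      # needs a build compiled with the Vulkan or ROCm/HIP backend.
      llm = Llama(
          model_path="models/qwen.Q4_K_M.gguf",  # placeholder path
          n_gpu_layers=-1,  # offload every layer to the GPU
          n_ctx=4096,
      )
      print(llm("Hello", max_tokens=32)["choices"][0]["text"])
      ```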

      The P40 doesn’t look very appealing. A 32 GB V100 costs about the same as two P40s; it’s less VRAM in total, but it’s faster and will use less power.

      But I’m not sure if I follow you on the PCIe… If I run a model that spans multiple GPUs, doesn’t PCIe bandwidth matter?

      • cecilkorik@piefed.ca · 14 minutes ago

        It matters a bit, but not as much as you’d think; it also depends on what model and what runner you’re using. There’s a lot of optimizing that can be done, but in practice you’re not going to notice a huge difference in speed. Again, remember that these are not speed-demon cards to begin with; the main feature is the large VRAM capacity, which lets them run very large, powerful models without the speed penalty of spilling into system RAM. That spill, at least as I understand it, is where PCIe bandwidth starts to matter a lot more. That said, maybe I’m wrong. I’m not an expert at this stuff, and my experience is limited to what little hardware, in what limited configurations, I have available personally. Best of luck in your adventures; anything we can do to democratize machine learning technology for more people is worth pursuing, I think.
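        The intuition, at least as I understand it: with the usual layer split, each GPU holds a contiguous block of layers, so the only thing crossing the PCIe link per token is a small activation tensor at the layer boundary, not the weights themselves. A minimal sketch of that setup with the llama-cpp-python bindings, where the model path and the even 50/50 split are placeholder assumptions for a two-card box:

        ```python
        from llama_cpp import Llama
        import llama_cpp

        llm = Llama(
            model_path="models/some-large-model.Q4_K_M.gguf",  # placeholder
            n_gpu_layers=-1,  # keep the whole model in VRAM
            # Layer split: each GPU gets a contiguous block of layers, so
            # only small per-token activations cross the PCIe link.
            split_mode=llama_cpp.LLAMA_SPLIT_MODE_LAYER,
            tensor_split=[0.5, 0.5],  # even split across two cards
        )
        ```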