32 GB of VRAM for less than $1k sounds like a steal these days, and I’m sure it’s not getting cheaper any time soon.
Does anyone here use this GPU? Or any recent Arc Pros? I basically want someone to talk me out of driving to the nearest place that has it in stock and getting $1k poorer.
I’m going to be brutal with you. I spent a few thousand dollars on 176 GB of AMD VRAM because I was happy getting VRAM for cheap and I hate Nvidia. It works, and it’s nice to be able to run bigger models at usable performance, but if you need serious concurrency or good support for diffusion, you NEED Nvidia. AMD (and likewise Intel) just doesn’t have the software ecosystem support for non-server GPUs. Again, coming from someone who’s using this shit daily.
If you understand this limitation, then yes, those B70s are cool, as is the AMD Pro 9700, which might have slightly better support right now. You could also consider Nvidia V100s, which are old and cheap. I always recommend people start with 3090s (as a general powerhouse) or a pair of 5060 Tis (for really good LLM support) though. It will make your life easier if you can live with the VRAM limitation.
Thank you! This is really helpful. A 32 GB V100 or a pair of 5060 Tis looks very interesting, and they’re about the same price. Does running multiple GPUs require any special hardware? I mean, apart from a motherboard with 2+ PCIe x16 slots?
It’s getting better, but yeah, I can’t run a lot of models.
I’m not sure Intel has great drivers or compatibility for AI; it might be sort of janky and limiting. Even with an AMD card I’ve struggled a lot, and there are still plenty of things that only support Nvidia, full stop.
Edit to add: If you’re willing to consider this option, you might also be interested in, say, an Nvidia P40 or similar card. P40s have 24 GB of VRAM and you can pick them up cheap as can be, in the $100-$200 range. You need to 3D print or buy a fan shroud for them. They are janky and limiting in a different way: they are old datacenter-style cards, they don’t have fans or video outputs at all, they are a bit slow at AI tasks, and they only run on specific Nvidia drivers. But having plenty of VRAM for that low a price is nice, and you can tune them down to about 125 W (see the sketch below) so you can run a few of them if you can find enough slots or risers. AI usage does NOT require PCIe x16 bandwidth; in my experience x4 is plenty and even x1 is probably okay with some penalty.
Edit again: Ah, it looks like P40s have doubled in price now too; the VRAM scavengers finally got to them. Still, if you were willing to go up to a $1k price point, it’s still viable.
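For the power-limiting bit, here’s a minimal sketch of how you could cap every card in the box. It just shells out to nvidia-smi (assumed to be on your PATH) and needs root; the 125 W figure is the one I mentioned above for P40s, so check your card’s supported range with `nvidia-smi -q -d POWER` first.

```python
#!/usr/bin/env python3
"""Sketch: cap the power limit on every NVIDIA GPU the driver can see.

Assumes nvidia-smi is on PATH and the script runs as root (setting a
power limit needs it). 125 W is just the example figure from above;
check your card's supported range before using it.
"""
import subprocess

POWER_LIMIT_WATTS = 125  # target cap per GPU

# List the GPU indices and names the driver currently sees.
query = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name", "--format=csv,noheader"],
    check=True, capture_output=True, text=True,
)

for line in query.stdout.strip().splitlines():
    index, name = [field.strip() for field in line.split(",", 1)]
    # Apply the cap to this GPU.
    subprocess.run(
        ["nvidia-smi", "-i", index, "-pl", str(POWER_LIMIT_WATTS)],
        check=True,
    )
    print(f"GPU {index} ({name}) capped at {POWER_LIMIT_WATTS} W")
```

The limit doesn’t survive a reboot, so you’d want to run something like this from a startup script.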
Thanks! I was running some models on my RX 9070 XT, but only Ollama works flawlessly. I couldn’t get llama.cpp to run Gemma 4 or the newer Qwen models - maybe I’m hitting that incompatibility, but it’s probably a skill issue.
The P40 doesn’t look very appealing. A 32 GB V100 costs about the same as two P40s and has less VRAM in total, but it’s faster and will use less power.
But I’m not sure if I follow you on the PCIe… If I run a model that spans multiple GPUs, doesn’t PCIe bandwidth matter?
buy a used one on eBay 🙂 doesn’t have to be the same model… but someone probably went and got themselves a shiny new one and is getting rid of the one they got last year…
Not this one. It IS the shiny new one; they cost $300-$400 more on eBay than in stores due to limited supply.

