32 GB of VRAM for less than $1k sounds like a steal these days, and I'm sure it's not getting cheaper any time soon.
Does anyone here use this GPU? Or any recent Arc Pros? I basically want someone to talk me out of driving to the nearest place that has it in stock and getting $1k poorer.


Thanks! I was running some models on my RX 9070 XT, but only Ollama works flawlessly. I couldn't get llama.cpp to run Gemma 3 or the newer Qwen models; maybe I'm hitting that incompatibility, but it's probably a skill issue.
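For reference, this is roughly what I was trying through llama-cpp-python (the model path is just a placeholder, and on the 9070 XT it needs a build compiled with Vulkan or ROCm support):

```python
# Minimal llama-cpp-python sketch -- pip install llama-cpp-python,
# built with Vulkan or ROCm enabled so the AMD card is actually used.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder; any GGUF you have locally
    n_gpu_layers=-1,            # offload every layer to the GPU
    n_ctx=4096,                 # context window
)

out = llm("Say hello in one sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```

Ollama wraps llama.cpp under the hood too, so I suspect the difference is in build flags or version rather than the hardware itself.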
The P40 doesn't look very appealing. A 32 GB V100 costs about the same as two P40s; that's less VRAM in total, but it's faster and will use less power.
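Rough math on that trade-off (the price is a placeholder standing in for "about the same"; the TDPs are the official 250 W ratings for the P40 and the PCIe V100):

```python
# Back-of-envelope only: PRICE_USD is an assumed placeholder since both
# options cost roughly the same; swap in real used-market prices.
PRICE_USD = 500.0

options = {
    "2x P40 (24 GB each)": {"vram_gb": 48, "tdp_w": 2 * 250},
    "V100 32 GB (PCIe)":   {"vram_gb": 32, "tdp_w": 250},
}

for name, o in options.items():
    print(f"{name}: {o['vram_gb']} GB total, "
          f"${PRICE_USD / o['vram_gb']:.2f}/GB, {o['tdp_w']} W TDP")
```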
But I'm not sure I follow you on the PCIe point… if I run a model that spans multiple GPUs, doesn't PCIe bandwidth matter?
It matters a bit, but not as much as you'd think; it also depends on which model and which runner you're using. There's a lot of optimization that can be done, but in practice you're not going to notice a huge difference in speed.

Remember that these are not speed-demon cards to begin with. The main feature is the large VRAM capacity, which lets them run very large, powerful models without the speed penalty of spilling into system RAM, and that, as far as I understand it, is where PCIe starts to matter a lot more.

That said, maybe I'm wrong. I'm not an expert at this stuff, and my experience is limited to the little hardware, in the limited configurations, I have available personally. Best of luck in your adventures; anything we can do to democratize machine learning technology for more people is worth pursuing, I think.
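To put very rough numbers on why the split itself is cheap: in llama.cpp's default layer-split mode, only the hidden-state activation crosses PCIe at each GPU boundary, once per generated token. Every constant below is an assumption, so adjust for your setup:

```python
# Back-of-envelope for layer-split (pipeline) multi-GPU inference.
# All constants are assumptions for a 70B-class model on older cards.
hidden_dim   = 8192   # assumed hidden size
bytes_per_el = 2      # fp16 activations
boundaries   = 1      # model split across 2 GPUs -> 1 crossing per token
tokens_per_s = 10     # assumed generation speed

per_token_kib = hidden_dim * bytes_per_el * boundaries / 1024
sustained_kib = per_token_kib * tokens_per_s
pcie3_x16_kib = 16 * 1024 * 1024  # ~16 GB/s, expressed in KiB/s

print(f"{per_token_kib:.0f} KiB per token across the link")
print(f"{sustained_kib:.0f} KiB/s sustained while generating")
print(f"vs ~{pcie3_x16_kib:,} KiB/s for PCIe 3.0 x16")
```

Prompt processing and tensor-parallel-style splits (llama.cpp's row split mode) move far more data per layer, so that's where a slow slot would actually bite, as far as I can tell.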