I’m looking to build a low-end Ollama LLM server to improve Home Assistant voice control, Immich image recognition, and a few other services. With the current cost of hardware components like memory, I’m looking to build something small but somewhat expandable.
I have an old micro-atx form factor computer that I’m thinking will be a good option to upgrade. I’d love recommendations on motherboards, processors, and video card combos that would likely be compatible and sufficient to run a decent server while keeping costs lower, basically, the best bang for the buck. I have a couple of M.2 SSDs I can re-purpose. Would prefer the motherboard has 2.5Gbit Ethernet, but otherwise I’m open.
Also recommendations on sites to purchase good quality memory at reasonable prices that ship to the US. I’d be willing to look at lightly used components, too.
Any advice on any of these topics would be greatly appreciated. The advice I’ve found has all been out of date, especially since the landscape has shifted: crypto mining has faded, so video cards are not as expensive, but LLM data centers are eating up and reserving memory before it’s even manufactured.


Not a very popular opinion, but if you want an inexpensive, really inexpensive, option, take the AMD RX 9070 XT. AMD cards are not the most popular for AI, but they are not bad with ROCm, and for the price of one 5090 you can fit five of them (80 GB of VRAM total).
Not all programs allow using multiple GPUs as far as I know; some are not capable of splitting the LLM across the VRAM of multiple cards.
Yes, that’s true. But I run llama-swap and Open WebUI. If you spend some time on the llama-swap configuration, you have a good chance of running a model across 2 cards through llama.cpp. The gains won’t be 2x, of course, and they fall off non-linearly as you add cards. You also need a motherboard with good PCIe connectivity (2 PCIe x16 slots or more). But it’s still cheaper than one large card. Example:
HIP_VISIBLE_DEVICES=0,1 \
/opt/llama.cpp/build/bin/llama-server \
  --host 127.0.0.1 \
  --port 8082 \
  --model /storage/models/model.gguf \
  --n-gpu-layers all \
  --split-mode layer \
  --tensor-split 1,1 \
  --ctx-size 32768 \
  --batch-size 512 \
  --ubatch-size 512 \
  --flash-attn on \
  --parallel 1

There is a less stable but more performant option: --split-mode row
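For anyone curious what the llama-swap side of this looks like: a minimal config that wraps a llama-server command like the one above might be sketched as follows. This is an illustrative sketch, not my exact config; the model name and paths are placeholders, and llama-swap substitutes ${PORT} into the command it launches for you.

```yaml
# Hypothetical llama-swap config sketch; adjust names and paths to your setup.
models:
  "my-model":
    cmd: |
      HIP_VISIBLE_DEVICES=0,1 /opt/llama.cpp/build/bin/llama-server
        --port ${PORT}
        --model /storage/models/model.gguf
        --n-gpu-layers all
        --split-mode layer
        --tensor-split 1,1
        --ctx-size 32768
        --flash-attn on
```

With something like this, llama-swap exposes one OpenAI-compatible endpoint and spins the right llama-server up or down as requests for different models arrive, which is what makes it convenient behind Open WebUI.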
P.S. By the way, one RX9070XT on my instance translates posts and comments. You can test it if you want. =)
I agree. I’ve got a 9060 XT 16GB card running some version of gpt-oss:20b. I understand how to program, more or less, but I do it so infrequently that I forget the syntax of whatever language I’m working in. Its ability to spit out boilerplate code that I can edit for my needs has been a huge time saver, and I’m extremely happy with my setup.