I’m looking to build a low-end Ollama LLM server to improve Home Assistant voice control, Immich image recognition, and a few other services. With the current cost of hardware components like memory, I’m looking to build something small but somewhat expandable.
I have an old micro-ATX form factor computer that I’m thinking will be a good option to upgrade. I’d love recommendations on motherboard, processor, and video card combos that would likely be compatible and sufficient to run a decent server while keeping costs low; basically, the best bang for the buck. I have a couple of M.2 SSDs I can repurpose. I’d prefer a motherboard with 2.5Gbit Ethernet, but otherwise I’m open.
Also recommendations on sites to purchase good quality memory at reasonable prices that ship to the US. I’d be willing to look at lightly used components, too.
Any advice on any of these topics would be greatly appreciated. The advice I’ve found has all been out of date: with crypto mining fading, video cards aren’t as expensive as they were, but LLM data centers are buying up and reserving memory before it’s even manufactured.


Yes, it is. But I run llama-swap and Open WebUI. If you spend some time on the llama-swap configuration, you have a good chance of running a model across 2 cards through llama.cpp. The gains won’t be 2x, of course, and they fall off non-linearly as you add cards. You also need a motherboard with enough PCIe lanes (2 PCIe x16 slots or more). But it’s still cheaper than one large card. Example:
HIP_VISIBLE_DEVICES=0,1 \
/opt/llama.cpp/build/bin/llama-server \
  --host 127.0.0.1 \
  --port 8082 \
  --model /storage/models/model.gguf \
  --n-gpu-layers all \
  --split-mode layer \
  --tensor-split 1,1 \
  --ctx-size 32768 \
  --batch-size 512 \
  --ubatch-size 512 \
  --flash-attn on \
  --parallel 1

There is a less stable but faster option: --split-mode row
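For the llama-swap side, here’s a rough sketch of what a config entry wrapping that command could look like. It’s only illustrative: the model name is made up, and the exact fields (cmd, proxy, ttl, and how per-model env vars like HIP_VISIBLE_DEVICES are passed) should be checked against the llama-swap README for your version.

models:
  "my-local-model":                 # made-up name; this is what clients request via the OpenAI-compatible API
    cmd: >
      /opt/llama.cpp/build/bin/llama-server
      --host 127.0.0.1 --port 8082
      --model /storage/models/model.gguf
      --n-gpu-layers all --split-mode layer --tensor-split 1,1
      --ctx-size 32768 --flash-attn on --parallel 1
    proxy: "http://127.0.0.1:8082"  # where llama-swap forwards requests once the server is up
    ttl: 300                        # unload after 5 minutes idle (optional; field name assumed)

The HIP_VISIBLE_DEVICES=0,1 part isn’t in the cmd above; set it in the environment llama-swap itself runs under (or however your llama-swap version handles per-model env vars). Once that’s in place, Open WebUI just points at llama-swap’s endpoint and the right llama-server instance gets started on demand.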
P.S. By the way, a single RX 9070 XT on my instance translates posts and comments. You can test it if you want. =)