Got a new PC handed down to me, and now my old one is collecting dust. It has a dedicated GPU (GTX 1060, 6GB VRAM). I guess the most obvious thing would be an AI model, or maybe Jellyfin (which is currently running just fine on a Raspberry Pi 5), but I was wondering if you had other suggestions?


Can confirm what another user said: the Intel iGPU would be better in your case.
I'll tell you now – if it runs Windows, kill it. My server was originally Windows running Docker Desktop, hosting three services: a Minecraft server that lagged like a bitch, a Samba folder share, and Emby. Whenever Emby playback froze, I knew Windows – whose antivirus kept the HDD under constant load – had pegged the i3-6100 to 100%, which happened at least twice a day.
Moving on: now I run Proxmox. I host 25 services with the CPU sitting around 35% at idle and 24GB of RAM at 75%. Nothing lags.
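If you want to sanity-check headroom like that on your own box, a quick sketch (assumes a standard Linux host with `/proc`, which covers Proxmox):

```shell
# Hedged sketch: rough idle-load check on a Linux host.
# CPU: compare the 1-minute load average against the core count.
awk '{printf "load1=%s\n", $1}' /proc/loadavg
nproc
# RAM: used vs. total in MiB, with a rough percentage.
free -m | awk '/^Mem:/ {printf "ram=%d/%dMiB (%.0f%%)\n", $3, $2, $3*100/$2}'
```

If the 1-minute load stays well under the core count and RAM isn't pinned near 100%, you have room for more services.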
Before I plugged in the GPU, my server drew a consistent 25W, rising to 35W under load. With the GPU, a used RTX 3060 12GB, it draws 85W at idle, so make sure it's worth it. In my case it not only transcodes for Emby (playback resumes in about a second), but also handles voice inference for Home Assistant in under a second, plus mid-sized Ollama LLM responses. I'd recommend a high-VRAM Nvidia card (for CUDA) in that scenario: my model, Gemma3 7B, uses 6GB VRAM and 2GB RAM. But a top model, say Dolphin-Mixtral 8x22B, needs 80GB of storage and 17GB of RAM and… well, I don't have the RAM, but you get it. LLMs are intensive.
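For the Ollama part, here's a minimal sketch of hitting its local REST API from Python, assuming the default port 11434 and stdlib only. The model name `"gemma"` is illustrative – substitute whatever you've actually pulled:

```python
# Hedged sketch: one-shot generation against a local Ollama server.
# Assumes Ollama is running on its default port (11434); the model
# name passed in is whatever you've pulled with `ollama pull`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False makes Ollama return a single JSON object
    # instead of a stream of token chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str, timeout: float = 120.0) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask("gemma", "why is the sky blue?")` only works with the server actually running, of course; the point is just how little glue the API needs.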