I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.
My idea was to get a 3060, a PCIe riser, and a 500 W power supply just for the GPU. Mechanically, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.
What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
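Here's the rough back-of-envelope that's worrying me (assuming generation is memory-bandwidth bound, roughly 360 GB/s on a 3060 and ~25 GB/s for dual-channel DDR3-1600, ignoring compute and overhead entirely, with an arbitrary ~13 GB quantized model):

```python
# Back-of-envelope token-rate estimate for partial GPU offload.
# Assumed (not measured) bandwidths: ~360 GB/s for an RTX 3060,
# ~25.6 GB/s for dual-channel DDR3-1600 on the i7-3770.
GPU_BW_GBS = 360.0
CPU_BW_GBS = 25.6

def rough_tokens_per_sec(model_gb: float, frac_on_gpu: float) -> float:
    """Each generated token reads every weight once; total time is GPU part + CPU part."""
    gpu_time = model_gb * frac_on_gpu / GPU_BW_GBS
    cpu_time = model_gb * (1.0 - frac_on_gpu) / CPU_BW_GBS
    return 1.0 / (gpu_time + cpu_time)

for frac in (1.0, 0.9, 0.7):
    print(f"{frac:.0%} on GPU: ~{rough_tokens_per_sec(13.0, frac):.1f} tok/s")
```

Even a small spill into that DDR3 drags the estimate way down, which is exactly what I'm afraid of.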
Do you think it's even worth going through with?
Edit: I may have found a ThinkCentre that uses DDR4, which I can buy if I manage to sell the 7010. Though I still don't know if it will be good enough.
You can definitely quantize exl3s yourself; the process is VRAM-light (albeit time-intensive).
What 13B are you using? FYI, the old Llama2 13B models don't use GQA, so even their relatively short 4096-token context takes up a lot of VRAM. Newer 12Bs and 14Bs are much more efficient (and much smarter TBH).
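Quick sketch of the KV-cache math, using architecture numbers from memory (Llama2 13B: 40 layers, 40 KV heads, 128-dim heads, no GQA; a Nemo-style 12B: 40 layers but only 8 KV heads), so treat the exact figures as approximate:

```python
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * context * bytes per element
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int, ctx: int, bytes_per_elem: int = 2) -> float:
    """Approximate fp16 KV-cache size in GiB."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem / 1024**3

print(f"Llama2 13B (no GQA), 4096 ctx: {kv_cache_gib(40, 40, 128, 4096):.2f} GiB")
print(f"Nemo-style 12B (GQA), 4096 ctx: {kv_cache_gib(40, 8, 128, 4096):.2f} GiB")
```

That's roughly 3.1 GiB of cache for the old 13B at just 4K context versus well under 1 GiB for a GQA model, which is why the newer 12Bs stretch so much further on the same card.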
Right now I'm hopping between Nemo finetunes to see how they fare. I think I only ever used one 8B model from Llama2; the rest has been all Llama 3 and maybe some Solar-based ones. Unfortunately I have yet to properly dig into the more technical side of LLMs due to time constraints.
So long as it's not interactive, I can always run it at night and have it shut off the rig when it's done. Power here is cheaper at night anyway :-)
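Something like this is what I had in mind; the quantization command is just a placeholder for whatever job I end up running, and it assumes a Linux box where poweroff works without a password prompt:

```python
import subprocess

# Placeholder for the actual long-running job (e.g. an overnight exl3 quantization).
job = ["python", "convert_model.py", "--in", "model_dir", "--out", "quantized_dir"]

result = subprocess.run(job)

# Power the rig off once the job finishes; only on success, so a failed run
# leaves the machine up and the logs on screen for the morning.
if result.returncode == 0:
    subprocess.run(["sudo", "systemctl", "poweroff"])
```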
Thanks for the info (and sorry for the late response; work plus cramming for exams turned out to be more brutal than expected).