I have an unused Dell OptiPlex 7010 I wanted to use as a base for an inference rig.
My idea was to get a 3060, a PCIe riser, and a 500 W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel, to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.
What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)
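For what it's worth, koboldcpp lets you pin as many layers as possible on the GPU so system RAM is only a fallback. A minimal launch sketch, assuming a GGUF model (the path is a placeholder, not a real file):

```shell
# keep the whole model on the 3060's 12 GB where it fits:
# --usecublas enables CUDA, --gpulayers 99 offloads every layer it can
python koboldcpp.py --model /path/to/model.gguf --usecublas --gpulayers 99 --contextsize 4096
```

If the model is too big for VRAM, lowering `--gpulayers` trades speed for fit, and that's exactly where the DDR3 + PCIe bottleneck of the 7010 would start to bite.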
Do you think it's even worth going through with?
Edit: I may have found a ThinkCentre that uses DDR4 and that I could buy if I manage to sell the 7010. Though I still don't know if it would be good enough.
At the moment I'm essentially lab-ratting the models: I just love to see how far I can push them, both in parameters and in complexity of request, before they break down. Plus, it was a good excuse to expand my little "homelab" (read: workbench that's also stuffed with old computers) from just a Raspberry Pi to something more beefy.

As for more "practical" (still mostly messing around) purposes, I was thinking about making a pseudo-realistic digital radio with an announcer, using a small model and a TTS model: that is, writing a small summary for the songs in my playlists (or maybe letting the model itself do it, if I manage to give it search capabilities), letting them shuffle, and using the LLM+TTS combo to fake an announcer introducing the songs. I'm quite sure there was already a similar project floating around on GitHub. Another option would be implementing it in Home Assistant with something like Willow as a frontend, to have something closer to commercial assistants like Alexa, but fully controlled by the user.
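The radio idea could be sketched like this. This is a minimal sketch: the playlist entries, the blurbs, and the prompt wording are all made up, and the actual LLM and TTS backends are stubbed out (in practice the `llm` callable would hit something like the koboldcpp API):

```python
import random

# hypothetical playlist: (title, artist, one-line summary written by hand
# or generated ahead of time by the model)
PLAYLIST = [
    ("Blue Monday", "New Order", "a 1983 synth-pop classic"),
    ("Roundabout", "Yes", "an eight-minute prog epic"),
    ("Hocus Pocus", "Focus", "yodeling over a hard-rock riff"),
]

def announcer_prompt(title, artist, blurb):
    """Build the prompt the small LLM would turn into announcer chatter."""
    return (
        "You are an upbeat radio DJ. In one or two sentences, "
        f"introduce '{title}' by {artist} ({blurb})."
    )

def radio_loop(playlist, llm, tts, play):
    """Shuffle the playlist, fake an announcer intro, then play each song."""
    order = random.sample(playlist, k=len(playlist))
    for title, artist, blurb in order:
        intro = llm(announcer_prompt(title, artist, blurb))  # text generation
        tts(intro)        # speak the announcer's intro
        play(title)       # then play the actual track

if __name__ == "__main__":
    # stub the three backends just to show the flow
    radio_loop(
        PLAYLIST,
        llm=lambda p: "[generated DJ chatter would go here]",
        tts=print,
        play=lambda t: print(f"(now playing: {t})"),
    )
```

Swapping the stubs for a real koboldcpp HTTP call and a TTS engine is the whole project; the loop itself stays this simple.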
To be honest, this post might have been the most positive interaction I've had on the web since the BBS days. I guess the fact that the communities are smaller makes it easier to gather people who are genuinely interested in sharing and learning about this stuff; same with the homelab community. Like comparing a local coffee shop to a Starbucks: it just by nature filters for different people :-)