

Idk of any but I’m interested, commenting to add traffic 👍


Firstly, it depends on how illegal it is. Is it illegal as in "you shouldn't do it and we'll try to block you"? Or illegal as in "if we catch you doing it, you can get arrested or worse"?
Scenario A:
Scenario B:


Buying new: basically any of the integrated-memory options like Macs and AMD's new AI chips; after that, any modern (last five years) GPU, focusing only on VRAM (currently Nvidia is more properly supported in SOME tools).
Buying second hand: you're not likely to find any of the integrated-memory stuff, so any GPU from the last decade that is still officially supported, again focusing on VRAM.
8 GB is enough to run basic small models, 20+ GB for pretty capable 20-30B models, 50+ GB for the 70B ones, and 100-200+ GB for full-sized models.
These are rough estimates, do your own research as well.
For the most part, with LLMs for a single user you really only care about VRAM and storage speed (SSD). Any GPU will generate faster than you can read for anything that fully fits in its VRAM, so the GPU itself only matters if you intend to run large models at extreme speeds (for automation tasks, etc.). Storage is a bottleneck at model load, so depending on your needs it might not be a big issue, but for example with a 30 GB model you can expect to wait 2-10 minutes for it to load into VRAM from an HDD, about 1 minute from a SATA SSD, and about 4-30 seconds from an NVMe drive.
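Those load-time figures are just model size divided by storage throughput. A back-of-the-envelope sketch (the throughput numbers are my own ballpark assumptions, not benchmarks):

```python
# Load time ~= model size / sequential read throughput.
# Throughput figures below are rough assumptions for each storage tier.

def load_time_seconds(model_gb: float, throughput_mb_s: float) -> float:
    """Seconds to read a model of model_gb gigabytes at throughput_mb_s MB/s."""
    return model_gb * 1024 / throughput_mb_s

MODEL_GB = 30
for name, mb_s in [("HDD", 120), ("SATA SSD", 500), ("NVMe", 3500)]:
    print(f"{name}: ~{load_time_seconds(MODEL_GB, mb_s):.0f} s")
```

Real-world numbers vary with fragmentation, caching, and whether the runtime mmaps the file, hence the wide ranges above.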


You can sniff the network and see if the TV is connecting anywhere.
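A minimal sketch of how that might look with tcpdump from a Linux box on the same LAN; the interface name and the TV's address are placeholders you'd substitute from your own network (this is a CLI fragment needing root and a live network, not something to run blindly):

```shell
# Capture the next 50 packets to or from the TV's IP.
# "eth0" and 192.168.1.50 are placeholder values -- use your own
# interface name and the TV's address from your router's DHCP list.
sudo tcpdump -i eth0 -n -c 50 host 192.168.1.50
```

If the TV is phoning home you'll see DNS lookups and outbound connections from its address; a quiet capture suggests it isn't talking to anyone.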


It’s very, very unlikely that your TV and the device connected to it both support and enable Ethernet over HDMI by default. But if you're unsure, you can test it by connecting them and seeing whether the TV gets a connection.
Personally I also opened my TV and disconnected the Wi-Fi card, since in theory the TV could also just connect to any open Wi-Fi in the area without my knowing, but to each their own threat model.
Tip: look at second-hand sites/FB Marketplace (I know 😒), you can find great deals.


Ollama + Open WebUI + Tailscale/Netbird.
Open WebUI provides a fully functional Docker image bundled with Ollama, so just find the section that applies to you (AMD, Nvidia, etc.): https://github.com/open-webui/open-webui?tab=readme-ov-file#quick-start-with-docker-
On that host install Netbird or Tailscale, and install the same on your phone. In Tailscale you need to enable MagicDNS; Netbird, I think, provides DNS by default.
Once the container is running and both your server and phone are connected to the VPN (Netbird or Tailscale), you just type your server's DNS name into your phone's browser (in Netbird it would be "yourserver.netbird.cloud" and in Tailscale "yourserver.yourtsnet.ts.net").
Check out NetworkChuck on YouTube; he has a lot of simple tutorials.
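For reference, the bundled-image quick start from that README boils down to roughly this (the `:ollama` tag is the variant with Ollama included; double-check the README for the flags matching your GPU, e.g. `--gpus=all` for Nvidia):

```shell
# Open WebUI with Ollama bundled (CPU variant; see the linked README
# for the GPU-specific flags).
docker run -d -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:ollama
```

Then browse to http://yourserver:3000, or to the VPN DNS name once Tailscale/Netbird is up.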
There are a few reasons someone might use Proxmox. It doesn't have to be just security; it can also be network architectures that don't map well onto Docker, or simply greater control over services, which is less comfortable in Docker since it's built around prebuilt, ephemeral images. There are also services that don't ship a prebuilt Docker image, where someone might not want to bother building their own Docker infrastructure around them, or that rely on technologies that aren't well supported or well executed in Docker.
There is also the fact that Proxmox is meant for production use, which means it's more stable (than some casual Docker setup running on whatever distro they have) and it has very low overhead. Even if you do use Docker containers, you can run them inside Proxmox, which gives you a lot of capabilities that add stability and manageability.
Generally speaking if your threat model is very small, you’re running this within your private network, and it’s not exposed to the internet or anything large like that, then it doesn’t really make a big difference and you should probably just use whatever is comfortable for you.
I personally moved to Proxmox for three reasons: security, customizability, and stability. With Docker containers I found it a lot more annoying to pull images, write my own Dockerfiles, and update and rebuild them every time. I find it easier to have my own server with its dedicated service, one that I built from scratch and know how to update and modify properly. There is also the advantage that I can use whatever OS I want for different situations. I personally use Linux exclusively, but even within that I can use different distros and run all kinds of services without them interfering with one another in any way, and in extreme cases I can spin up a Windows VM.
And another major factor for me was that I just wanted to learn how to do it. I think it's cool, it was interesting, and I had already used Docker to a level I felt comfortable with; it was time to move on and expand my horizons.
Tip: if you have the room for it, looking at second-hand servers (as in actual servers with server hardware) is often really useful.
As you start hosting more stuff you realize that ram and cpu cores are very limited in consumer hardware. With a shitty second hand server you could have more cores and more ram than anything in the consumer category, and you can stick an old GPU on it if you want some better media performance.
But if you truly believe that you won't spread out and that 64 GB of RAM and 8 cores will suffice, just go ahead and build it however you want. It's no different from a regular build. Get a nice SSD and a wired Ethernet connection and you're like 90% of the way there.
Edit: everyone else is giving much better advice; ignore my overkill here. For media and simple game servers with a low energy-consumption target you're probably better off with a mini PC with an integrated GPU, or if you want to future-proof a bit, maybe one of those unified-memory machines where your RAM is also the VRAM and can deliver pretty good performance.


It’s not that uncommon, because hashes have specific lengths, so usually the length alone tells you which checksum it is. Of course it's not perfect, but for file verification it's usually MD5, SHA-1, or SHA-256, and the length is enough to tell them apart.
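The length trick is easy to verify: each algorithm's hex digest has a fixed size (MD5 32 chars, SHA-1 40, SHA-256 64). A quick sketch:

```python
import hashlib

# Hex-digest lengths are fixed per algorithm, which is usually enough
# to tell which checksum an unlabeled file listing is using.
data = b"example"
for name in ("md5", "sha1", "sha256"):
    digest = hashlib.new(name, data).hexdigest()
    print(f"{name}: {len(digest)} hex chars")  # 32, 40, 64 respectively
```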
But yeah, dick move.


Only Tailscale for VPN and Backblaze for backup.


Simple answer: Yes!
Not so simple: yes, but Nvidia hates Linux and their proprietary drivers can cause issues. Generally (especially on stable distros) everything is stable and fine.


Careful! This is very dangerous, you should instead do
sudo chown -R user:user /*
Where “user” is your username, and then do
chmod -R 770 /*
This will make sure that only your user has all the access!
(Don’t do this)


Yes, but it was a huge corp that literally had its own Linux community within the corp.


Generally, the VRAM needed is slightly larger than the model's file size. That's an easy way to estimate VRAM requirements.


Everyone is stealing your data, and the US is doing so in the most intrusive and harmful way by far. If you don't mind using ChatGPT, you shouldn't mind DeepSeek or Qwen.
But really, you should avoid all of them as much as possible.


This is actually really smart. Instead of the classic three tries, where you flip the dongle twice only to find out that somehow it was right the first time, you just try to insert it into each port: two won't work and the last one will, saving you the hard work of flipping it.