

I guess it’s just google sans, they use this placeholder elsewhere too


oh, LXC containers! I see. I never used them because I find LXC setup more complicated; I once tried to use a TurnKey Samba container, but couldn’t even figure out where to add the container image in LXC, or how else to start it.
but also, I like that this way my random containerized services run on a different kernel than the main Proxmox kernel, for isolation.
Additionally, having them as CTs means that I can run straight on the container itself instead of having to edit a Docker file which by design is meant to be ephemeral.
I don’t understand this point. With Docker it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). Did you mean the docker compose file? Or a script file that contains a docker run command?
also, you can run commands or open a shell in any container with Docker, unless the container image doesn’t contain a shell binary (and even then, copying a busybox binary into a volume of the container would help), but that’s rare too.
you do it like this: docker exec -it containername command. A bit lengthy, but bash aliases help
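for reference, a couple of examples of that (the container name myapp is made up, and these obviously need a running Docker daemon):

```shell
# open an interactive shell in a running container ("myapp" is a hypothetical name)
docker exec -it myapp sh

# run a one-off command without opening a shell
docker exec myapp cat /etc/os-release

# a bash alias to shorten the incantation
alias dsh='docker exec -it'
# then: dsh myapp sh
```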
Also for the over committing thing, be aware that your issue you’ve stated there will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted. And when you over-allocate the system, RAM-wise, it will start killing containers potentially leaving them in the same state.
with Docker I don’t allocate memory, and it’s not common to do so; the system’s memory is shared among all containers. Docker has rudimentary resource limits of its own, but what’s better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd “.slice” units, and it’s easier than it sounds
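to make the .slice approach concrete, here’s a minimal sketch; the slice name selfhosted.slice is made up, the directives are standard systemd resource-control settings:

```ini
# /etc/systemd/system/selfhosted.slice  (hypothetical unit name)
[Slice]
MemoryHigh=3G     ; soft limit: memory reclaim pressure kicks in above this
MemoryMax=4G      ; hard limit for everything in the slice
CPUWeight=50      ; relative CPU share compared to other slices
```

then, assuming Docker uses the systemd cgroup driver, containers can be placed in it with something like docker run --cgroup-parent=selfhosted.slice …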


just know that sometimes their buggy frontend loads the analytics code even if you have opted out; there’s an ages-old issue about this on their GitHub repo, closed because they don’t care.
It’s Matomo analytics, so not as bad as some big tech, but still.


unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. The separate OSes have a huge memory overhead with all their running services, cache memory, etc. The memory usage of most services can vary a lot: if you could just assign 200 MB of RAM to each VM, that would be moderate, but you can’t, because when a VM needs more RAM than that, it will crash, possibly leaving operations half-finished and leading to corruption. And assigning 2 GB of RAM to every VM is a waste.
I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.
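a back-of-the-envelope sketch of that overhead, with made-up but plausible numbers:

```shell
# hypothetical figures: fixed per-VM guest-OS overhead vs. containers sharing one OS
services=15
vm_os_overhead=600   # MB per guest OS: kernel, init, services, caches
service_mem=200      # MB: typical footprint of one small service
vm_total=$((services * (vm_os_overhead + service_mem)))
ct_total=$((services * service_mem + vm_os_overhead))  # one shared OS
echo "VMs: ${vm_total} MB, containers: ${ct_total} MB"  # prints: VMs: 12000 MB, containers: 3600 MB
```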


Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.
why? It was not saying that they should quit self-hosting, and it was not condescending either, I think. It was about work.
but truth be told, IT is a very wide field, and maybe that generalization is actually not good. Still, 15 containers is not much, and as I see it they help keep all your hosted software from making a total mess of your system.
working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. But UX design is hard, so making useful GUIs is hard, and it also takes much more time than making a well-organized CLI tool.
in my experience the most important thing here is to get used to common operations in a terminal text editor, and to find an organized directory structure for your services that works for you. Also, use the man pages and --help outputs. And when you can afford to, you can scp files or complete directories to your desktop for editing with a proper text editor.
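as a sketch (the host name and paths are hypothetical):

```shell
# copy a service's config directory down for editing, then push it back
scp -r user@homeserver:/opt/stacks/myapp ./myapp
# ...edit locally with a proper editor...
scp -r ./myapp user@homeserver:/opt/stacks/
```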


What needs more than 1gbe? Are you streaming 8k?
I think they meant that it was a bottleneck while moving data to the new hardware
and a TCP handshake alone is what… Less than 1MB?
less than 1 KB
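rough math on that, using typical header sizes (the options allowance is a ballpark):

```shell
# size of a TCP three-way handshake on Ethernet, in bytes
eth=14; ip=20; tcp=20; opts=20      # header sizes; opts is a rough allowance for TCP options
per_packet=$((eth + ip + tcp + opts))
handshake=$((3 * per_packet))       # SYN, SYN-ACK, ACK
echo "$handshake bytes"             # prints: 222 bytes
```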


was duplicate


if it is signed by a key used in the committer’s public repos, or the committer is otherwise known to possess the key, that is proof, yes
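for what it’s worth, git can check such signatures directly, assuming the signer’s public key is in your GPG keyring (or listed in gpg.ssh.allowedSignersFile for SSH-signed commits):

```shell
git verify-commit HEAD       # check the signature on the latest commit
git log --show-signature -1  # or show signatures inline in the log
```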


I think copyparty can handle partial transfers both ways


the committer name in the repo is not ironclad proof; anyone can upload commits to their repo in Linus Torvalds’ name. But GitHub probably has the capability to find out who originally uploaded a commit, or what the upstream repo of a fork was
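forging the author name really is this trivial; a demo in a throwaway repo (the commit message and email are made up):

```shell
# commit under a forged author name in a temporary repo (illustration only)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Linus Torvalds" -c user.email="torvalds@example.com" \
    commit --allow-empty -q -m "totally legit commit"
git log -1 --format='%an <%ae>'   # prints: Linus Torvalds <torvalds@example.com>
```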


oh that’s good to know, thanks!


Tracker-free? There’s literally no way anyone could track you through RSS. It’s just an XML file and can’t run any arbitrary code.
there is, actually: the user agent string, the IP address, when the images get loaded. If clients can use server-provided CSS, that too can do some conditional reporting. But yes, the possibilities are much fewer, and they are easier to fix.
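for illustration, a sketch of how an image in a feed item can act as a tracker; the URL and subscriber token are made up:

```xml
<!-- a per-subscriber "tracking pixel": the image URL is unique, so fetching it
     reveals when, and from which IP and client, the item was read -->
<item>
  <title>Example post</title>
  <description><![CDATA[
    <p>Post body…</p>
    <img src="https://example.com/pixel.gif?subscriber=abc123" width="1" height="1">
  ]]></description>
</item>
```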


well Instagram is serious about ratelimits


that’s possible apparently: https://lemmy.dbzer0.com/comment/23840511


I have never heard anyone refer to TV as social media; I have always heard it in the context of Facebook, Twitter and co.
a better objection, which makes me uncertain whether Lemmy is social media, is that it’s a pseudonymous forum where it’s not common for users to become friends or know each other, and discussion doesn’t revolve around a specific news site or a specific person, but around specific topics


recommend trying Jitsi to your therapist. The main advantage of it to me is that it’s not made by big tech, who may be snooping in on the session. I love it; it has all the features and more
oh, and also Firefox. Jitsi and Firefox.
well, there’s a firmware selector page where you can request additional packages. Maybe it can also be used for removing some?
but if you build it yourself, you can make any changes you want, too
Don’t expect good Wi-Fi if you went with devboards like OpenWrt One.
why is that?
I heard Blue Iris can be run with Wine on Linux