

Yeah, that is the kind of concern for the service developer or a very opinionated sys admin. For self-hosting, few people will reach the workload where such a decision has any material or measurable impact.
God forbid the racist asshat doesn’t feel welcome on the internet. Fuck Asmongold, he is stupid and this argument is stupid, racist and made in bad faith. He is plain wrong; don’t defend the racist.
There is, but it’s not fun or interesting. Just weigh the pasta. The change in volume is irrelevant; just make sure it is between 25 and 75 grams of dry pasta per person. Then you’ll get between 50 and 150 grams of cooked pasta per person. The weight gain is absorbed water.
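The arithmetic above as a quick sketch. The 2x cooked-weight factor is a rough assumption based on the 25–75 g dry to 50–150 g cooked range in the comment, not a universal constant:

```python
# Rough pasta portion math: dry pasta roughly doubles in weight when cooked,
# because it absorbs water. The 2.0 factor is an approximation.
def cooked_weight(dry_grams: float, absorption_factor: float = 2.0) -> float:
    """Estimate cooked pasta weight from dry weight."""
    return dry_grams * absorption_factor

# 25-75 g dry per person -> roughly 50-150 g cooked per person
print(cooked_weight(25))  # 50.0
print(cooked_weight(75))  # 150.0
```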
Don’t give them another idiotic justification for the tariff wars.
People are using NAS boxes for things they aren’t meant to do. A NAS is a storage service and isn’t supposed to be anything else. In a typical data center model, NAS servers are intermediate storage, meant for fast data transfers, massive storage capacity and redundant disk fault tolerance. We are talking hundreds of hard drives and hundred-gigabit connection speeds inside the data center. This is expensive to run, so they are also very energy efficient, designed to keep the least amount of required disks spinning at any given moment.
They are not for video rendering, data wrangling, calculations or hosting dozens of docker containers. That’s what servers are for.
Servers have the processing power and host the actual services. They then request data from a NAS as needed. For example, a web service with tons of images and video will only have the site logic and UI images on the server itself. The content, video and images, will be on the NAS. The server will have a temporary cache where it copies the most frequently accessed content, and new content on demand. Any format conversion, video encoding, etc., will be done by the server, not the NAS.
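The cache-on-demand pattern described above, as a minimal sketch. The dict-backed “NAS” and the function names here are made up for illustration; a real setup would use a network filesystem or object store:

```python
# Minimal sketch of a server-side cache sitting in front of a NAS.
# NAS_STORAGE is just a dict standing in for remote storage; in reality
# _fetch_from_nas would be a network request (NFS, SMB, S3, etc.).
NAS_STORAGE = {"video1.mp4": b"...video bytes...", "img1.jpg": b"...image bytes..."}

class ContentCache:
    def __init__(self):
        self._cache = {}  # local fast storage on the server (e.g. SSD)

    def get(self, key: str) -> bytes:
        if key in self._cache:            # hot content is served locally
            return self._cache[key]
        data = self._fetch_from_nas(key)  # cache miss: go to the NAS
        self._cache[key] = data           # keep a local copy for next time
        return data

    def _fetch_from_nas(self, key: str) -> bytes:
        return NAS_STORAGE[key]  # stands in for the slow network hop

cache = ContentCache()
cache.get("video1.mp4")  # first access: pulled from the NAS
cache.get("video1.mp4")  # second access: served from the server's cache
```

A real cache would also evict cold entries and bound its size; this only shows the copy-on-first-access idea.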
Now, in self-hosting, of course, anything goes, and they are just computers at the end of the day. But if a machine was purpose-made for being a NAS server, it won’t have the most powerful processor, and that’s by design. It will have, however, an insane amount of SATA ports, PCIe lanes and drive bays, and a ton of sophisticated hardware for data redundancy, hot-swap capability and high-speed networking that is less common in general-purpose servers.
In most cultures they used to be done by the village’s elders…
…
Just let that one sink in.
…
If you were lucky, they were old women. Asking medical professionals was actually a big improvement.
Just to be very clear: this is happening because you didn’t have MFA active. I know it hurts to hear, but this is why you always migrate first, wipe the device second. MFA would’ve allowed you several methods of proving your identity. If your phone gets stolen, thieves can’t even use the phone for anything. You can remote wipe and block the device, and it turns into a paperweight: the device nukes your data, then locks the bootloader.
Beware, GnuCash is meant to be pro-level accounting software. It’s not a simple ledger or a tech/crypto gateway. I also use it for my personal life, but there’s like 30% of the features I don’t use because they’re business accounting stuff I don’t need. It predates the cloud, it cares not for the latest trends, it crunches numbers and spits out reports. That’s part of what I like about it. It is not simple, but it also isn’t bloated.
Oh yes, the very expensive Dev time cost of zero, because it is a fucking website.
On the contrary. It relies on the premise of segregating binaries, config and data, but since it is only running one app, it’s a bare-minimum version of that. Most container systems also include mechanisms that deduplicate common required binaries, so the containers are usually very small and efficient, while a traditional system’s libraries can balloon to dozens of gigabytes, with different software only ever using pieces of them at a time. Containers can be made headless and barebones very easily, cutting the fat and leaving only the most essential libraries, so they fit on very tiny and underpowered hardware without losing functionality or performance.
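One way the “barebones” idea looks in practice is a multi-stage Dockerfile that ships only a statically linked binary and nothing else. This is a hypothetical sketch; the image tags and paths are illustrative, not from any particular project:

```dockerfile
# Build stage: compile a static Go binary (illustrative; any static build works)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: "scratch" is an empty image -- no shell, no distro libraries,
# just the one binary. The resulting image is only a few megabytes.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The build toolchain and its gigabytes of dependencies stay in the build stage and never reach the final image, which is exactly the fat-cutting described above.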
Don’t be afraid of it, it’s like Lego but for software.
This is probably gonna blow your mind, but most shoes are worn on feet. Crazy, huh?
In 15 years of Linux use and tinkering, I’ve never screwed up a kernel. And I compiled LFS once.
That’s because of Active Directory. It makes managing hundreds of users, across as many devices, in a centralized manner, easier. You make a user for the person with the intended access scheme, hand them a random laptop imaged from a master OS, and off they go with access to all the software and tools tied to their user login. There’s no similar alternative with a robust support service for Linux clients. If there were, then changing a culture to Linux clients wouldn’t have so much friction.
Go atomic immutable. Is it different? Yes. But the system is always updated without any package hell, and it makes managing a system for others extremely simple. Bazzite for gamers, Aurora for workstations, Bluefin if you like GNOME.
That is exactly the department that should be spoiled with high-spec equipment.
Just remember that they didn’t certify macOS for any practical reason. Apple was just weaseling out of a lawsuit and figured that paying for the certification was cheaper than damages. I think they lost the certification some time later; newer macOS is not UNIX certified.
They still do. But projects like Bluefin are striving to get rid of it entirely. Flatpak installation is not package management; those are containerized applications.
The package-manager way of delivering distro management, updates and upgrades is an archaic and dumb idea, doomed to fail since inception, and the reason Linux never broke past 1% of users in forever. It’s a bad model.
Atomic and immutable distribution of an OS is the preferred and successful model for the average user who wants a PC to be a tool and not a hobby in itself. I don’t think the traditional package manager will ever go away, but there are alternatives now.
Yeah, that’s absolutely valid. But you run into the same problem again: “what the hell is an ostree?”, the average gamer would ask. Even some newer changes to bootc will make rpm-ostree unnecessary in the future. Flatpaks aren’t even mandatory; you could run Bluefin or Bazzite entirely on AppImages.
At least the term cloud native is standardized by the Cloud Native Computing Foundation, it has a long history, and it’s already known or familiar to a lot of people. Most important, I think, is that it is technology agnostic. Even if Docker dies and another tech takes its role, or if Kubernetes is replaced with something else, or even if rpm-ostree is no longer used, cloud native still means the same thing. As for bad smells, that’s just language; words can mean many things at once, we just live with it.
Go with Pangolin. You can easily host the control layer on either a cheap VPS or your own internet-exposed server. Same features as Tailscale, although with a bit more complexity.