So similar to kakoune? I tried that for a while, but it was missing some features so I went back to vim/neovim.
I need to know vi anyway, because that is available everywhere (as part of busybox), so using vim/nvim for bigger systems just fits.
Or you get a message at 3 am the next day, in which they apologize that they didn’t make it, but promise to deliver sometime between 8 and 22 that day.
What about Lua/Luajit?
In most scripting languages you have the interpreter binary and the (standard) libraries as separate files. But creating self-extracting executables that clean up after themselves can easily be done by wrapping them in a shell script.
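A minimal sketch of that wrapper idea, assuming a tar.gz payload (interpreter, libraries, script) appended to the script itself; the paths bin/lua and main.lua are just placeholders:

```
#!/bin/sh
# Self-extracting wrapper sketch: a tar.gz payload is appended to this
# file after the __PAYLOAD__ marker line.
set -eu

# Unpack into a scratch directory that is removed again on exit.
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT INT TERM

# Find the first line after the marker and stream the rest into tar.
payload_line=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n "+$payload_line" "$0" | tar -xzf - -C "$tmpdir"

# Run the bundled interpreter with the bundled script; the trap cleans up.
"$tmpdir/bin/lua" "$tmpdir/main.lua" "$@"
exit $?
__PAYLOAD__
```

You would build it with something like `cat wrapper.sh payload.tar.gz > tool && chmod +x tool`.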
IMO, if low dependency count and small size are really important, you could also just write your script in a low-level compiled language (C, Rust, Zig, …), link it statically (e.g. against musl) and execute that.
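A hedged example of what that could look like for C and Rust (hello.c is a placeholder; musl-gcc comes from the musl-tools package on Debian/Ubuntu):

```
# Build a fully static binary against musl instead of glibc.
musl-gcc -static -Os -o hello hello.c
file hello   # should report "statically linked"

# Rough Rust equivalent: build for the musl target.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```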
I started using Fedora Silverblue on a tablet and it seems to work fine so far. Requiring a reboot in order to install new system packages is a bit cumbersome, and the process itself takes a while, but ordinary Fedora also doesn’t win any races when asked to install a new package.
I think switching to FCOS or Flatcar on servers that just run containers makes sense, since it lessens the burden of administrating the base system itself. Using Butane/Ignition might be unusual at first, but it also allows you to put the base system configuration into a git repo, and makes initial provisioning with Ansible or similar unnecessary. The rest of the system and its services can be managed via Portainer or similar software.
I also do not have long-term experience with FCOS, but the advertised features of auto-updates, rolling releases and a focus on security and stability make it a good fit for container servers, IMO.
An alternative to Debian on servers might also be Alpine Linux, which has more of a focus on network devices, but some people use it on the desktop as well.
If you have many different systems and just want one way to operate them all, NixOS might be interesting. Using flakes, you can configure multiple machines from just one repo and share configuration between them. But getting up to speed on NixOS might not be so easy; it has a steep learning curve.
So generally the pro of coreboot is that it is open source, but the con is that it is open source.
What I mean by that: you can fix any issues yourself; however, if you are unable to do that yourself, you have to wait until someone does it for you, and which features are available and stable is often hit and miss.
With proprietary BIOSes, the company has some kind of standardized process for developing the BIOS, so you usually get what you would expect. However, if the money flow from the PC vendor to the BIOS vendor dries up, you, or the community of owners, will not be able to fix any issues.
Linux support should be the same regardless of whether you choose a proprietary or open source BIOS. But that depends on how well coreboot was ported to the platform, so officially supported coreboot BIOSes are likely better than others.
Personally, if all other attributes are equal, I would go with coreboot, because I like to support vendors that offer that choice, and IMO an open source solution that you can review and build yourself is intrinsically more secure than a binary blob where you have to blindly trust some corporation. But other security-minded people might disagree, which is fine.
Check if you find anything about this in the kernel log (dmesg).
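For example (the flags below are from util-linux dmesg; depending on the kernel.dmesg_restrict sysctl you may need root):

```
# Show only warnings and errors from the kernel ring buffer.
dmesg --level=err,warn

# Or search the kernel messages for a keyword, e.g. a device name.
dmesg | grep -i usb

# On systemd machines the same messages are also in the journal.
journalctl -k -p warning
```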
It’s not the drama itself that should influence your judgment, but how they deal with it.
Whenever people work together on something, there will be some drama, but as long as they deal with it, that should be fine.
Nix and NixOS are big enough that even if the project fails, there are enough other people who will continue it, maybe under a different name.
Even if that causes a hard fork, which I currently think is unlikely, there are many examples where that worked and resolved itself over time without too much of a burden on the users, meaning there were clear migration processes available: owncloud/nextcloud, Gogs/Gitea/Forgejo, redis/valkey, …
I like RPGs; however, I don’t like it when the company has the ability and the incentive to bait-and-switch my game into a worse version after I bought it.
Denuvo forces me to be connected to the internet, which makes playing the game on the move difficult or even impossible. It also lets them make sure that only the most current version is played. MTX means they have no incentive to fix the game; instead they sell you the fixes, or even enshittify the game to squeeze out more money.
This gives me the incentive to wait a couple of years, until the game doesn’t receive any updates anymore, and then decide if the final product is worth it. And hope that I will get a good experience out of it, before the Denuvo activation servers are shut down.
So you have to wait a few years in order to know whether the gameplay is (and stays) any good.
Nvidia has created a bit of a sore spot for many Linux developers and thus users. Through their actions and inactions they made it impossible to create FOSS drivers for their hardware that work well and are integrated and tested with the rest of the system.
Many fresh users don’t seem to recognize that the reason they are having a subpar experience with their hardware is Nvidia, not the open source community. They often blame and complain to the developers of the open source drivers or applications, who either have to hack around hurdles placed by Nvidia or cannot inspect the closed source drivers written by that company.
It is IMO understandable that at some point the community stops providing free, unpaid customer support for hardware and software they have no control over or don’t even own.
If you started paying them, I suspect you might get better answers. Otherwise you just get information about the stuff people are excited about.
Only really nice when no CLA is required and every contributor retains their copyright. Ente doesn’t seem to require a CLA.
Otherwise a CLA allows the owner to just take the changes from their contributors and change the license at a later date.
Or other standard archiving formats like WARC.
There also is https://github.com/ArchiveBox/ArchiveBox which looks a bit similar.
Snap is just one case where Ubuntu is annoying.
It is also a commercial distribution. If you have ever used a community distribution like Arch, Gentoo or even Debian, you will notice that they encourage participation much more. You can contribute your ideas and work without being required to sign any CLAs.
Because Ubuntu wants to control/own parts of the system, they tend to create their own, often subpar, software that requires CLAs, rather than contributing to existing solutions. See upstart vs OpenRC and later systemd, Mir vs Wayland (they later adopted systemd and Wayland anyway), Unity vs GNOME, snap vs flatpak, microk8s vs k3s, Bazaar vs git or mercurial, … The NIH syndrome is pretty strong in Ubuntu. And even where Ubuntu came first with one of these solutions, the community had to create the alternative because Ubuntu was controlling it.
I mod my games on my PC and sync them to my Steam Deck. I also sync the save files back and forth to continue playing on different devices. Mostly non-Steam games.
I also sync my eBook collection to my eink reader with syncthing.
Everything is also mirrored to my always-on NAS, so syncing always works.
Game developers seem to be very afraid to change core features or the story of the game in a major way (even if the actual work would not be too extensive) after release. But there are enough examples where games improved a lot after release.
Sure, the initial impression of the game might be ruined, but if the game is fixed afterwards, that is more a consequence for the producers, who were most often responsible for the rushed release, than for the gamers or developers.
Environment variables aren’t a Docker-specific concept, but part of general native app programming. So look into the docs of your language.
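As a rough illustration (DB_HOST and myimage are made-up names; the exact read call depends on your language):

```
# Pass a variable into the container at startup.
docker run -e DB_HOST=db.example.com myimage

# In a shell script inside the container it reads like any other variable.
echo "connecting to ${DB_HOST:-localhost}"

# In C it would be getenv("DB_HOST"), in Python os.environ.get("DB_HOST"),
# in Go os.Getenv("DB_HOST"), and so on.
```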
I am hosting Bitwarden myself (on a VPS) and I am not that concerned about losing my passwords, because every device regularly syncs all passwords locally, so you don’t need internet access to read them.
So to lose all your passwords, you would not only have to lose your Bitwarden server and all its backups, you would also have to lose access to all your Bitwarden clients simultaneously.
An interesting concept would be if all the hands on the twelve clocks worked, but the hands of the clock in the middle were stuck at the 12 position; this way the hands in the middle would point to the clock showing the correct time.