Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 384 Comments
Joined 1 year ago
Cake day: June 25th, 2023


  • Yeah it’ll depend on how good your coreboot implementation is. AFAIK it’s pretty good on Chromebooks because Google maintains it, whereas a corebooted ThinkPad might have some downsides to it.

    I’d attribute the slowdowns mostly to bad power management, because ultimately the code runs on the CPU with no involvement from the BIOS unless you explicitly call into it, which should be very rare.

    Looking up the article seems to confirm:

    The main reason it seems for the Dasharo firmware offering lower performance at times was the Core i5 12400 being tested never exceeded a maximum peak frequency of 4.0GHz while the proprietary BIOS successfully hit the 4.4GHz maximum turbo frequency of the i5-12400. Meanwhile the Dasharo firmware never led to the i5-12400 clocking down to 600MHz on all cores as a minimum frequency during idle, instead bottoming out around ~974MHz.

    I’d expect System76 laptops to have a smaller performance gap, if any, since it’s a first-party implementation and it’s in their interest for that stuff to work properly. But I don’t have any coreboot computers so I can’t verify; that’s all assumptions.

    That said, at a ~5% performance loss, I’d say it counts as viable. My games VM has a similar hit vs native. I’ve been gaming on Linux since well before Proton and Steam, and I’ve taken much larger performance hits before just to avoid closing all my work to reboot for break-time games.
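
    If you want to sanity-check that on your own hardware, the kernel’s cpufreq sysfs interface is enough; a rough sketch (standard paths, nothing firmware-specific):

    # What the kernel thinks the hardware min/max clocks are
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq
    cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

    # Live per-core clocks; run a load in another terminal and see if you actually hit max turbo
    watch -n1 "grep MHz /proc/cpuinfo"

    # Or the summary view, if cpupower is installed
    cpupower frequency-info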



  • Yes dual GPU. I set that up like 6 years ago, so its use changed over time. It used to be Windows but now it’s another Linux VM.

    The reason I still use it is that it serves as a second seat, and it’s very convenient at that. The GPU’s output is connected to the TV, so the TV gets its own dedicated and independent OS, and my wife can use it when I’m not. When the VM isn’t running I use the card as a render offload (sketched at the end of this comment), so games get the full power of the better card as well.

    I also use it for toying with macOS and Windows because both of those are basically unusable without some form of 3D acceleration. For Windows I use Looking Glass, which makes it feel pretty close to native performance. I don’t play games in it anymore, but I still need to run Visual Studio to build the Windows exes for some projects.

    This week I also used the second card to test out stuff on Bazzite because one of my friends finally made the switch and I need to be able to test things out in it, as I have no fucking clue how uBlue works.
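
    On the render offload bit: it’s really just environment variables per process; roughly this (the exact variables depend on the driver, and steam is just an example program):

    # Mesa (AMD/Intel): run this one program on the second GPU
    DRI_PRIME=1 glxinfo | grep "OpenGL renderer"
    DRI_PRIME=1 steam

    # NVIDIA proprietary driver equivalent
    __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia steam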


  • The BIOS does a lot less than you’d expect; it doesn’t really have an impact on gaming performance. For what it’s worth, I’ve been gaming in a VM for years, and it uses the TianoCore/OVMF/EDK2 firmware with no issues. Once Linux is booted, it doesn’t really matter all that much. You’re not even supposed to use firmware services after the OS is booted; they’re only meant for bootloaders or simple applications. As long as all the hardware is initialized and configured properly, it shouldn’t matter.


  • It’s nicknamed the autohell tools for a reason.

    It’s neat, but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they probe the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, so that was the official way to install things, which meant they also needed to check for and locate dependencies and all that stuff. Nowadays you might as well just write a PKGBUILD (roughly sketched at the end of this comment) or a Dockerfile if you want to install it. There’s just no need to check for 99% of the stuff the autotools check: everything they test for has been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.

    Ultimately that whole process just generates a Makefile via M4 macros, and the result looks about as good as any other generated Makefile from the likes of CMake or Meson. So you might as well either write your Makefile by hand, or reach for one of those better tools when it’s time to generate one.

    (If only c++ build systems caught up to Golang lol)

    At least it’s not node_modules
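
    To illustrate the PKGBUILD point: this is roughly all it takes for a plain Makefile project (the name, URL and dependencies are made up), and the package manager handles the dependency checks the autotools used to do:

    # Hypothetical PKGBUILD for a plain Makefile project
    pkgname=someproject
    pkgver=1.0
    pkgrel=1
    arch=('x86_64')
    url="https://example.com/someproject"
    license=('MIT')
    depends=('zlib')            # runtime deps, resolved by the package manager
    makedepends=('gcc' 'make')  # build deps, no ./configure probing needed
    source=("$pkgname-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" PREFIX=/usr install
    }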





  • I think it is a circular problem.

    Another example that comes to mind: the sanctions on Huawei, and whether Google would be considered to be supplying software given that Android is open-source. At the very least, any contributions from Huawei are unlikely to be accepted into AOSP. The EU is also becoming problematic with the software-origin and quality certifications they’re trying to impose.

    This leads to exactly what you said: national forks. In Huawei’s case that’s HarmonyOS.

    I think we need to get back to being anonymous online: if you’re anonymous, nobody knows where you’re from, and your contributions can be judged solely on their merit. The legal framework just isn’t set up for an environment like the Internet, which severely blurs the lines between borders and rarely offers a clear “this company is supplying that company in the enemy country” relationship.

    Governments can’t control it, and they really hate it.


  • The problem isn’t even just where the software is officially based; it can become a problem for individual contributors too.

    PGP, for example, used to be problematic because US export controls on encryption forbade exporting systems capable of strong encryption, since the US wanted to be able to break it when used by others. An American sending the PGP tarball to the Soviets at the time could have been considered treason against the US, let alone letting them contribute to it. Heck, sharing 3D printable gun models with a foreign country can probably be considered supplying weapons like they’re real guns. So even if Linux were based in a more neutral country not subject to US sanctions, the sanctions would make it illegal to use or contribute to it anyway.

    As much as we’d love to believe in the FOSS utopia that transcends nationality, the reality is we all live in real countries with laws that restrict what we can do. Ultimately the Linux maintainers had to do what’s best for the majority of the community, which mostly lives in NATO countries honoring the sanctions against Russia and China.


  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank iptables just for itself.
    • Container-to-container communication goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure that this works as expected.

    The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a port on the host and send it to the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers; the rules just need to be in the right place to work.
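
    The documented place for your own rules is the DOCKER-USER chain, which sits in FORWARD ahead of Docker’s own rules; something like this (interface and subnet are just example values):

    # Drop forwarded traffic coming in on eth0 unless it's from the LAN,
    # which also covers published container ports
    iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP

    # Rules in INPUT won't help; forwarded container traffic never hits that chain
    iptables -L DOCKER-USER -n -v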


  • The sandboxing is almost always better because it’s an extra layer.

    Even if you gain root inside the container, you’re not necessarily even root on the host. So you have to exploit some software that has a known vulnerable library, trigger that in that single application that uses this particular library version, root or escape the container, and then root the host too.

    The most likely outcome is that it messes up your home folder and anything your user has access to, and quite possibly even less than that.

    Also, something having a known vulnerability doesn’t mean it’s triggerable. If you use, say, a zip library and only use it to decompress your own assets, then it doesn’t matter what bugs it has; it will only ever decompress that one known-good zip file. It’s only a problem once untrusted files get involved, where you can trick the user into opening them and triggering the exploit.

    It’s not ideal to have outdated dependencies, but the sandboxing helps a lot, and the fact that only a few apps have known vulnerable libraries further reduces the attack surface. You start having to chain a lot of exploits to do anything meaningful, and at that point that kind of effort goes toward bigger, more valuable targets.
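
    The “root in the container isn’t root on the host” part is easy to see with rootless containers; a quick illustration with podman (the output shown is approximate):

    # Inside the container you look like root...
    podman run --rm alpine id
    # uid=0(root) gid=0(root)

    # ...but the user namespace maps that to your unprivileged host user
    podman unshare cat /proc/self/uid_map
    #          0       1000          1
    #          1     100000      65536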





  • auto rollbacks and easy switching between states.

    That’s the beauty of snapshots: you can boot them. So you just need GRUB to generate the correct menu and you can boot any arbitrary version of your system. On the ZFS side of things there’s zfsbootmenu, and I’m pretty sure I’ve seen the equivalent for btrfs too. You don’t even need rsync; you can use ssh $server btrfs send | btrfs recv and it should in theory be faster too (btrfs knows if you only modified one block of a big file).

    and the current r/w system as the part that gets updated.

    That kind of goes against the immutable thing. What I’d do is make a script that mounts a fork of the current snapshot read-write into a temporary directory, chroots into it, installs packages, exits the chroot, unmounts and then commits those changes as a snapshot (sketched at the end of this comment). That’s the closest thing I can think of that’s easy to DIY, and it’s basically what rpm-ostree install does. It does it differently (a daemon that manages hardlinks), but filesystem snapshots basically do the same thing without the extra work.

    However, I think it would be good to use OStree

    I found this, maybe it’ll help: https://ostreedev.github.io/ostree/adapting-existing/

    It looks like the fundamental idea is the same: a temporary directory you run the package manager in, and then you commit the changes. So you can probably make it work with Debian if you want to spend the time.
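
    A rough sketch of the fork-and-chroot script I described above, using btrfs (the subvolume path and apt are just examples, and generating the boot entry is left out):

    #!/bin/sh
    set -e
    snap="/.snapshots/root-$(date +%Y%m%d-%H%M%S)"

    # Writable fork of the running root subvolume
    btrfs subvolume snapshot / "$snap"
    mount --bind /dev  "$snap/dev"
    mount --bind /proc "$snap/proc"
    mount --bind /sys  "$snap/sys"

    # Install into the fork, not the live system
    chroot "$snap" apt install -y "$@"

    umount "$snap/dev" "$snap/proc" "$snap/sys"
    # Freeze the new state; point GRUB at it to make it the next boot target
    btrfs property set "$snap" ro true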


  • All you really have to do for that is mount the partition readonly, and have a designated writable data partition for the rest. That can be as simple as setting it ro in your fstab.

    How you ship updates can take many forms. If you don’t need your distro to be atomic, you can temporarily remount read-write, rsync the new version over and make it read-only again. If you want it atomic, there’s the classic A/B scheme (Android, SteamOS), where you just download the image to the inactive partition and then switch over when it’s ready to boot into. You can also do btrfs/ZFS snapshots, where the current system is forked off a snapshot: on your builder you make your changes, take a snapshot, then zfs/btrfs send it to all your other machines, and they boot off that new snapshot (readonly).

    It’s really not that magic: even Docker, if you dig deep enough, is just essentially tarballs being downloaded and extracted each into their own folder, and the layering actually comes from stacking them with overlayfs. What rpm-ostree does, from a quick glance at the docs, is leverage the immutability and just build a new version of the filesystem using hardlinks, and you switch root to it. If you’ve ever opened an rpm or deb file, it’s just a regular tarball, and the contents pretty much map directly to the filesystem.

    Here’s an Arch package example, but rpm/deb are about the same:

    max-p@desktop /v/c/p/aur> tar -tvf zfs-utils-2.2.6-3-x86_64.pkg.tar.zst 
    -rw-r--r-- root/root    114771 2024-10-13 01:43 .BUILDINFO
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/bash_completion.d/
    -rw-r--r-- root/root     15136 2024-10-13 01:43 etc/bash_completion.d/zfs
    -rw-r--r-- root/root     15136 2024-10-13 01:43 etc/bash_completion.d/zpool
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/default/
    -rw-r--r-- root/root      4392 2024-10-13 01:43 etc/default/zfs
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/zfs/
    -rw-r--r-- root/root       165 2024-10-13 01:43 etc/zfs/vdev_id.conf.alias.example
    -rw-r--r-- root/root       166 2024-10-13 01:43 etc/zfs/vdev_id.conf.multipath.example
    -rw-r--r-- root/root       616 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_direct.example
    -rw-r--r-- root/root       152 2024-10-13 01:43 etc/zfs/vdev_id.conf.sas_switch.example
    -rw-r--r-- root/root       254 2024-10-13 01:43 etc/zfs/vdev_id.conf.scsi.example
    drwxr-xr-x root/root         0 2024-10-13 01:43 etc/zfs/zed.d/
    ...
    

    It’s beautifully simple. You could, for example, install Arch Linux without pacman by mostly just tar -x’ing the individual package files directly to /. All the package manager really adds is tracking which file is owned by which package (so it’s easier to remove), dependency solving so it knows what else to pull, and mirror/download management.

    How you get that set up is all up to you. Packer+Ansible can build disk images for you, and you could just throw them on a web server, download them and dd them to the inactive partition of an A/B scheme; that’d be quite distro-agnostic too. You could build the image as a Docker container and export it as a tarball. You can build a chroot, or a systemd-nspawn instance. You can also just install a VM yourself, set it up to your liking and then dd the disk image to your computers.

    If you want some information on how SteamOS does it, https://iliana.fyi/blog/build-your-own-steamos-updates/
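
    The A/B updater itself can be almost embarrassingly simple; a sketch (device names, URL and the grubenv variable are all made up for illustration):

    #!/bin/sh
    set -e
    # Slot A is /dev/sda2, slot B is /dev/sda3; / is whichever slot is active
    current="$(findmnt -no SOURCE /)"
    if [ "$current" = "/dev/sda2" ]; then target="/dev/sda3"; else target="/dev/sda2"; fi

    # Write the new readonly image straight onto the inactive slot
    curl -L https://example.com/myos-latest.img | dd of="$target" bs=4M

    # Tell the bootloader to boot the freshly written slot next time
    # (assumes the grub.cfg reads this variable to pick its root)
    grub-editenv /boot/grub/grubenv set next_root="$target"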



  • Pop!_OS is about to drop a whole new desktop environment (COSMIC) made from scratch that’s not just a fork of GNOME. Canonical tried that as well a while back with Unity, although it was mostly still GNOME with extra Compiz plugins.

    A lot of the cool stuff is also either for enterprise use, or generally under-the-hood stuff. A simple package update can mean someone’s GPU is finally usable. Even that LibreOffice update might mean someone’s annoying bug is finally fixed.

    But yes, otherwise distros are mostly there to bundle up and configure the software for you. It’s really just a bunch of software; you could get the exact same experience building your own with LFS. Distros make choices for you: which versions are best to bundle up as a release, what software and features to ship, glibc or musl, PulseAudio or PipeWire, and so on. Some distros like Bazzite are all about a specific use case (gamers), and all they do is ship the latest tweaks and patches so the handhelds behave correctly and just run the damn games out of the box. You could do the same with regular Fedora, but they have it all good to go for you. That’s valuable to some people.

    Sometimes not much is going on in open source, so it just makes for boring releases. That also usually means more focus on bug fixes and stability.