
  • I’m a software developer first and a gamer second. Being a “gaming” distro does not detract from anything else, really. It just means that getting proper GPU acceleration is easy, and you’re likely to want that for development too. That was actually why I chose Bazzite. I was tired of wrestling with CUDA and ROCm.

    It’s not “gaming” vs “developing”. That’s a false dichotomy.

    The real choice is immutable vs traditional. And I’ll admit, immutable distros have a big learning curve. But it forces you to learn techniques that will make your life easier no matter where you go. The time I spent wrestling with dependencies on Debian or Ubuntu or OpenSuse just because I didn’t know about Distrobox…

    Unless your needs are very narrow and unchanging, you’re likely to run into something that’s a giant pain in the ass no matter which distro you choose. I used to use Ubuntu LTS so I could install a few big things in easy mode, but it made everything else harder because it was so outdated. I switched to OpenSuse Tumbleweed and everything was modern, but the vendors of those few big things don’t support it, so I was back to wrestling with dependencies.

    The answer to this problem is Distrobox. It’s the answer on Ubuntu, it’s the answer on OpenSuse, and it’s the answer on Bazzite. I’m never going back to dependency hell, because I can just run everything in the environment it was specifically designed for.

    If you’re wondering “should I use distro X, Y, or Z”, the answer is simply “yes”. :D


  • On bazzite, your search order for apps/packages should be something like:

    1. Flathub
    2. ujust. This is more for general configs than specific apps, but take a look at what it offers.
    3. Homebrew
    4. Distrobox
    5. Podman/Docker images
    6. rpm-ostree

    rpm-ostree is a last resort because it compromises the “atomic” principle of the system, but in a pinch it will give you access to anything you could get with dnf on a regular Fedora install.
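
    Roughly how those layers map to commands, with package names as placeholder examples (the ujust recipes vary between images, so run it and see what yours offers):

        flatpak install flathub org.mozilla.firefox   # 1. Flathub
        ujust                                         # 2. run with no arguments to browse the available recipes
        brew install ripgrep                          # 3. Homebrew, handy for CLI tools
        rpm-ostree install htop                       # 6. layers a package onto the base image; last resort

    Distrobox and Podman (4 and 5) are covered in the sketch a bit further down.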

    Don’t sleep on Distrobox. I have a Debian box so I can run Signal from its official repo and install Geany with both GUI and CLI support. Once you export applications from distrobox they behave like first-class citizens within your desktop.
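
    A minimal sketch of that workflow, using Geany as the example (the container name is arbitrary):

        distrobox create --name debian --image debian:stable
        distrobox enter debian
        # inside the box, install from Debian's repos as usual
        sudo apt install geany
        # then put it in the host's application menu
        distrobox-export --app geany

    After the export, Geany launches from the host menu like any other app, even though it lives inside the container.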

    I strongly recommend trying Distrobox. If you instead hop distros, you’re going to find yourself in a similar situation eventually, where something is unreasonably difficult. That’s why Distrobox exists: so you can get the best of all worlds.


  • The actual paper presents the findings differently. To quote:

    Our results clearly indicate that the resolution limit of the eye is higher than broadly assumed in the industry

    They go on to use the iPhone 15 (461ppi) as an example, saying that at 35cm (1.15 feet) it has an effective “pixels per degree” of 65, compared to “individual values as high as 120 ppd” in their human perception measurements. You’d need the equivalent of an iPhone 15 at 850ppi to hit that, which would be a tiny bit over 2160p/UHD.
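
    For reference, that figure follows from simple scaling: ppd is proportional to ppi at a fixed viewing distance, so 461 ppi × (120 / 65) ≈ 851 ppi.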

    Honestly, that seems reasonable to me. It matches my intuition and experience that for smartphones, 8K would be overkill, and 4K is a marginal but noticeable upgrade from 1440p.

    If you’re sitting the average 2.5 meters away from a 44-inch set, a simple Quad HD (QHD) display already packs more detail than your eye can possibly distinguish

    Three paragraphs in and they’ve moved the goalposts from HD (1080p) to 1440p. :/ Anyway, I agree that 2.5 meters is generally too far from a 44" 4K TV. At that distance you should think about stepping up a size or two. Especially if you’re a gamer. You don’t want to deal with tiny UI text.

    It’s also worth noting that for film, contrast is typically not that high, so the difference between resolutions will be less noticeable, provided you are comparing videos with similar bitrates. If we’re talking about Netflix or YouTube or whatever, they compress the hell out of their streams, so you will definitely notice the difference, if only by virtue of the different bitrates. You’d be much harder-pressed to spot the difference between a 1080p Blu-ray and a 4K Blu-ray, because 1080p Blu-rays already use a sufficiently high bitrate.







  • Thanks for posting the solution!

    If you happen to be using a BTRFS or XFS file system, you might want to try duperemove. It will help you reclaim usable disk space without deleting any files, by using those filesystems’ built-in support for data deduplication and copy-on-write. In other words, it will make duplicate files point to the same data on disk, but still work as individual files. Files will appear and function exactly the same, and editing one copy will not change another (unlike with hard links, for example). That way it won’t interfere with cases like Flatpak or Python virtual environments where you really need multiple copies of the same files.
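
    A minimal example, if you want to try it (the path is a placeholder; drop the -d to do a dry run that only reports what would be deduplicated):

        sudo duperemove -dhr /path/to/data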



  • Generally speaking, xz provides higher compression.
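
    For example, creating and extracting an xz-compressed tarball with nothing but tar (file names are placeholders):

        tar -cJf backup.tar.xz /path/to/dir    # create; -J selects xz
        tar -xJf backup.tar.xz                 # extract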

    None of these are well optimized for images. Depending on your image format, you might be better off leaving those files alone or converting them to a more modern format like JPEG-XL. Supposedly JPEG-XL can further compress JPEG files with no additional loss of quality, and it also has an efficient lossless mode.
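
    If you want to experiment with that, here’s a minimal sketch using the reference libjxl tools (file names are placeholders; as I understand it, the JPEG transcoding is reversible):

        cjxl photo.jpg photo.jxl        # losslessly recompresses the existing JPEG data
        djxl photo.jxl restored.jpg     # reconstructs the original JPEG if you ever need it back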

    Do any of them have the ability to recover from a bit flip or at the very least detect with certainty whether the data is corrupted or not when extracting?

    As far as I know, no common compression format has built-in error correction, and neither does tar. Formats like xz and gzip do embed checksums, so decompression will reliably fail on corrupted data, but recovering from that corruption is something you handle with external tools instead.

    For validation, you can save a hash of the compressed output. md5 is a bad hashing algorithm but it’s still generally fine (and widely used) for this purpose. SHA256 is much more robust if you are worried about dedicated malicious forgery, and not just random corruption.

    Usually, you’d just put hash files alongside your archive files with appropriate names, so you can manually check them later. Note that this will not provide you with information about which parts of the archive are corrupt, only that it is corrupt.
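
    For example (the archive name is a placeholder):

        sha256sum backup.tar.xz > backup.tar.xz.sha256   # create the checksum file
        sha256sum -c backup.tar.xz.sha256                # verify later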

    For error correction, consider par2. Same idea: you give it a file, and it creates a secondary file that can be used alongside the original for error correction later.
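
    Something like this (again, the archive name is a placeholder; -r sets the redundancy percentage):

        par2 create -r10 backup.tar.xz    # generate ~10% recovery data alongside the archive
        par2 verify backup.tar.xz.par2    # check for corruption
        par2 repair backup.tar.xz.par2    # attempt a repair using the recovery data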

    I also want the files to be extractable with just the Linux/Unix standard binutils

    That is a key advantage of this method. Adding a hash file or par file does not change the basic archive, so you don’t need any special tools to work with it.

    You should also consider your file system and media. Some file systems offer built-in error correction. And some media types are less susceptible to corruption than others, either due to physical durability or to baked-in error correction.





  • That can’t be good. But I guess it was inevitable. It never seemed like Arc had a sustainable business model.

    It was obvious from the get-go that their ChatGPT integration was a money pit that would eventually need to be monetized, and…I just don’t see end users paying money for it. They’ve been giving it away for free hoping to get people hooked, I guess, but I know what the ChatGPT API costs and it’s never going to be viable. If they built a local-only backend then maybe. I mean, at least then they wouldn’t have costs that scale with usage.

    For Atlassian, though? Maybe. Their enterprise customers are already paying through the nose. Usage-based pricing is a much easier sell. And they’re entrenched deeply enough to enshittify successfully.




    Yeah, that’s true for a subset of code. But for the rest, the hardest parts happen in the brain, not in the files. Writing readable code is very, very important, especially when you are working with larger teams. Lots of people cut corners here and elsewhere in coding, though. Including, like, every startup I’ve ever seen.

    There’s a lot of gruntwork in coding, and LLMs are very good at the gruntwork. But coding is also an art and a science, and at high levels they’re not good at either (same with visual art and “real” science; think of the code equivalent of seven deformed fingers).

    I don’t mean to hand-wave the problems away. I know that people are going to push the limits far beyond reason, and I know it’s going to lead to monumental fuckups. I know that because it’s been true for my entire career.


  • If I’m verifying anyway, why am I using the LLM?

    Validating output should be much easier than generating it yourself. P≠NP.

    This is especially true in contexts where the LLM provides citations. If the AI is good, then all you need to do is check the citations. (Most AI tools are shit, though; avoid any that can’t provide good, accurate citations when applicable.)

    Consider that all scientific papers go through peer review, and any decent-sized org will have regular code reviews as well.

    From the perspective of a senior software engineer, validating code that could very well be ruinously bad is nothing new. Validation and testing are required whether the code was written by an LLM or by some dude who spent two weeks at a coding “boot camp”.