*Edit: I have figured out how to use BTRFS and enable what it calls “transparent file compression”, and I’m going to use it on most of my old storage devices. The only problem is my oldest device: BTRFS takes up too much space on it, so I want to use F2FS there instead, and multiple users told me that F2FS also supports transparent file compression. But I can’t get files to compress, and I’m not getting any error messages to troubleshoot with. Based on what the documentation says, I’m supposed to do something like this:

sudo mkfs.f2fs -f -O extra_attr,inode_checksum,sb_checksum,compression /dev/mmcblk0p1
sudo mount -o 'compress_algorithm=zstd,compress_extension=*' /dev/mmcblk0p1 '/home/j/mountpoint/128mb'
sudo chattr -R +c '/home/j/mountpoint/128mb'

The device mounts fine this way, but files aren’t compressed when they’re added, and running the last command doesn’t compress files that were moved there beforehand either.*
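A quick sanity check I’ve been using (a sketch with a throwaway temp file — swap the `mktemp` path for a file under the F2FS mountpoint to test for real): compare a file’s logical size to the blocks actually allocated for it. One caveat: even with compression working, F2FS keeps the saved blocks reserved unless they’re explicitly released, so the allocated count may not shrink there the way it does on BTRFS.

```shell
# Sketch: compare a file's logical size with the blocks actually
# allocated for it (GNU coreutils du). Point it at a file on the
# compressed volume instead of a temp file to test for real.
f=$(mktemp)                  # placeholder; use a path on the F2FS mount
dd if=/dev/zero of="$f" bs=1024 count=64 status=none
du --apparent-size -k "$f"   # logical size: 64 KiB
du -k "$f"                   # KiB actually allocated on disk
rm -f "$f"
```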

I’m rewriting the old portion for clarification:

In Windows, there’s a file/folder option called “Compress contents to save disk space”. As the name suggests, it compresses files but leaves them accessible as though they weren’t compressed. This doesn’t have much benefit on newer storage devices, but on older ones it saves space and can potentially make files read faster, since less data has to come off the slow medium.

As I have some old storage devices that I want to run games from, I think this would be a great option to have if I can find something similar for Linux. I tried looking online myself, but search engines are terrible and I couldn’t find anything through them. So I decided to post about it here, to see if anyone knows of anything I could try.

  • monovergent@lemmy.ml · 13 hours ago

    F2FS seems to do what you want: it’ll reserve the original size of your file but compress what’s actually written. Performance numbers might be massively inflated if your writes don’t saturate the cache in RAM.

    I’ve used BTRFS on SD cards before and it’s mostly fine, but it will struggle massively if over 90% full, or if you have a < 1 GB volume and are, say, frequently updating a handful of files that together take up more than half its capacity. This is mostly due to the CoW mechanism: it needs some headroom to make a copy of whatever files are being modified.
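    That headroom point is easy to put in numbers; a back-of-envelope sketch (all figures hypothetical) of why a small, mostly-full CoW volume gets stuck:

    ```shell
    # Hypothetical numbers: rewriting a file on a CoW filesystem first
    # writes new copies of the modified extents, so free space has to
    # cover the size of the rewrite before the old copy is freed.
    vol_mib=1024     # total volume size
    used_mib=600     # space already used
    file_mib=600     # file being updated in place
    free_mib=$((vol_mib - used_mib))
    if [ "$free_mib" -lt "$file_mib" ]; then
        echo "rewrite may fail: need ${file_mib} MiB headroom, have ${free_mib} MiB"
    fi
    ```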

    • vortexal@lemmy.ml (OP) · 13 hours ago

      I figured out what I did wrong, but now I’m getting a different error message. It says “Error: Wrong features compress_algorithm=zstd”, even though the page I was given says it should be possible. Any ideas on how to fix that?
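      Two guesses worth ruling out (hard to be sure without seeing the full command): compress_algorithm=zstd is a mount option, not a mkfs.f2fs -O feature, so passing it to mkfs would trip exactly that “Wrong features” error; and the kernel also has to be built with F2FS zstd support. If your distro exposes the kernel config, the second one can be probed (the CONFIG names are the upstream ones):

      ```shell
      # Probe the running kernel's config for F2FS compression support;
      # falls back to /proc/config.gz where /boot configs aren't shipped.
      cfg="/boot/config-$(uname -r)"
      if [ -r "$cfg" ]; then
          grep -E 'CONFIG_F2FS_FS_(COMPRESSION|ZSTD)' "$cfg" || echo "F2FS zstd support not built in"
      elif [ -r /proc/config.gz ]; then
          zgrep -E 'CONFIG_F2FS_FS_(COMPRESSION|ZSTD)' /proc/config.gz || echo "F2FS zstd support not built in"
      else
          echo "kernel config not exposed; check your distro's kernel package"
      fi
      ```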

    • vortexal@lemmy.ml (OP) · 13 hours ago

      I’ve done some testing with F2FS and it does seem like what I want to go with, but I can’t get compression working. Someone else linked me to this page, but for some reason the command on that page gives an error message saying “Error: Failed to get the device stat!”. I already asked another user, but assuming I can get it working, how would I change the command they provide to enable zstd compression? From what I’ve read, zstd is the compression method I want to be using.
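      For what it’s worth, “Failed to get the device stat!” from mkfs.f2fs usually just means the device path in the command doesn’t exist on that machine; worth stat-ing the path before anything else (the device name below is the one from earlier in the thread, so swap in your own):

      ```shell
      # Confirm the block device actually exists before formatting;
      # /dev/mmcblk0p1 is the partition named earlier in the thread.
      dev=/dev/mmcblk0p1
      if stat "$dev" >/dev/null 2>&1; then
          echo "$dev exists"
      else
          echo "$dev not found; run lsblk to find the right name"
      fi
      ```

      As for zstd: the algorithm isn’t picked at format time. Enable the feature with mkfs.f2fs -O extra_attr,inode_checksum,sb_checksum,compression, then choose the algorithm when mounting via -o compress_algorithm=zstd.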