Ruaidhrigh featherstonehaugh
“I AM THE LAW.”
2 Posts • 241 Comments
Joined 2 years ago; cake day: August 26th, 2022


  • There are some excellent apps out there, and by and large they look and work better than commercial apps, IME. So I disagree with the assertion that I have to stay with commercial software.

    What I was asking for, in my post, was not which apps have better UX than Facebook, but rather which of the very many OSS, federated (though federation isn’t necessary for my use case), self-hosted platforms fit this specific use case, ideally with a straightforward iOS mobile app. It doesn’t have to be pretty; it just has to be able to quickly take and post photos to a private channel/community/wall.

    Circles really is quite nice in all respects. I think they’re hindered by their choice of backend. I’ve been using Matrix for years, and key management has always been a hot mess. I wouldn’t be surprised if the issues we encountered were related to Matrix’s god-awful and buggy PK negotiation & management process.




  • Mine is 3-pronged:

    1. btrfs + snapper takes care of most level-1 situations; I take a snapshot on every change to root (/), plus one nightly /home snapshot (sketched after this list). But it’s pretty demanding on disk space, and it doesn’t handle drive failure; so I also do
    2. restic + USB drive, which I can cram way more snapshots onto, so I keep a couple of weeks of daily snapshots, one monthly snapshot for a year, and one snapshot per year, going back several years. I currently have snapshots from my past 3 computers on one giant drive. However, these drives can also fail, and won’t protect me from burglary or house fire, so I also do
    3. restic + BackBlaze. I just take a nightly snapshot for every computer and VM I manage. My monthly B2 bill is around $10. The VMs don’t change much, and I only snapshot data and config directories (only stuff I can’t spin up fairly quickly in a container, or via a simple install command), so most of the charge comes from a couple of decades of amateur digital photography, and an archive of all our digital music (because I’ll be damned if I’m going to spend weeks re-digitizing all those CDs).
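    For prong #1, the snapper side looks roughly like this (a minimal sketch; the config names and the nightly-job wiring are illustrative, not my actual setup):

    ```sh
    # Sketch: snapper configs for / and /home (names are illustrative).
    sudo snapper -c root create-config /
    sudo snapper -c home create-config /home

    # The root config gets snapshots around package-manager changes on
    # distros with the snapper plugin; /home gets one nightly snapshot
    # from a cron job or systemd timer:
    snapper -c home create --description "nightly"
    ```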

    The only option for “restore the entire system because I screwed up the OS” is #1. I could (and probably should) make a whole-disk snapshot to a backup drive via #2, but I’m waiting until bcachefs is more mature; then I’ll migrate to that for the interesting replication options it allows, which would make real-time disk replication to slow USB drives practical. I’d only need to snapshot /efi after kernel upgrades, and if I had that set up and a spare NVMe on hand, I could probably be back up and running within a half hour.
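    For prongs #2 and #3, the nightly job is just restic pointed at a different repository; here’s roughly what the B2 version looks like (a sketch: the bucket name, paths, and retention values are placeholders, not my actual config):

    ```sh
    #!/bin/sh
    # Sketch of a nightly restic -> Backblaze B2 job (values illustrative).
    export B2_ACCOUNT_ID="..."          # B2 application key ID
    export B2_ACCOUNT_KEY="..."         # B2 application key
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    REPO="b2:my-backup-bucket:$(hostname)"

    # Only data and config directories, not the whole OS:
    restic -r "$REPO" backup /home /etc --exclude-caches

    # Retention roughly matching the scheme above: ~2 weeks of dailies,
    # a year of monthlies, and yearlies beyond that.
    restic -r "$REPO" forget --prune \
        --keep-daily 14 --keep-monthly 12 --keep-yearly 10
    ```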


  • the practice of deliberately wasting enormous amounts of energy for the purpose of being able to prove that you’ve wasted enormous amounts of energy.

    C’mon, that’s being disingenuous. Back when Bitcoin was released, nobody was giving a thought to computer energy use. Wasted energy is a consequence of proof-of-work, but low-power modalities and throttling have been developed in the intervening years. The prevailing paradigm at the time was, “your CPU/GPU is going to be burning energy anyway, you may as well do something with it.”

    It was a poor design decision, but it wasn’t a malicious one like you make it sound. You may as well accuse the inventors of the internal combustion engine of designing it for the express purpose of creating pollution.




    Hugo isn’t a server, per se. It’s basically just a template engine. It was originally focused on turning markdown into web pages, with some extra functionality around generating indexes and cross-references; that’s really what set it apart from a simple rendering engine. And by now, much of its value is in the huge number of site templates built for Hugo.

    What Hugo does is take some metadata and whatever markdown content you have, and generate a static web site. You still need a web server pointed at the generated content. You run Hugo on demand to regenerate the site whenever there’s new content (although there is a “watch” mode, where it’ll watch for changes and regenerate the site in response). It’s a little fancier than that; it doesn’t regenerate content that hasn’t changed. You can have it create whatever output format you want: mine generates both HTML and gmi (Gemini) sites from the same markdown. But that’s it: at its core, it’s a static site template rendering engine.
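    The dual-output trick is just a bit of site configuration, using Hugo’s custom output format support. A sketch of the idea (keys from memory; treat them as illustrative and check Hugo’s docs):

    ```toml
    # config.toml: emit both HTML and Gemini from one source tree.
    [mediaTypes."text/gemini"]
      suffixes = ["gmi"]

    [outputFormats.Gemini]
      mediaType   = "text/gemini"
      isPlainText = true
      protocol    = "gemini://"

    [outputs]
      home    = ["HTML", "Gemini"]
      section = ["HTML", "Gemini"]
      page    = ["HTML", "Gemini"]
    ```

    Each output format then needs its own set of templates alongside the HTML ones.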

    It is absolutely suitable for creating a portfolio site. Many of the available templates are exactly that. And it’s not hard to make your own templates, if you know the front-end technologies.




    Sourcehut is for-profit. You pay them to host your data, to provide public access, to run mailing lists, to run CI build servers… you’re paying for the services. But the source code is OSS; you can download and run your own services, all or just a few. “Paying them to host the software for you” isn’t the issue, right? It’s not that someone is charging for hosting and maintenance (and, ultimately, salaries for the people working on the software); it’s whether or not the software is free, and whether you can self-host.

    I like your point about finding repos. I think it’d behoove all of the bit players to band together to provide one big searchable repo list. Heck, even I, who hate github with a smoldering passion, have enough sense to go there first to search for software; that’s just the nature of a hegemony. The stumbling of the attempt to create a common VCS hosting API (ForgeFed) is lamentable, but getting adoption would have been an uphill battle even without the rumored in-fighting and drama.




    Docker is one flavor of software that uses Linux containers to encapsulate software and that software’s dependencies, while limiting that software’s access to the underlying OS. It’s chroot, but for more of the system. It can make running software that has a lot of moving parts and dependencies easier. It can also improve your security when running that software.

    For how-tos, watch one of the 875,936 YouTube tutorials, or read one of the 3 million text tutorials. Or ask ChatGPT, if you really need hand-holding.
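    Not a tutorial, but to give the flavor (a sketch; nginx is just a stand-in image, and the ports and paths are illustrative):

    ```sh
    # One command pulls the image and runs the service, dependencies
    # included; the host provides only the kernel and whatever you
    # explicitly share with the container.
    docker run -d \
      --name web \
      --restart unless-stopped \
      -p 8080:80 \
      -v /srv/web/html:/usr/share/nginx/html:ro \
      nginx:stable
    ```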


    Related, but just hanging this on here: as the default (as-installed) security of distributions has improved, so has the number of headaches when trying to use tools like this. For decades, when I had issues like this, it was almost never because of a LAN firewall, and so even now my first thought is never “I should check the firewall,” when it should be.

    Sadly, firewall info is almost always locked down so that apps can’t even check by themselves and provide helpful hints to users.

    Anyway, it’s been a hard lesson for me to learn, for some reason. I need to practice my mantra: it’s always the firewall.
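    For anyone else learning the same lesson, the checks worth running first (commands for the common firewall frontends; yours may differ):

    ```sh
    # Is anything actually listening on the port?
    ss -tlnp | grep 8080

    # firewalld (Fedora, openSUSE, and friends):
    sudo firewall-cmd --list-all

    # ufw (Ubuntu and friends):
    sudo ufw status verbose
    ```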






  • Your point is a very important one. The numbers have to come up so that manufacturers notice. It might make the difference in a laptop designer choosing a well-Linux-supported wifi chip, instead of a shitty, closed chipset like Broadcom. When the price-per-unit difference is pennies, knowing that you’re potentially losing some thousands of customers in exchange for saving a few cents per unit can make the difference in how you choose.

    It also matters in user choice in the workplace. The more normalized Linux is, the more likely there will be skills in IT support, more mass-management tools, and more willingness to allow employees to choose their OS.

    But where it really matters is in standards. Diversity puts pressure on software developers to use standardized and open data exchange formats. I can’t emphasize enough how important diversity in OSes is to driving creation of, and conformance to, standards, and how anathema monocultures are to standards.

    Even within OSS this is true: github and git have become monocultures; they aren’t standards, they’re tools developers are forced to use if they want to interact with the wider development world in any meaningful way. They’re not bad; git became dominant largely because github used to be so fantastically better than anything else available at the time; but now, their very dominance stifles diversity and innovation. Want to try the rather exciting pijul, the patch-based spiritual successor to darcs? Fuck you, because you won’t be able to collaborate with anyone, and your repos won’t work with any proglang module systems like cargo or Go modules, because it isn’t git[1]. Monocultures are bad, whether they’re evil corporation software or FOSS.

    Higher Linux use increases diversity, encourages data format standards, and creates a healthier ecosystem. That’s why these numbers are important.

    [1] Go and Rust’s cargo support more VCSes than git, but they could easily not, and I’m sure the maintainers of the VCS code wish they could drop support for some of the long tail - and everything that isn’t git is on the long tail at this point. There are attempts at creating some standards around this: ActivityPub has tossed around ideas, and forgefriends has been trying for a breakthrough for years - but none of them address the root issue of how tools can access source code efficiently in a way abstracted from the underlying VCS. Any such tool currently must have bespoke code to speak the network language of every VCS it supports. And since git is the most popular, when faced with the daunting task of supporting N VCSes, where N−1 of them are in toto used by a small percentage of users, it’s just easier to support only git.
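    To make that concrete: Go’s toolchain ships per-VCS fetch logic and even a knob controlling which VCSes the go command may invoke (from memory; see `go help vcs` for the authoritative details, and the module path below is a placeholder):

    ```sh
    # GOVCS (Go 1.16+) whitelists which version-control tools the go
    # command may shell out to, per module-path pattern:
    GOVCS='public:git|hg,private:all' go get example.com/some/module

    # Every non-git VCS on that list needs its own bespoke support in
    # the toolchain, which is exactly why "everything that isn't git"
    # tends to get dropped.
    ```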