• 23 Posts
  • 556 Comments
Joined 5 years ago
Cake day: May 31st, 2020



  • As the other person said, the bit about Arch is just the preamble.
    But you can use Nix Home-Manager on Arch (or other distros), if you’re so inclined, which will give you that reproducibility for the stuff in your home directory.

    In some ways, this is like backing up and restoring your dotfiles, but it allows you to template those dotfiles and, depending on the program, it offers simpler ways to populate them. For example, KDE applications don’t generally offer very legible dotfiles, so configuring e.g. a panel via dotfiles is kind of a pain. To help with this, there’s Nix Plasma-Manager.



  • The thing I never understood about PowerShell is that it’s in places more verbose than C#, which is one of the most verbose programming languages in existence. It just feels like you might as well go for a full-fledged programming language at that point.

    The appeal of Bash et al is that the scripting is almost the same as the interactive usage, which you already know. But because PowerShell is so verbose, I’m really not sure people do use it interactively.

    I guess, that code snippet in the article makes somewhat of a difference, in that PowerShell offers better features for interop between processes. But man, that still feels like it could’ve been a library instead…





  • I agree in general that a crash is much better than silently failing, but well, to give you some of the nuance I’ve already mostly figured out:

    • In a script or CLI, you may never need to move beyond just crashing.
    • In a GUI application or app, a crash may be good (so long as unsaved data can be recovered), but you likely need to collect additional information for what the program was doing when the crash happened.
    • In a backend service, a crash can be problematic when it isn’t actually necessary, since it can be abused for Denial-of-Service attacks. Still infinitely better than failing silently, but yeah, you gotta invest in logging, monitoring and alerting, so you don’t need to crash to make it visible.
    • In a library, you generally don’t want to trigger a crash, unless an irrecoverable error happens, because you don’t know where it’ll be used.

  • Currently implementing error handling for a library I’m building and the process is basically to just throw all of the information I can find in there. It makes the error handling code quite verbose, but there’s no easy way for me to know whether the underlying errors expose that information already, so this is actually easier to deal with. 🫠
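
    To make that concrete, here’s a rough sketch of what “throw all the info in” looks like in plain Rust, without extra crates. The names (ConfigError, read_config, the path field) are purely illustrative, not the actual API of my library:

        use std::fmt;

        // Illustrative error type: wraps the underlying error and carries the
        // extra context, since we can't rely on the source exposing it.
        #[derive(Debug)]
        pub struct ConfigError {
            pub path: std::path::PathBuf,  // what we were trying to read
            pub operation: &'static str,   // what we were doing at the time
            pub source: std::io::Error,    // the underlying error, preserved
        }

        impl fmt::Display for ConfigError {
            fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
                write!(
                    f,
                    "{} failed for {}: {}",
                    self.operation,
                    self.path.display(),
                    self.source
                )
            }
        }

        impl std::error::Error for ConfigError {
            fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
                Some(&self.source)
            }
        }

        // The library returns the error instead of crashing; the caller decides
        // whether to abort, log or retry.
        pub fn read_config(path: &std::path::Path) -> Result<String, ConfigError> {
            std::fs::read_to_string(path).map_err(|source| ConfigError {
                path: path.to_path_buf(),
                operation: "reading the config file",
                source,
            })
        }

        fn main() {
            if let Err(err) = read_config(std::path::Path::new("/nonexistent/app.toml")) {
                eprintln!("{err}"); // prints the attached context, not just "No such file"
            }
        }

    Verbose for what it does, but the caller gets the path and the operation alongside the I/O error, instead of having to reconstruct them.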





  • Ephera@lemmy.ml (OP) to Linux@lemmy.ml · Underappreciated `top`
    16 points · 16 days ago

    Yeah, I especially don’t understand it here, because it’s a graphical tool. You don’t have to keep backwards compatibility.

    Even if you’re worried about people depending on the format that’s being piped, you could keep only the piped format stable. We have the technology.
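
    That’s basically just checking whether stdout is a terminal. A minimal sketch of the idea in Rust (the metric and field name are made up, purely for illustration):

        use std::io::IsTerminal;

        fn main() {
            let mem_used_kib: u64 = 123_456; // placeholder value, just for the example

            if std::io::stdout().is_terminal() {
                // Interactive: free to change layout, colors and units between versions.
                println!("Memory used: {:.1} MiB", mem_used_kib as f64 / 1024.0);
            } else {
                // Piped: emit a boring, machine-readable format and keep it frozen.
                println!("mem_used_kib={mem_used_kib}");
            }
        }

    Tools like ls already branch on this to switch between columns and one-per-line output, so the human-facing side can evolve without breaking anyone’s scripts.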


  • Ephera@lemmy.ml (OP) to Linux@lemmy.ml · Underappreciated `top`
    14 points · 17 days ago

    Yeah, I would often just grab htop because I had no idea how to read the CPU usage out of top.
    For example, for me it says:

    %Cpu(s):  0,4 us,  0,4 sy,  0,0 ni, 98,8 id,  0,0 wa,  0,3 hi,  0,0 si,  0,0 st
    

    Now that I look at it, I can guess that us and sy are supposed to be user and system time. And I guess id is supposed to be idle.
    I have no guess what the other numbers might be, though. And well, I would often like to see the CPU usage per core.
    Now I know that I can just press 1t and get effectively the same view as in htop.

    I might learn top’s filtering workflow, too. But so far, I’ve always killed processes with ps -ef | grep <process-name> and then kill <pid>, which isn’t particularly more cumbersome, so we’ll see…




  • Ephera@lemmy.ml to Programmer Humor@lemmy.ml · Career Advice
    5 points · 20 days ago

    Last year, money was running out in our project, and the guy who had trained me decided he’d take the L and move to another project, so the rest of us could continue on it. And yeah, suddenly I was in the role of the lead developer.

    Like, don’t get me wrong, I would’ve been the one to be moved to another project, if I wasn’t up for the task. It’s not like I was a complete dumbass.
    But it did still feel more like “I guess, we doin’ lead development now” rather than something I had intentionally worked towards.



  • I can’t really sell this as a solution, as it requires quite a lot more involvement than a simple configuration file should, but I use Nix Home Manager with Plasma Manager for this.

    This is part of the tooling you’d use on NixOS, but you can use it on other distros, too, and it generally works fine (although I’m not sure if the current version of Plasma Manager still supports Plasma 5, in case you’re still on a distro with that).

    Basically, it allows you to define e.g. keyboard shortcuts like this:

        shortcuts = {
          ksmserver = {
            "Lock Session" = [
              "Screensaver"
              "Meta+Ctrl+Alt+L"
            ];
          };
    
          kwin = {
            "Expose" = "Meta+,";
            "Switch Window Down" = "Meta+J";
            "Switch Window Left" = "Meta+H";
            "Switch Window Right" = "Meta+L";
            "Switch Window Up" = "Meta+K";
          };
        };
    

    It then fucks up the formatting so that it looks the way KDE expects, and throws it into ~/.config/kglobalshortcutsrc.
    (KDE does actually have a text-based config; it’s just borderline unusable.)

    Well, and you can do this with lots of other Plasma options, too. Here’s their official example: https://github.com/nix-community/plasma-manager/blob/trunk/examples/home.nix


  • Been hacking away at a library and definitely feeling this one. Some APIs, I’m not yet terribly happy with, and there’s always this urge to introduce a macro to hide away the ugly API.

    So far, I’m still staying away from it, because I’m just thinking that users will not have an easier time either way. The code they write may look prettier, but if they have to learn custom syntax rules for it, then it isn’t easier to understand in the end.
    Also, while a macro can clean up some rough edges, it won’t fix up an API that’s illogical to begin with. Well, unless you make it a Turing-complete macro (i.e. a proc_macro rather than macro_rules), but that makes it infinitely harder to understand once more.
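
    To make the trade-off concrete, here’s a made-up toy example (the Request builder and the request! macro are purely illustrative, not from my actual library):

        // A slightly clunky builder API...
        #[derive(Debug)]
        struct Request {
            method: String,
            url: String,
            headers: Vec<(String, String)>,
        }

        impl Request {
            fn new(method: &str, url: &str) -> Self {
                Request { method: method.into(), url: url.into(), headers: Vec::new() }
            }

            fn header(mut self, key: &str, value: &str) -> Self {
                self.headers.push((key.into(), value.into()));
                self
            }
        }

        // ...and a macro_rules macro that hides the chaining behind terser, custom syntax.
        macro_rules! request {
            ($method:ident $url:literal $(, $key:literal => $value:literal)*) => {
                Request::new(stringify!($method), $url)
                    $(.header($key, $value))*
            };
        }

        fn main() {
            let via_builder = Request::new("GET", "https://example.com")
                .header("Accept", "application/json");

            let via_macro = request!(GET "https://example.com", "Accept" => "application/json");

            println!("{via_builder:?}");
            println!("{via_macro:?}");
        }

    The macro call site looks tidier, but users now have to know that it wants an ident, then a literal, then "key" => "value" pairs in exactly that order, and none of that is discoverable from type signatures.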