Hey, my setup works for me! Just add an option to enable CPU overheating in the next update!
Same, I thought it was used commonly too.
I like Ardour. Unfa on YouTube made a great tutorial on how to use it.
It isn’t misusing metric; it simply isn’t metric at all.
single master text file
Sounds like something you are using to manage your packages to me…
Stop giving them ideas!
IANAL, but it looks like they are violating Apache 2.0, as they are supposed to retain the license and mark any changes.
Sure. You should have a file at ~/.wallpaperrc that contains wallpaper_playlist: /path/to/mpv/playlist. If you are using an NVIDIA Optimus laptop running in hybrid mode, you should also add __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia at the start of the last line to run mpv on the dGPU (see the variant after the script). You may want to add this script to your startup sequence via your WM/DE.
#!/bin/sh
# Read the playlist path from ~/.wallpaperrc, skipping commented-out lines
WALLPAPER_PLAYLIST=$(grep -v '^\s*#' ~/.wallpaperrc | grep 'wallpaper_playlist' | sed "s/wallpaper_playlist: //")
# Play the playlist on a borderless fullscreen window (xwinwrap substitutes WID with the window ID it creates)
xwinwrap -g 1920x1080 -ov -- mpv -wid WID --no-osc --no-audio --loop-playlist --shuffle --playlist="$WALLPAPER_PLAYLIST"
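For the hybrid-mode Optimus case mentioned above, the last line would look something like this (a sketch using the standard PRIME render offload variables; untested on my end):
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia xwinwrap -g 1920x1080 -ov -- mpv -wid WID --no-osc --no-audio --loop-playlist --shuffle --playlist="$WALLPAPER_PLAYLIST"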
Hope this helps!
I set mpv as the root window which worked well. I stopped using it a while back, but if you are interested, I could dig up the simple script for you (literally one or two lines iirc).
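From memory, it was something along these lines (I believe --wid 0 targets the X11 root window, but double-check the mpv manual before relying on that):
mpv --wid 0 --no-osc --no-audio --loop-playlist --shuffle --playlist=/path/to/mpv/playlist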
Wow, CUPS is way better than I previously thought, and I already thought it was amazing!
If I’m being honest, it is fairly slow. It takes a good few seconds to respond on a 6800 XT using the medium VRAM option, but that is the price you pay for running AI locally. Of course, a cluster should drastically improve the speed of the model.
You can run LLMs such as OpenLLaMA and GPT-2 on text-generation-webui. It is very similar to the stable diffusion web UI.
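If you want to try it, setup is roughly this (from memory; check the README at https://github.com/oobabooga/text-generation-webui for the current instructions):
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
python server.py  # then open the local URL it prints and download a model from the UI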
It is just how I prefer to do my computing. I tend to live on the command line and pipe programs together to get complex behavior. If you don’t like that, then my approach is not for you and that’s fine. As for your analogy, I see it more as “instead of driving down the road in a car, I like to put my own car together using prefabs”.
Option 4: leverage existing tools such as gpg and git using something like pass. That way, you are keeping things simple, but it requires more technical knowledge. Depending on your threat model, you may want to invest in a hardware security key such as a YubiKey, which works well with both gpg and ssh.
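For reference, day-to-day pass usage looks something like this (the key ID and entry name are placeholders):
pass init YOUR_GPG_KEY_ID       # initialize the store, encrypting to your gpg key
pass git init                   # track the store in git
pass insert sites/example.com   # add a password
pass show sites/example.com     # decrypt and print it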
Yes, definitely. My biggest use is transparent filesystem compression, so I completely agree!
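On btrfs, for example, it's just a mount option (zstd level 3 here is an arbitrary choice, and the device path is a placeholder):
mount -o compress=zstd:3 /dev/sdXN /mnt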
Well, when using zstd you tar first, something like tar -I zstd -cf my_tar.tar.zst my_files/*. You almost never call zstd directly; you always use some kind of wrapper.
That threw me for a loop!
I already do ML on AMD, and it works great. There are usually a few extra steps needed, as binaries aren't always available, but that, too, will improve with time.
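For example, grabbing the ROCm build of PyTorch is one of those extra steps; it's something like this (the ROCm version in the index URL changes over time, so check pytorch.org for the current one):
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6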
I think children go in dictionaries so you can look them up by name (key).