I was indeed setting up NVIDIA and CUDA for ML around 2018, and it was not as straightforward or easy as it is today. It was quite annoying and error-prone, at least for me, setting it up on my own for the first time.
The best insight I remember reading about security questions used as MFA is to treat the answers as passwords. The answers don’t have to be true; you just need to know them. Use a password manager and invent answers which you store there. This is so much more secure than relying on the truth.
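For example, a minimal sketch of the idea using Python’s secrets module (any password manager’s built-in generator does the same job):

```python
import secrets

# Generate a random "answer" for a security question and store it in your
# password manager next to the account entry.
answer = secrets.token_urlsafe(16)
print(answer)  # unguessable, unlike your mother's actual maiden name
```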
Edit: others mention the same thing.
@demigodrick@lemmy.zip
Perhaps of interest? I don’t know how many bots you’re facing.
I feel you are a bit out of touch when the topic is specifically enshittification, which is grounded in the history of companies turning against their users and showing little good faith. It is also not something which spares open source projects (remember Bitwarden’s attempt?). So sure, I’m not going to deny that I’m making assumptions and that I am concerned it may one day happen. But that concern is grounded in reality, not some tinfoil-hat stuff.
Edit: and the fact that Bitwarden did not eventually go through with it does not counter the fact that they intended to and tried. Sometimes companies back off, play the long game, and try to be more subtle about it.
There is no guarantee headscale can keep working the way it does or that it is allowed to keep existing.
Edit: FYI, headscale is nowhere near feature parity with what Tailscale offers.
Congrats! Amazing project, exciting interface and you went the extra mile on the integration side with third parties. Kudos!
Edit: I’ll definitely have to try it out!
Perhaps give Ramalama a try?
Indeed, Ollama is going a shady route. https://github.com/ggml-org/llama.cpp/pull/11016#issuecomment-2599740463
I started playing with Ramalama (the name is a mouthful) and it works great. There are one or two extra steps in the setup, but I’ve achieved great performance, and the project makes good use of standards (OCI, Jinja, unmodified llama.cpp, from what I understand).
Go and check it out; it is compatible with models from HF and Ollama too.
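If I remember the CLI correctly, pulling and running models looks roughly like this (the model tag and Hugging Face path below are placeholders, not exact names):

```
# pull a model from the Ollama registry
ramalama pull ollama://llama3.1:8b

# or run one straight from Hugging Face
ramalama run huggingface://<repo>/<model>.gguf
```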
Sorry, I didn’t mean to sound condescending. Capacitors can indeed output their charge at extremely high rates, but they have terrible energy storage capacity. You would need an unreasonably large capacitor bank; it is technically feasible, as that’s what CERN has. But in this case batteries are the more suitable option: they can be tuned between energy and power to fit the exact use case more appropriately.
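To put rough numbers on it (ballpark values, just for illustration):

```python
# Back-of-the-envelope: a large supercapacitor vs a single Li-ion cell.
C = 3000                   # farads, a big commercial supercapacitor
V = 2.7                    # volts, typical rated voltage for such a cap
E_cap_J = 0.5 * C * V**2   # E = 1/2 * C * V^2, about 10.9 kJ
E_cap_Wh = E_cap_J / 3600  # about 3 Wh

E_cell_Wh = 10             # one 18650 Li-ion cell stores roughly this much

print(f"supercap ~{E_cap_Wh:.1f} Wh vs 18650 cell ~{E_cell_Wh} Wh")
# The cap can dump its charge almost instantly (power), but you would need
# a huge bank of them to match the energy of even a small battery pack.
```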
Capacitors, lol
Isn’t that something you solve with snooze? Like set the alarm for the earlier time, set the snooze interval to 15 min, and hit snooze until you want to wake up?
Remove unused conda packages and caches:

```
conda clean --all
```

If you are a Python developer, this can easily free several GB, or even tens of GB.
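If you want to see what would be removed before committing, conda clean also has a dry-run flag:

```
conda clean --all --dry-run
```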
I think it has potential, but I would like to see benchmarks to determine how much. The fact that they have 5 Gbps Ethernet and TB4 (or was it 5?) is also interesting for clusters.
Would you be able to share more info? I remember reading about their issues with Docker, but I don’t recall whether they switched, or what they switched to. What is it now?
Well, in the case of legacy GPUs you are forced to downgrade drivers. In that case, you can no longer use your recent and legacy GPUs simultaneously, if that’s what you were hoping for.
But if you do go the route of legacy drivers, they work fine.
I can’t speak about Vulkan, but I had an old GTX 680 from 2012 that worked without issue until a year or so ago. I was able to get it recognized by nvidia-smi.
I had it running using the proprietary drivers, with the instructions from here, using the legacy method: https://rpmfusion.org/Howto/NVIDIA#Legacy_GeForce_600.2F700
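To confirm the card and driver version are paired up, a standard nvidia-smi query works:

```
nvidia-smi --query-gpu=name,driver_version --format=csv
```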
Is that what you did?
PS: by “working without issue” I mean gaming on it using Proton.
DeepSeek is good at reasoning and Qwen is good at programming, but I find llama3.1 8b to be well suited for creativity, writing, translations, and other tasks which fall outside the scope of your two models. It’s a decent all-rounder. It’s about 4.9 GB in q4_K_M.
I think the requested salary plays a big role. If someone asking 60k for a role that typically pays 100k a year was rejected over salary misalignment, I would be much more critical of the company.
Regarding photos and videos specifically:
I know you said you are starting with selfhosting, so your question was focused on that, but I would also like to share my experience with ente, which has been working beautifully for my family, partner, and myself. It is truly end-to-end encrypted, with the source code available on GitHub.
They have reasonable prices, and if you feel adventurous you can also host it yourself. The advanced search features and face recognition all run on-device (since they can’t access your data), and they work very well. There are great sharing and collaboration features, and they don’t lock features behind accounts, so you can gather memories from other people onto your quota by just sharing a link. You can also have a shared family plan.
It’s on the very first page, opposite the office server page, and they acknowledge the author does not exist and that it’s basically an ad for Windows Server.