

That’s not quite the same - that gives you the appearance of being a local device, which is enough to fool the restriction.
Their policy and technical enforcement are aimed at charging for remote access, not at relaying.
They charge for remote access whether it’s through their relay service or not, and you can’t opt out of fallback to their relay service.
There is an official UI for it now: https://ollama.com/blog/new-app
The client is open source and can be administered using the open source Headscale server. I use it with Keycloak as an auth gateway.
It is! It’s a port of OpenSSH. The server has been ported as well, but requires installation as a “Windows Feature”.
Windows now has an SSH client built in.
Getting Keycloak and Headscale working together.
But I did it after three weeks.
I captured my efforts in a set of interdependent Ansible roles so I never have to do it again.
It would be extremely barebones, but you can do something like this with Pandoc.
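For example, here’s a minimal sketch that just shells out to pandoc to turn Markdown into a standalone HTML page; the filenames are placeholders, and pandoc infers the output format from the extension:

```python
import subprocess

# Convert a Markdown file to a standalone HTML page with pandoc.
# "notes.md" / "notes.html" are placeholders; pandoc picks the output
# format from the -o extension, so swap in .pdf, .docx, etc. as needed.
subprocess.run(
    ["pandoc", "notes.md", "--standalone", "-o", "notes.html"],
    check=True,
)
```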
That I agree with. Microsoft drafted the recommendation to use it for local networks, and Apple ignored it or co-opted it for mDNS.
Macs aren’t the only things that use mDNS, either. I have a host-monitoring solution I wrote that uses it.
Yeah, that’s why I started using .lan.
I was using .local, but it ran into too many conflicts with an mDNS service I host and vice versa. I switched to .lan, but I’m certainly not going to switch to .internal unless another conflict surfaces.
I’ve also developed a host-monitoring solution that uses mDNS, so I’m not about to break my own software. 😅
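For a sense of what that relies on, here’s a rough sketch of mDNS service discovery using the python-zeroconf library (the service type is only an example, not what my software actually browses for):

```python
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class HostListener(ServiceListener):
    def add_service(self, zc, type_, name):
        # Called when a matching service shows up on the LAN.
        info = zc.get_service_info(type_, name)
        if info:
            print(f"found {name} at {info.parsed_addresses()}")

    def remove_service(self, zc, type_, name):
        print(f"lost {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
# "_workstation._tcp.local." is just an example service type.
browser = ServiceBrowser(zc, "_workstation._tcp.local.", HostListener())
input("Browsing for hosts; press Enter to stop...\n")
zc.close()
```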
Coincidentally, I just found this other thread that mentions EasyEffects: https://programming.dev/post/17612973
You might be able to use a virtual device to get it working for your use case.
I just wanted to update this to mention that Llamafile has a lot of custom low-level performance improvements for CPU-based inference: https://justine.lol/matmul/
It’s just a different use case: a single-file large language model engine that automatically chooses the “best” parameters to run with. It uses llama.cpp under the hood.
The intent is to make it as easy as double clicking a binary to get up and running.
It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.
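As a rough illustration (not how Llamafile itself is packaged), CPU-only inference with one of those smaller models through the llama-cpp-python bindings looks something like this; the GGUF path is a placeholder for whatever quantized model you download:

```python
from llama_cpp import Llama

# Placeholder path: any small quantized GGUF (Mistral, Gemma, Phi, ...) works.
llm = Llama(
    model_path="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    n_ctx=2048,    # context window
    n_threads=8,   # roughly match your physical core count
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mDNS in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```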
I’ll also put forward Tilda, which has been my preferred one for a while because of how minimal the UI is.
We all mess up! I hope that helps - let me know if you see improvements!
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to that guide, it looks like you don’t want to install the Nvidia driver inside WSL (the Windows driver carries over), and only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
You may also run into performance issues within WSL due to the virtual machine overhead.
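Once it’s installed, a quick sanity check from inside WSL (assuming you have a CUDA-enabled PyTorch build in that environment) will tell you whether the GPU is actually visible:

```python
import torch

# If the WSL CUDA setup is working, this prints True and the GPU name.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```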
Thank you! That is exactly my point.