I’m the developer of the Photon client. Try it out
thanks to the beauty of mixture-of-experts models i can shove a q2 quant of this into my 8gb vram


The beauty of open source is that they can also just notify the developer about the problem. I had no idea this was missing until I came across this post
is this 4d ragebait or what
Nativoids desperately trying to install a broken dependency for a package by compiling the dependency from source, only for it to need two other dependency versions not in the package manager repos, so you finally dump a random prebuilt binary from sourceforge that will secretly beam all of your login tokens straight to netanyahu himself
Flatpak is a great comment ragebait source. Nativoids really be letting an image viewer access the entire filesystem and network stack


This does sound like it was written by an off-the-shelf LLM. You can’t just rely on em dashes as a tell; most LLMs don’t spam those anymore.
When you tell a modern LLM to write a post like this, it’ll use a very LinkedIn-esque tone. It’ll spam short, active sentences, often preceded by a colon:
> Document your setup. Write guides. Make it easier for the next person. Run services for friends and family, not just yourself. Contribute to projects that build this infrastructure. Support municipal and community network alternatives.
“Not this, but that” and the “rule of 3” are getting less useful as tells, but they are absolutely littered everywhere in this post.
> When you run Nextcloud, you’re not just protecting your files from Google - you’re creating a node in a network they can’t access.
I quote this construction as the classic tell of LLM writing. I’ve never seen human writing with more than 3 of these in a single post.
My guess is that this was written by Claude since it stays rather personally neutral if you don’t guide it that way.


internet explorer bro


Thank god we have a photo of the firefox logo instead of an article




Yo I’ll take some of your megabytes, I’ll pay for shipping


PSA: Ollama is just a slightly more convenient llama.cpp wrapper with a few controversies. If you can spare the extra effort, try using llama.cpp itself: it gives you much more granular control, is faster, and has supported Vulkan and many other backends for a while now.
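If you want to try it, here’s a minimal sketch of building llama.cpp with the Vulkan backend and serving a model. The model path is a placeholder, and exact flags can shift between versions, so check the upstream docs:

```shell
# Build llama.cpp with the Vulkan backend (assumes cmake and the Vulkan
# SDK/drivers are installed; GGML_VULKAN is the upstream build option).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# Serve a local model over an OpenAI-compatible HTTP API.
# ./model.gguf is a placeholder path; -ngl 99 offloads as many
# layers as will fit onto the GPU.
./build/bin/llama-server -m ./model.gguf -ngl 99 --port 8080
```

The same `-m`/`-ngl` flags work with `llama-cli` if you just want an interactive terminal session instead of a server.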


Rather ironic, given that Wikipedia makes up a massive portion of LLM training corpora


This isn’t actually using a vision LLM; it’s using a CLIP model. This image comes from an OpenAI blog post, from 2019 I think


4/6 bait
Call me cringe, but this all works in Fish, which is what I primarily use


I frequently run into VPN blocks. I usually just reroll my selected server until the block goes away


As a memory-poor user (hence the 8gb vram card), I consider Q8+ to be high precision, Q4-Q5 to be mid precision (what I typically use), and anything below that to be low precision
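For a ballpark of what those tiers mean in VRAM, here’s a quick sketch. The bits-per-weight figures are rough assumptions on my part (real GGUF quants carry extra scale metadata per block), not exact numbers:

```python
# Rough VRAM estimate for model weights at different GGUF quant levels.
# Bits-per-weight values below are approximations, not exact GGUF specs.
BITS_PER_WEIGHT = {"Q8_0": 8.5, "Q5_K_M": 5.7, "Q4_K_M": 4.8, "Q2_K": 2.6}

def weight_gb(n_params_billion: float, quant: str) -> float:
    """Approximate weight footprint in GiB for a model of the given size."""
    total_bits = BITS_PER_WEIGHT[quant] * n_params_billion * 1e9
    return total_bits / 8 / 1024**3

for quant in BITS_PER_WEIGHT:
    print(f"8B model at {quant}: ~{weight_gb(8, quant):.1f} GiB")
```

Note this counts weights only; KV cache and activations eat additional VRAM on top, which is why an 8B Q8 model is already a squeeze on an 8gb card.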


It’s a webp animation. Maybe your client doesn’t display it right; I’ll replace it with a gif.
Regarding your other question, I tend to see better results with higher params + lower precision than with lower params + higher precision. That’s just based on “vibes” though; I haven’t done any real testing. From what I’ve seen, Q4 is the lowest safe quantization, and beyond that the performance really starts to drop off. Unfortunately, even at 1-bit quantization I can’t run GLM 4.6 on my system
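The arithmetic behind that tradeoff is simple, even if the quality comparison stays vibes: weight memory scales as params × bits per weight, so doubling the parameters at half the precision costs the same VRAM. A toy sketch (model sizes here are illustrative, not any specific models):

```python
# Weight memory scales as (parameter count) x (bits per weight) / 8 bytes.
def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8

small_high_precision = weight_bytes(7e9, 8)   # ~7B model at ~Q8
big_low_precision = weight_bytes(14e9, 4)     # ~14B model at ~Q4

# Same weight footprint (~7 GB), so at a fixed VRAM budget you are really
# choosing between parameter count and precision, not paying for both.
assert small_high_precision == big_low_precision
```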
i swear 97% of openclaw hype is bots