ChatMusician isn’t exactly new and the underlying dataset isn’t particularly diverse, but it’s one of the few models made specifically for classical music.
Are there any others, by the way?
I expected that recording would be the hard part.
I think some of the open-source ones should work if your phone is rooted?
I’ve heard that Google’s phone app can record calls (though it announces the recording out loud when it starts). Of course, it won’t work if Google thinks it shouldn’t in your region.
By the way, Bluetooth headphones can have both speakers and a microphone. And Android can’t tell a peripheral device what it should or shouldn’t do with audio streams. Sounds like a fun DIY project if you’re into it, or maybe somebody sells these already.
Haven’t heard of all-in-one solutions, but once you have a recording, whisper.cpp can do the transcription:
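Roughly like this - a sketch assuming a recent whisper.cpp build (the binary used to be called ./main, and the model/file names here are placeholders):

```bash
# whisper.cpp expects 16 kHz mono WAV, so convert the recording first
ffmpeg -i call.ogg -ar 16000 -ac 1 -c:a pcm_s16le call.wav

# Transcribe; -otxt writes a plain-text transcript next to the input (call.wav.txt)
./whisper-cli -m models/ggml-medium.bin -f call.wav -otxt
```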
The underlying Whisper models are MIT-licensed.
Then you can use any LLM inference engine, e.g. llama.cpp, and ask the model of your choice to summarise the transcript:
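For instance (again a sketch - the binary was ./main in older llama.cpp builds, and the model file is just whatever GGUF you happen to use):

```bash
# Ask a local model to summarise the transcript produced above
./llama-cli -m models/llama-3-8b-instruct.Q5_K_M.gguf -n 512 \
  -p "Summarise the following phone call transcript: $(cat call.wav.txt)"
```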
You can also write a small bash/python script to make the process a bit more automatic.
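Something along these lines (all paths and model names are placeholders):

```bash
#!/usr/bin/env bash
# summarise_call.sh <recording> - transcribe a call recording and print a summary
set -euo pipefail

rec="$1"
wav="${rec%.*}.16k.wav"

# Convert -> transcribe -> summarise
ffmpeg -y -i "$rec" -ar 16000 -ac 1 -c:a pcm_s16le "$wav"
./whisper-cli -m models/ggml-medium.bin -f "$wav" -otxt
./llama-cli -m models/llama-3-8b-instruct.Q5_K_M.gguf -n 512 \
  -p "Summarise the following phone call transcript: $(cat "${wav}.txt")"
```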
It would. But it’s a good option when you have computationally heavy tasks and communication is relatively light.
Once configured, Tor Hidden Services also just work (you may need to use some fresh bridges in certain countries if ISPs block Tor there though). You don’t have to trust any specific third party in this case.
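For reference, the torrc side of it is just two lines per service - here mapping a local SSH server, with paths as they usually are on a standard Linux tor package:

```bash
# Expose local SSH as a hidden service
sudo tee -a /etc/tor/torrc >/dev/null <<'EOF'
HiddenServiceDir /var/lib/tor/ssh_onion/
HiddenServicePort 22 127.0.0.1:22
EOF
sudo systemctl restart tor

# The generated .onion address ends up here:
sudo cat /var/lib/tor/ssh_onion/hostname
```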
Don’t know much about the stochastic parrot debate. Is my position a common one?
In my understanding, current language models don’t have any understanding or reflection, but the probabilistic distributions of the languages they learn do - at least to some extent. In this sense, there’s some intelligence inherently associated with language itself, and language models are just tools that help us see more aspects of nature than we could before, like X-rays or sonar, except that this part of nature is a bit closer to the world of ideas.
You can get your hands on books3 or any other dataset that was exposed to the public at some point, but large companies have private human-filtered high-quality datasets that perform better. You’re unlikely to have the resources to do the same.
If your CPU isn’t ancient, it’s mostly about memory speed. VRAM is very fast, DDR5 RAM is reasonably fast, swap is slow even on a modern SSD.
8x7B is Mixtral, yeah.
Mostly via terminal, yeah. It’s convenient when you’re used to it - I am.
Let’s see, my inference speed now is:
As for quality, I try to avoid quantisation below Q5, or at least Q4. I also don’t see any point in using Q8/f16/f32 - the difference from Q6 is minimal. Other than that, it really depends on the model - for instance, llama-3 8B is smarter than many older 30B+ models.
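If you quantise GGUFs yourself, it’s a one-liner with llama.cpp’s quantize tool (the binary has been renamed across versions; file names are placeholders):

```bash
# Requantise an f16 GGUF down to Q5_K_M
./llama-quantize model-f16.gguf model-Q5_K_M.gguf Q5_K_M
```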
Have been using llama.cpp, whisper.cpp and Stable Diffusion for a long while (most often the first one). My “hub” is a collection of bash scripts and a running SSH server.
I typically use LLMs for translation, interactive technical troubleshooting, advice on obscure topics, sometimes coding, sometimes mathematics (though local models are mostly terrible for this), sometimes just talking. Also music generation with ChatMusician.
I use the hardware I already have - a 16GB AMD card (using ROCm) and some DDR5 RAM. ROCm might be tricky to set up for various libraries and inference engines, but then it just works. I don’t rent hardware - don’t want any data to leave my machine.
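For what it’s worth, the llama.cpp side of ROCm is roughly this - the build flag has been renamed a few times between versions, and the HSA override (value assumed here for an RDNA2 card) is only needed for consumer GPUs that ROCm doesn’t officially support:

```bash
# Build llama.cpp with HIP/ROCm support (flag name depends on the llama.cpp version)
make GGML_HIPBLAS=1

# Spoof the GFX version if your card isn't on the official ROCm support list
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```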
My use isn’t intensive enough to warrant measuring energy costs.
Disabling root login and password auth, using a non-standard port and updating regularly works for me for this exact use case.
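Concretely, something like this (the port number is arbitrary):

```bash
# Harden sshd: key-only auth, no root login, non-standard port
sudo tee -a /etc/ssh/sshd_config >/dev/null <<'EOF'
Port 2222
PermitRootLogin no
PasswordAuthentication no
EOF
sudo systemctl restart sshd   # the service is called "ssh" on Debian-based systems
```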
I thought MoEs had to be loaded entirely into (V)RAM, and that the inference speedup came from only needing a fraction of the experts to compute each token (but the choice of experts can differ for every token, so you need them all ready - or you keep moving data between disk <-> RAM <-> VRAM and take the performance hit).
If you’re going to finetune a foundation model, it’d make sense to choose Mistral - once they release a 13B.
Also consider adding function calling to the home assistant use case.
Wizard-Vicuna-30B-Uncensored
works pretty well for most purposes. It feels like the smartest of all I’ve tried. Even when it hallucinates, it gives enough to refine a Google query on some obscure topic. As usual, hallucinations are also easily counteracted by light, non-argumentative gaslighting.
It isn’t very new though. What’s the current SOTA for universal models of similar size? (both foundation and chat-tuned)
“This is not just a model upgrade, but the crystallization of wisdom from our research and development team.”
So much marketing and no basic information like what dataset was used.
I have a MediaWiki instance on my laptop (I’ve found the features of all other wikis/mindmaps/knowledge databases decisively insufficient after having a taste of MW templates, Semantic MediaWiki and Scribunto).
Also some smaller things like pihole-standalone, Jellyfin and dictd.
LLaMA can’t. Chameleon and similar ones can: