Let’s talk about our experiences working with different models, whether well-known or lesser-known.
Which locally run language models have you tried out? Share your insights, challenges, or anything interesting you found while working with them.
I’d have to say I’m very impressed with WizardLM 30B (the newer one). I run it in GPT4All, and even though it is slow, the output quality is excellent.
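For anyone who wants to try the same setup from Python instead of the GPT4All desktop app, something like the sketch below should work with the official gpt4all Python bindings. The model file name is just a placeholder; point it at whatever your local WizardLM 30B download is actually called.

```python
from gpt4all import GPT4All

# Placeholder file name -- substitute the actual WizardLM 30B file
# sitting in your GPT4All models directory.
MODEL_FILE = "wizardlm-30b.q4_0.bin"

# Load the model from the default GPT4All model folder; pass
# model_path=... if the file lives somewhere else. allow_download=False
# keeps it from trying to fetch anything over the network.
model = GPT4All(MODEL_FILE, allow_download=False)

# Simple one-shot generation. On a 30B model running on CPU this can
# take a while, which matches the "slow but impressive" experience.
response = model.generate(
    "Explain the difference between a list and a tuple in Python.",
    max_tokens=200,
    temp=0.7,
)
print(response)
```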
Looking forward to Orca 13B if it ever gets released!