Let’s talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried? Share your insights, challenges, or anything interesting you found while using them.

  • actually-a-cat@sh.itjust.works · 1 year ago

    The wizard-vicuna family is my favorite; it successfully combines lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.

    I’ve been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It’s much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there might be some sort of poison pill in the Pygmalion dataset; it’s the common factor in all the models that didn’t work well for me.