• wagesj45@kbin.social
    1 year ago

    I have a feeling that this is going to go similarly to Stable Diffusion’s big 2.0 flop. SD put its limits in through training data. Meta put in its limits via terms and conditions. The end result for both will still be that the community gravitates toward what is usable with the most freedom attached to it. The most annoying part of the TOS is that you can’t use the output to improve other models.

    Fuck you Meta, I wanna make a zillion baby specialist models.

  • Naked_Yoga@sh.itjust.works
    1 year ago

    I used it and was not impressed… I found WizardLM to be far superior.

    Also, I agree with @wagesj45 up there about training other models… but how would they detect that you’re training other models with it? I think one of the best things you can do with a large model is to train a small specialist model.

  • noneabove1182@sh.itjust.works
    1 year ago

    People may not love the model or its outputs, but it’s hard to deny the impact that releases like this have on the open-source community. It’s a real positive, and I’m happy they’re continuing.