I’m interested in creating art for a book: illustrations that go alongside the text.

One of the difficult parts of this is that I want characters in the book to have a stable look and not change from image to image.

Is there a way to do this? I have experimented with different local models and often got artifacts and inconsistent results. I’m reasonably comfortable running local models, but I am neither an expert nor a computer genius. I was able to do something like “character is pretty and tall with black hair,” but each time I generated anything, the character would look different.

It’s been about a year since I last tried anything, and the technology has progressed since then. If I can’t get the characters to look consistent from picture to picture, I’d rather have no images at all, since I can’t afford an illustrator.

  • nicgentile@lemmy.world
    1 day ago

    Eyeballing here. I’m a learner and have hardly used this.

    Would a checkpoint model (is that the right term?), trained on a specific set of pictures, achieve consistency?

    • tal@lemmy.today
      1 day ago

      If you have, or can create, a LoRA trained on images of the character you’re depicting, that may help. A checkpoint model fine-tuned on that character would also work. Either way, it would be as if the character were one the base model itself had been trained on.

    • Even_Adder@lemmy.dbzer0.com
      1 day ago

      You might be thinking of a LoRA. LoRAs are small adapter weights you load alongside a larger base model to steer it toward a specific concept — in this case, your character.
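To make the LoRA approach concrete, here is a minimal sketch using the Hugging Face `diffusers` library. The base checkpoint name is a real public SDXL model, but the LoRA filename (`my_character.safetensors`) and trigger word (`mychar_v1`) are placeholders you would substitute with your own trained adapter; this is a sketch of the workflow, not a turnkey script.

```python
# Sketch: consistent character illustrations by loading a trained LoRA
# into an SDXL pipeline. Assumes a CUDA GPU and a LoRA file you trained
# on pictures of your character (e.g. with kohya or a similar trainer).

def character_prompt(trigger: str, scene: str) -> str:
    """Build a prompt that always leads with the LoRA's trigger word,
    so the adapter activates for every illustration."""
    return f"{trigger}, {scene}, book illustration, detailed ink and wash"

def generate_scene(scene: str, seed: int, out_path: str) -> None:
    """Generate one illustration. Heavy imports stay inside the function
    so the module can be inspected without diffusers installed."""
    import torch
    from diffusers import StableDiffusionXLPipeline  # pip install diffusers

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # public base checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    # Placeholder filename: your trained character LoRA.
    pipe.load_lora_weights(".", weight_name="my_character.safetensors")

    # A fixed seed per scene makes reruns reproducible.
    image = pipe(
        character_prompt("mychar_v1", scene),
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    image.save(out_path)
```

The key habits for consistency are the same regardless of tooling: always include the LoRA’s trigger word in the prompt, and fix the seed for any scene you may want to regenerate later.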