Even_Adder@lemmy.dbzer0.com (mod) to Stable Diffusion@lemmy.dbzer0.com · English · 10 months ago
Stable Diffusion 3 Medium Fine-tuning Tutorial — Stability AI (stability.ai)
clb92@feddit.dk · 10 months ago (edited)
People have been training great Flux LoRAs for a while now, haven’t they? Is a LoRA not a fine-tune, or have I misunderstood something?
Even_Adder@lemmy.dbzer0.com (OP, mod) · 10 months ago
Last I heard, LoRAs cause catastrophic forgetting in the model, and full fine-tuning doesn’t really work.
clb92@feddit.dk · 10 months ago
Oh well, in practice I’ll just continue to enjoy this (possibly forgetful and not-fully-finetunable) model, which still gives me amazing results 😊
erenkoylu@lemmy.ml · 10 months ago (edited)
Quite the opposite: LoRAs are very effective against catastrophic forgetting, and full fine-tuning is very risky (but also much more powerful).
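For anyone wondering why LoRAs resist catastrophic forgetting: the base weights are frozen and only a small low-rank delta is trained, so removing the adapter recovers the original model exactly. A minimal numpy sketch of the idea (all names and sizes here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one linear layer (e.g. an attention projection).
d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))  # never updated during LoRA training

# LoRA trains only a low-rank delta B @ A on top of W.
A = rng.normal(scale=0.01, size=(rank, d_in))  # trainable
B = np.zeros((d_out, rank))   # trainable; zero-init so the delta starts at 0
alpha = 4.0                   # scaling hyperparameter

def lora_forward(x, W, A, B, alpha, rank):
    """y = (W + (alpha / rank) * B @ A) @ x  -- base output plus low-rank update."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.normal(size=(d_in,))

# At init the adapter contributes nothing: output equals the frozen base model's.
assert np.allclose(lora_forward(x, W, A, B, alpha, rank), W @ x)

# "Training" only changes A and B. W is untouched, so dropping the adapter
# restores the original model exactly -- the base knowledge can't be overwritten.
B = rng.normal(size=(d_out, rank))
adapted = lora_forward(x, W, A, B, alpha, rank)
base = W @ x
```

Full fine-tuning, by contrast, updates W itself, which is where the forgetting risk (and the extra power) comes from.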