I’m inclined to fine-tune / make a LoRA for Llama (2? I’m not sure how well it’s supported yet); you definitely won’t want to train from scratch. The big thing seems to be formatting the dataset properly, although I haven’t actually done any of this myself yet. This blog seems to have a few good articles about it that you might be interested in reading.
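To give a rough idea of what "formatting the dataset" usually means: most fine-tuning scripts want one JSON record per line in a fixed template. This is just a sketch using Alpaca-style field names ("instruction"/"input"/"output") as an assumption — check what your actual trainer expects, since the field names and file format vary.

```python
import json

# Hypothetical raw Q/A pairs -- your own data would go here.
raw_examples = [
    {"question": "What is LoRA?", "answer": "A low-rank fine-tuning method."},
]

def to_instruction_record(example):
    # Alpaca-style template: these field names are one popular
    # convention, not a requirement of LoRA itself.
    return {
        "instruction": example["question"],
        "input": "",
        "output": example["answer"],
    }

records = [to_instruction_record(e) for e in raw_examples]

# Many fine-tuning scripts accept JSON Lines: one record per line.
with open("train.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")
```

From there the blog posts presumably cover pointing a trainer at the resulting file.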