I would try a less quantized version. Sometimes inaccurate weights compound and a loop can occur.
Bigger cloud models can do it too when they get perplexed/overwhelmed, or have conflicting instructions, etc. It is possible to detect repeated sentences and ‘snap it out of it’ by inserting a rough/LOUD! order, or even mindful words like ‘caaaalm down… close your eyes and breathe slowly…’ into the context. …or both. Feels odd, but the LLM just reacts to human language/intent anyway…
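A minimal sketch of that “detect repeats, then snap it out of it” idea, assuming you have access to the raw generated text (all function names here are made up for illustration, not any particular library’s API):

```python
import re

def looks_looped(text: str, min_repeats: int = 3) -> bool:
    """Crude loop heuristic: does the last sentence already appear
    at least `min_repeats` times in the generated text?"""
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s.strip() for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if not sentences:
        return False
    return sentences.count(sentences[-1]) >= min_repeats

def snap_out(context: str) -> str:
    """Append an interrupting instruction to the context before
    asking the model to continue (the LOUD and the mindful version)."""
    return (context
            + "\n\nSTOP REPEATING YOURSELF."
            + " Caaaalm down… close your eyes and breathe slowly…"
            + " Now continue, concisely.\n")

# Usage: check the output after each generation step.
output = "I agree. I agree. I agree."
if looks_looped(output):
    context = snap_out(output)   # feed this back to the model
```

A fancier detector could look at n-gram overlap instead of exact sentence matches, but in practice even this blunt check catches the classic verbatim loops.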






No, not much, but a happy smiling selfie is seriously not a bad idea!
When we smile at ourselves once a day in front of a mirror, we tend to mimic that emotion, so we can deliberately manipulate our own mood in a feedback loop. (There are papers on it.)
That is a hard habit to build for some of us, so having a smiling selfie on your desktop will continuously nudge you towards …a happier mood than otherwise. …in theory at least ;-)