Memory is the most marketed and least delivered feature in the AI companion space. Most platforms claim to remember you, but they either reset between sessions or just pull from a profile you filled in manually. After two years of testing, the platforms that actually carry real conversational context across weeks are rare. I just published a full breakdown of which platforms actually deliver on this versus which ones are just marketing: medium.com/@companaya/nomi-ai-review-2026-is-it-worth-it-tested-c91811dcb24a
Any analytics on token usage?
Cool, but I can’t imagine how compute-heavy it is to keep a running log of interactions and constantly include it in the context window and/or pull it in via RAG. Week to week, sure, but over months to a year is wild, especially if you’re talking to it all day every day.
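For what it's worth, RAG is exactly what keeps this tractable: the full log is stored once, but only the top-k relevant snippets get injected into the prompt, so context size stays roughly constant no matter how long the history grows. A minimal sketch of that idea, using toy bag-of-words vectors as a stand-in for a real embedding model (the `MemoryStore` class and `embed` function are illustrative, not any platform's actual implementation):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" -- a real system would use a
    # neural sentence-embedding model here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Store every past message once; retrieve only the top-k relevant ones."""
    def __init__(self):
        self.log = []  # list of (text, embedding) pairs

    def add(self, text: str) -> None:
        self.log.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.log, key=lambda m: cosine(q, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("my dog Biscuit loves the beach")
store.add("i work night shifts at the hospital")
store.add("we talked about my sister's wedding in June")

# Only the relevant memory enters the prompt, not the whole log.
context = store.retrieve("how is your dog doing?", k=1)
print(context)  # ["my dog Biscuit loves the beach"]
```

So the per-turn cost is one embedding lookup plus a similarity search, not re-reading a year of chat. The expensive parts at scale are the vector index and deciding what's worth writing to memory in the first place.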
Soulkyn AI runs a 70B parameter model, the largest underlying language model available in any AI companion platform in 2026.
Llama 3-based stuff is 70B, but it’s not the largest out there. Not even the largest open-weight model. Off the cuff, Behemoth is based on Mistral Large, at 123B.
it’s so weird that that is now Mistral’s “medium” size…
weird to see em growing up like this…