I’m looking for something more like a traditional Free and Open Source project with an active community and different use-cases…
I tried googling it, but there are just way too many results these days. And they’re mostly(?) cooked up by some AI agent and tend to get abandoned randomly after a few weeks. Or they have broad claims in a shiny README.md, and then I install the thing and in reality it sucks and doesn’t even do half of what it promised. Or they’re made by lunatics like Peter Steinberger who default to giving their agents root permissions on everything. That’s why I try to avoid that category of projects.
I know I can code everything myself in Python, but it’d be great to have some workflows and integrations laid out for me: memory, RAG, a sandboxed Linux shell, cron, webhooks… So I can just go ahead, connect it to my local LLM, and use it for various things: react to my messages, look up information, read new pull-requests from a repository or RSS feed, write something to a homepage, pipe something into TTS or Ace-Step to do a radio show, or whatever. Make a small group of agents or my own tools…
Idk, something roughly like n8n, just properly open-source? Is there anything out there you other people use?
I’m asking in the LocalLlama community since I try to run everything locally. And I need some amount of customizability so I can create some clever workflows. Something like OpenCode also doesn’t really help if it wastes a million tokens on some mundane task and isn’t really designed for my limited compute resources. Or if it’s super hard to customize it to behave that way.


You can use Cline with a local AI. It doesn’t work great for enterprise-level stuff because the number of tokens quickly swamps my MacBook, but qwen code can easily handle bash/Python scripts and the like. Then you can use .clinerules to shape the agents, but it’s all mostly vibes from there on out.
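For anyone who hasn’t tried it: `.clinerules` is just plain markdown instructions that Cline prepends to its context. A minimal sketch of the kind of rules I mean (the specific rules here are made-up examples, not anything official):

```
# .clinerules (hypothetical example)
- Always run scripts with `python3`, never `python`.
- Prefer small, single-purpose bash/Python scripts over big refactors.
- Ask before installing any new dependency.
- Keep responses short; the local model has a small context window.
```

Since it’s free-form text, how much it actually constrains the agent depends entirely on the model you run.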
At work, we use a highly structured folder of agent prompts and infrastructure information with Claude. Because it’s so highly structured it feels more like intentional code/config, but I couldn’t quantify any improvement metric at this time. There might be one, but it would be premature to claim you’d see any improvement over plain vibe prompts.
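By “highly structured folder” I mean something roughly like this (hypothetical layout, not our actual repo):

```
agents/
├── README.md           # how the prompt files fit together
├── infrastructure.md   # hosts, services, deploy targets
├── conventions.md      # code style and review rules
└── roles/
    ├── reviewer.md     # prompt for the code-review agent
    └── deployer.md     # prompt for the deployment agent
```

The point is mainly that the prompts live in version control next to the infra docs, so they get reviewed like code.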