Are there any open models that can actually compete with proprietary ones like GPT 5.5 Extended Thinking or Claude Opus 4.7? I'm getting really good results with those in their chat interfaces for coding tasks. They sometimes spend 30–45 minutes on a task, run tool calls in an internal container (cloning a repository, compiling their code), and can look up online documentation. Their answers are very good and usually correct, even for very complex tasks requiring specific protocols.
So I'd like to know how well this can be replicated with open models, since I want more control over how it runs, plus privacy. Do any of you hook agentic capabilities into your local models? How do you do it, and which models give you good results?
Pretend I have unlimited resources (local llama.cpp, sufficient fast storage/memory, and unlimited time to wait for a good response).
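For reference, this is roughly the kind of loop I'm imagining on top of llama.cpp: the model either emits a tool call or a final answer, the harness executes the tool and feeds the result back. A rough sketch only; the tool schema, tool names, and the stubbed `fake_chat` below are placeholders (in real use `chat` would POST to llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint):

```python
import json

# Placeholder tool registry; a real harness would sandbox these (container, etc.).
TOOLS = {
    "run_shell": lambda cmd: f"(pretend output of: {cmd})",
}

def agent_loop(chat, task, max_steps=8):
    """Drive the model until it emits a final answer or we hit max_steps.

    `chat` is any callable taking the message list and returning the model's
    raw text reply, so the loop is testable without a live llama.cpp server.
    """
    messages = [
        {"role": "system", "content":
         'Reply ONLY with JSON: {"tool": "run_shell", "args": {"cmd": "..."}} '
         'to call a tool, or {"final": "..."} to finish.'},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = chat(messages)
        messages.append({"role": "assistant", "content": reply})
        action = json.loads(reply)
        if "final" in action:
            return action["final"]
        # Execute the requested tool and feed the result back to the model.
        result = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "user", "content": f"TOOL RESULT: {result}"})
    return None  # gave up

# Stub standing in for the local model so the sketch runs as-is:
def fake_chat(messages):
    if any(m["content"].startswith("TOOL RESULT") for m in messages):
        return json.dumps({"final": "build passed"})
    return json.dumps({"tool": "run_shell", "args": {"cmd": "make test"}})

print(agent_loop(fake_chat, "clone the repo and run the tests"))  # -> build passed
```

Obviously the hard part is the model reliably emitting valid JSON over dozens of steps, which is exactly what I'm asking about.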


I don’t think my micro-swarm architecture applies to your use case (I’m doing some funky stuff with sentence transformers as classifiers; well… it’s a bit more than that, but still, not coding-related work), but I do have an end-to-end scoped draft for the coder uplift. I iterated the basic idea via a me → Codex → Claude → me → Codex → Claude back-and-forth. I’ll DM it to you just now; feed it into whatever you use (Opus, Codex, GLM) and see if it suits your purposes, broadly. You’ll likely need to refactor it slightly, as Python is my poison of choice, but the broad brush strokes should hold.
Do me a favour and let me know what the clankers say / if it suits your purposes. And if you decide to build it and become rich, give me a shout-out LOL