Are there any open models that can actually compete with proprietary ones like GPT 5.5 Extended Thinking or Claude Opus 4.7? I am getting really good results with those in their chat interfaces for coding tasks. They sometimes spend 30-45 minutes working on my task and have an internal container they are doing tool calls on, like cloning a repository and compiling their code, and can find online documentation. Their answers are very good and usually correct for very complex tasks requiring specific protocols.
So I would like to know how well we can replicate this using open models since I want more control over how it runs, and privacy. Do any of you hook in agentic capabilities into your local models? How do you do it, and which models give you good results?
Pretend I have unlimited resources (local llama.cpp, sufficient fast storage/memory, and unlimited time to wait for a good response).
If your calibration is Codex and Claude, then the answer is basically ‘none’. We’re not there yet. Qwen 3.6 27B is meant to be amazing for coding, but I can’t vouch for it beyond what I’ve seen on video / read from others.
Outside of that, if you have the compute, you can run GLM5.1, which IS pretty good for this sort of thing. Try either / both via OpenRouter and test.
I think some of the issues surrounding small LLMs can be routed around using strict gates, checkpoints, and edit-one-thing-at-a-time approaches. You could even use a cloud model as the planner and a local model as the do-er.
I have a theory of how to address small model as coder issues…but that’s probably a different discussion.
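To make the gates/checkpoints idea concrete, here's a minimal sketch of a cloud-planner / local-doer loop where each edit only sticks if it passes a test gate. All the function names (`plan_with_cloud`, `edit_with_local`, `run_tests`) are hypothetical stand-ins for real model and test-runner calls, not any actual API:

```python
def plan_with_cloud(task: str) -> list[str]:
    """Hypothetical call to a cloud model that returns small one-edit steps."""
    return [f"step for: {task}"]  # stand-in for a real API call


def edit_with_local(step: str, source: str) -> str:
    """Hypothetical call to a local model that applies exactly one edit."""
    return source + f"\n# applied: {step}"  # stand-in for a real model call


def run_tests(source: str) -> bool:
    """The strict gate: accept an edit only if checks pass."""
    return "applied" in source  # stand-in for a real test/compile run


def gated_loop(task: str, source: str) -> str:
    """Edit one thing at a time; checkpoint only states that pass the gate."""
    for step in plan_with_cloud(task):
        candidate = edit_with_local(step, source)
        if run_tests(candidate):
            source = candidate  # checkpoint the accepted state
        # else: discard the failed edit (or retry with feedback)
    return source
```

The point of the shape is that a weaker local model never gets to accumulate unverified edits; every step is bounded and revertible.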
TL;DR: Qwen 3.6 27B is the new hotness…but I’d like to cast a vote for something like https://huggingface.co/allenai/SERA-8B-GA or https://huggingface.co/microsoft/FrogMini-14B-2510 as focused agents, co-ordinated by something else
Thank you for your opinion & recommendations. Something I saw today related to “sub-agents”: Kimi 2.6’s model card says
“Elevated Agent Swarm: Scaling horizontally to 300 sub-agents executing 4,000 coordinated steps, K2.6 can dynamically decompose tasks into parallel, domain-specialized subtasks, delivering end-to-end outputs from documents to websites to spreadsheets in a single autonomous run.”
So maybe Kimi 2.6 is doing the “type of thing” I am looking for, but I don’t have the means to run it practically. Maybe at 1 token per second, which would be brutal.
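For what it's worth, the "decompose into parallel, domain-specialized subtasks" part of that description can be sketched with nothing more than a thread pool fanning out over sub-agent calls. `decompose` and `solve_subtask` are hypothetical placeholders for a planner call and a local-model endpoint, not Kimi's actual mechanism:

```python
from concurrent.futures import ThreadPoolExecutor


def decompose(task: str) -> list[str]:
    """Hypothetical planner: split a task into specialized subtasks."""
    return [f"{task}::part{i}" for i in range(4)]


def solve_subtask(subtask: str) -> str:
    """Hypothetical sub-agent: would call a model; here it just echoes."""
    return f"result({subtask})"


def swarm_run(task: str, workers: int = 4) -> list[str]:
    """Fan subtasks out to sub-agents in parallel, collect the results."""
    subtasks = decompose(task)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_subtask, subtasks))
```

The hard part in practice isn't the fan-out, it's the coordination and re-merging of results, which is presumably where the "4,000 coordinated steps" comes in.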
I tried out Qwen 3.6 27B but not yet in an agentic setting, so I can’t really judge yet. Maybe it’s just me but the small model size seems limiting. I thought gpt-oss-120b was good.
I suspect you may need to create your own orchestration to achieve the effect you’re after. As I said, I have some ideas…but it’s an engineering proposal, not a drop-in replacement.
I’m actually creating my own micro swarm (literally as I type this; waiting for Codex to finish running smoke tests); I have a feeling if you want “Claude at home”, you’re going to have to uplift something like Qwen 3.6 + swarm + harness.
I could pass the idea on to you and you could get Claude to chew through it and see what you two could jury-rig?
I’ve been running Qwen 3.5 122B A10B but recently swapped to Qwen 3.6 35B A3B - both using OpenCode as my agentic harness (though I’ve also used Pi). I’ve been happy with the output, though I have to be more precise with my prompts and do planning passes.
I’d also love to know but I suspect none are quite there yet.
That’s my problem. None are there yet, at least with my hardware.
If you’ve got 20 grand to spend, there are a couple of models out there, like the one mentioned above, that should do fine.
What I have yet to learn is how much of the intelligence and accuracy comes from the model itself and how much comes from the agentic tool system. For example, my experience with ChatGPT probably would be much worse with the free version (no thinking or container).