Are there any open models that can actually compete with proprietary ones like GPT 5.5 Extended Thinking or Claude Opus 4.7? I am getting really good results with those in their chat interfaces for coding tasks. They sometimes spend 30-45 minutes on my task, run tool calls in an internal container (e.g. cloning a repository and compiling their code), and can look up online documentation. Their answers are very good and usually correct, even for very complex tasks requiring specific protocols.

So I would like to know how well this can be replicated with open models, since I want more control over how it runs, plus privacy. Do any of you hook agentic capabilities into your local models? How do you do it, and which models give you good results?

Pretend I have unlimited resources (local llama.cpp, sufficient fast storage/memory, and unlimited time to wait for a good response).

  • SuspciousCarrot78@lemmy.world · 7 hours ago

    I suspect you may need to create your own orchestration to achieve the effect you’re after. As I said, I have some ideas…but it’s an engineering proposal, not a drop-in replacement.

    I’m actually creating my own micro swarm (literally as I type this; waiting for Codex to finish running smoke tests); I have a feeling if you want “Claude at home”, you’re going to have to uplift something like Qwen 3.6 + swarm + harness.
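    To make “swarm + harness” concrete, here’s a minimal sketch of the kind of orchestration loop I mean. Everything here is a placeholder, not my actual design: the JSON tool-call convention, the `run_agent` name, and the step limit are all assumptions, and in practice `call_model` would wrap a local llama.cpp server’s OpenAI-compatible chat endpoint.

    ```python
    import json

    def run_agent(call_model, tools, task, max_steps=8):
        # call_model: function taking a list of chat messages and returning the
        # assistant's reply as a string (e.g. a thin wrapper that POSTs to a
        # local llama.cpp server's /v1/chat/completions endpoint).
        # tools: dict mapping tool name -> callable taking keyword args.
        messages = [
            {"role": "system", "content": (
                'To use a tool, reply with JSON only: {"tool": "<name>", "args": {...}}. '
                "Reply in plain text once you have the final answer.")},
            {"role": "user", "content": task},
        ]
        for _ in range(max_steps):
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply})
            try:
                call = json.loads(reply)
                name, args = call["tool"], call.get("args", {})
            except (ValueError, KeyError, TypeError):
                return reply  # not a tool call, so treat it as the final answer
            handler = tools.get(name)
            result = handler(**args) if handler else f"error: unknown tool {name!r}"
            # feed the tool output back into the transcript and loop again
            messages.append({"role": "user", "content": f"[{name} output]\n{result}"})
        return "error: step limit reached"
    ```

    The “swarm” part is then just several of these loops running with different system prompts and tool sets (planner, coder, tester…) feeding each other’s outputs; the harness is whatever wraps the shell/git/docs-fetch tools so the model can clone, compile, and read results like the proprietary agents do.
    
    
    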

    I could pass the idea on to you and you could get Claude to chew through it and see what you two could jury-rig?

    • hok@lemmy.dbzer0.com (OP) · 9 minutes ago

      Sure, if you have a micro swarm architecture laid out, I would love to hear what it is.