You’ll have to be more specific about how Anthropic is debugging Firefox. There are many possible setups. In general, though, you’ll need:
- an LLM model file
- an OpenAI-compatible server, e.g. LM Studio, llama.cpp, or Ollama
- some sort of client for that server. There’s a myriad of options here; OpenCode is the most like Claude Code, but there are also more modular, programmatic clients, which might suit a long-term task
- the Firefox source code and/or an MCP server via some plugin
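Because these servers expose an OpenAI-compatible API, a client can be very small. Here’s a minimal sketch of the client side — it assumes a local server listening at `http://localhost:8080/v1` (the port and path vary by server, and the `model` name is often ignored or loosely matched by local servers), using only the Python standard library:

```python
import json
import urllib.request

def build_chat_request(prompt, model="local-model"):
    # Standard OpenAI-style chat completion payload; local servers
    # generally accept this shape regardless of the model name.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt, base_url="http://localhost:8080/v1"):
    # POST to the server's chat completions endpoint and return the
    # assistant's reply text.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

A programmatic client like this is what you’d build on for a long-running task: loop over model replies, execute any tool calls (grep the Firefox tree, run a build, etc.), and feed the results back as new messages.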
You’ll also need to know which models your hardware can run. “Smarter” models require more RAM. Models can run on both CPUs and GPUs, but they run far faster on a GPU, provided they fit in VRAM.
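A common rule of thumb for whether a model fits: weight memory is roughly parameter count times bits per weight. This sketch ignores KV cache and runtime overhead (which add a real but variable amount on top), so treat it as a lower bound:

```python
def approx_model_ram_gb(params_billion, bits_per_weight):
    # Rough lower bound: weights only, no KV cache or overhead.
    # e.g. a 7B model at 4-bit quantization needs about 3.5 GB.
    return params_billion * bits_per_weight / 8

print(approx_model_ram_gb(7, 4))    # 7B, 4-bit quant
print(approx_model_ram_gb(70, 16))  # 70B, fp16 -- workstation territory
```

So a 7B model quantized to 4 bits fits comfortably in 8 GB of VRAM, while a 70B model at fp16 is out of reach for consumer GPUs.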