Just tried the new open-source 20B-parameter OpenAI gpt-oss model on my laptop. Here’s what I got.
I have no idea why it needed to generate code for a multithreaded Fibonacci calculator! The funny part is that it did all that, then simply loaded the original multiplication request into Python and printed the result.
On the plus side, the performance was pretty decent.
Mystery solved!
In a separate chat thread I had asked for a function that calculates Fibonacci numbers in multithreaded Python, so it looks like conversation memory was leaking into this one. Stranger still, that request was addressed to a Qwen instance, while this one went to gpt-oss-20b.
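For context, the original Fibonacci prompt isn't shown here, but a minimal sketch of what such a "multithreaded Fibonacci" request might produce could look like the following (function names and the worker count are my own, not from either model's actual output):

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n: int) -> int:
    # Iterative Fibonacci: avoids recursion-depth limits.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_many(ns):
    # Compute several Fibonacci numbers on a thread pool.
    # Note: for pure-Python math the GIL means this is
    # concurrency in form only, not real speedup.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(fib, ns))

print(fib_many([10, 20, 30]))  # [55, 6765, 832040]
```

Which makes the model's detour even funnier: none of this machinery has anything to do with a simple multiplication.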
I cleared all chats, started a fresh one, switched to gpt-oss-20b, and it responded properly without spinning up a separate Python instance.