Just tried the new open-source 20B-parameter OpenAI gpt-oss model on my laptop. Here’s what I got.

Have no idea why it needed to generate code for a multithreaded Fibonacci calculator! The funny part is that after doing all that, it just fed the original multiplication request into Python and printed out the result.

On the plus side, the performance was pretty decent.

  • panda_abyss@lemmy.ca · 3 days ago (edited)

    I wonder if they updated the tool prompt in LM Studio

    In case you don’t know, if you use tools, LMS injects a bunch of shitty prompt text that you can’t change, and it has specific examples like fetching the weather in Paris.

    I have not checked that prompt since they added a Deno-based JS sandbox, but I could imagine them adding a one-shot example of a Fibonacci generator.

    • fubarx@lemmy.world (OP) · 2 days ago

      Mystery solved!

      In a separate chat thread I had asked for a function that calculates Fibonacci numbers in multithreaded Python. It looks like the conversation memory was leaking into this one. Also, that request was addressed to a Qwen instance, while this one was to gpt-oss-20b.

      I cleared all chats, started a fresh one, and switched to gpt-oss-20b; it responded properly without creating a separate Python instance.
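      For anyone curious what that earlier request was about, here’s a rough sketch of the kind of multithreaded Fibonacci function I had asked for (not the model’s actual output, just an illustration that farms independent indices out to a thread pool):

        from concurrent.futures import ThreadPoolExecutor

        def fib(n: int) -> int:
            # Iterative Fibonacci; fib(0) == 0, fib(1) == 1.
            a, b = 0, 1
            for _ in range(n):
                a, b = b, a + b
            return a

        def fib_many(indices: list[int], workers: int = 4) -> list[int]:
            # Compute several Fibonacci numbers concurrently via a thread pool.
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return list(pool.map(fib, indices))

        if __name__ == "__main__":
            print(fib_many([10, 20, 30, 40]))  # [55, 6765, 832040, 102334155]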

    • fubarx@lemmy.world (OP) · 3 days ago

      Thanks for the tip! I’ll check it out when I’m back at my desk. A bunch of us have been scratching our heads over this one.

      • panda_abyss@lemmy.ca · 3 days ago

        Yeah I can’t check at the moment, but it has been an issue for me for a while.

        I was very confused about why it would randomly fetch the weather for Paris with some models, and it was even more confusing because I was learning MCP by writing a weather tool (so the LLMs were getting conflicting instructions).
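        For context, the weather tool was just a toy MCP server. A minimal sketch of that kind of tool, assuming the official MCP Python SDK’s FastMCP helper (the get_weather name and the hard-coded reply are my own placeholders, not a real weather API):

          from mcp.server.fastmcp import FastMCP

          # Minimal MCP server exposing a single weather tool.
          # Assumes the official "mcp" Python SDK is installed; the tool body
          # returns placeholder data instead of calling a real weather API.
          mcp = FastMCP("weather")

          @mcp.tool()
          def get_weather(city: str) -> str:
              """Return a (fake) current-conditions string for the given city."""
              return f"Weather for {city}: 21°C, clear skies (placeholder data)"

          if __name__ == "__main__":
              mcp.run()  # serves the tool over stdio by default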