GPT4 with reflexion prompting gets 90% correct (for HumanEval coding benchmark). The paper this is based on is misleading at best.
Yeah. They buried it in there (and for some of their experiments just said “ChatGPT,” which could mean either), but they used 3.5 — and, notably, 3.5 only gets 48% on HumanEval.
It’s a pretty important fact, since there’s a huge difference between 3.5 and 4. Mentioning it once, in one place, is not great — and earlier in that same paragraph they just say “ChatGPT” without specifying 3.5 or 4. The problem I have is that this has led the press (and hence many other people) to think ChatGPT is terrible at coding, when in fact the GPT-4 version is actually pretty decent.
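For context, reflexion-style prompting is basically a generate–test–revise loop: the model writes code, the code is run against tests, and failures are fed back so the model can try again. A minimal sketch of that loop — `generate` and `revise` here are hypothetical stand-ins for model calls, not the paper’s actual implementation:

```python
def run_tests(code: str, tests: str) -> bool:
    """Execute candidate code against unit tests; True if all pass."""
    env: dict = {}
    try:
        exec(code, env)
        exec(tests, env)
        return True
    except Exception:
        return False


def reflexion_loop(generate, revise, tests: str, max_rounds: int = 3) -> str:
    """Generate code, check it against tests, and feed failures back.

    `generate()` and `revise(code, feedback)` are hypothetical
    callables standing in for LLM calls -- the real Reflexion setup
    uses richer self-generated feedback than this sketch.
    """
    code = generate()
    for _ in range(max_rounds):
        if run_tests(code, tests):
            return code
        code = revise(code, "tests failed")
    return code
```

The point is that the loop’s ceiling is set by the underlying model — run it on top of 3.5 and you get 3.5-quality revisions, which is why the 3.5-vs-4 distinction matters so much for the reported numbers.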