Four weeks ago, GPT-4 remained the undisputed champion: consistently at the top of every key benchmark, but more importantly the clear winner in terms of “vibes”. Almost everyone investing serious time exploring LLMs agreed that it was the most capable default model for the majority of tasks—and had been for more than a year.
Today that barrier has finally been smashed. We have four new models, all released to the public in the last four weeks, that are benchmarking near or even above GPT-4. And the all-important vibes are good, too!
Those models come from four different vendors.
The problem is that open models are nowhere close to something like GPT-4. That's going to be a problem for us non-elites.
Of course not: you’d need the same class of hardware running 24/7 to get similar results, and ain’t nobody paying for that.
Agreed, but it’s still a good tool that’s available. You can use it to summarize large documents. Sure, it’ll probably never be as capable as what elite money buys. But it’s still worth playing with and learning how to use, imho.
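For example, a long document won’t fit a small local model’s context, so you chunk it and summarize map-reduce style. Here’s a rough sketch — `summarize_chunk` is a made-up placeholder for whatever local model you actually run (llama.cpp, Ollama, etc.), not a real API:

```python
# Map-reduce summarization sketch for a local model.
# summarize_chunk() is a stub -- in practice you'd prompt your
# local LLM there instead of truncating.

def chunk_text(text, max_words=800):
    """Split a long document into word-bounded chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_chunk(chunk):
    # Placeholder: send "Summarize this passage: ..." + chunk
    # to your local model and return its reply.
    return chunk[:200]  # stub so the sketch runs end to end

def summarize_document(text, max_words=800):
    """Map: summarize each chunk. Reduce: summarize the summaries."""
    partials = [summarize_chunk(c) for c in chunk_text(text, max_words)]
    return summarize_chunk("\n".join(partials))
```

Swap the stub for a real model call and the rest of the scaffolding stays the same.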
I’ll acknowledge that, right now, getting model conclusions on par with GPT-4 is going to take a custom pipeline with multiple adversarial models, RAG, and more. But all of it could be built by an eager hobbyist with a strong gaming PC.
To be clear, this approach won’t benchmark the same as GPT-4, but it can indeed generate useful content.
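The retrieval half of that pipeline doesn’t need anything fancy to prototype. A bag-of-words ranker (pure stdlib, sketch only — a real setup would swap in embeddings from your local model) already shows the shape of it:

```python
# Minimal RAG retrieval step: rank document chunks against a query
# with bag-of-words cosine similarity. A real pipeline would use
# model embeddings instead of raw word counts.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(c.lower().split())), c) for c in chunks]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:k]]
```

Feed the top-k chunks into the prompt and you’ve got the “R” in RAG; the adversarial-models part layers on top of this same loop.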
That’s a no, chief. These Big Tech CIA corpos buy hundreds of thousands of Nvidia A100 GPUs.
Lol mmk