There tend to be three AI camps. 1) AI is the greatest thing since sliced bread and will transform the world. 2) AI is the spawn of the Devil and will destroy civilization as we know it. And 3) “Write an A-Level paper on the themes in Shakespeare’s Romeo and Juliet.”

I propose a fourth: AI is now as good as it’s going to get, and that’s neither as good nor as bad as its fans and haters think, and you’re still not going to get an A on your report.

You see, now that people have been using AI for everything and anything, they’re beginning to realize that its results, while fast and sometimes useful, tend to be mediocre.

My take is that LLMs can speed up some work, like paraphrasing, but most of the time saved gets diverted to verifying the output.

  • MagicShel@lemmy.zip · 22 days ago

    I don’t think AI is done improving, but companies need to find something other than throwing more compute at it. It seems to get exponentially more expensive for logarithmic gains in performance. I honestly can’t even tell the difference between ChatGPT 4 and 5. I don’t doubt that it is better but I can’t see a difference in my own productivity.

    Time savings vs time sinks depends a lot on exactly what you’re doing. Treading well-worn ground in a new domain can be speedy. But fixing a non-standard or niche (or shitty) code base can be a nightmare because nothing is done the standard way.

    So far, I’ve gained a bit of productivity through AI, but I’ve been down a few rabbit holes, too. Integration tests can be a real pain: it always wants to recommend custom test configurations, but then you wind up with a different test environment and you can’t necessarily trust your tests. Date parsing with Jackson can behave differently between the ObjectMapper that Spring configures and injects and a bare new ObjectMapper() created in a test, to give just one basic example.
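That Jackson mismatch is easy to reproduce. Here is a minimal sketch, assuming Jackson 2.x with the jackson-datatype-jsr310 module on the classpath; the Event class and the sample JSON are made up for illustration. A Spring Boot-style mapper has JavaTimeModule registered, so the ISO date parses; a bare new ObjectMapper() in a test does not, so the same payload fails.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import java.time.LocalDate;

public class ObjectMapperMismatchDemo {
    // Hypothetical DTO, just to show the date-parsing difference.
    public static class Event {
        public String name;
        public LocalDate date;
    }

    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"release\",\"date\":\"2024-05-01\"}";

        // Roughly what a Spring Boot auto-configured mapper gives you:
        // JavaTimeModule is registered, so the ISO date parses fine.
        ObjectMapper springLike = new ObjectMapper().registerModule(new JavaTimeModule());
        Event ok = springLike.readValue(json, Event.class);
        System.out.println(ok.name + " on " + ok.date);

        // A bare mapper created in a test has no JavaTimeModule,
        // so the same JSON throws InvalidDefinitionException for LocalDate.
        ObjectMapper bare = new ObjectMapper();
        try {
            bare.readValue(json, Event.class);
        } catch (Exception e) {
            System.out.println("Bare mapper failed: " + e.getMessage());
        }
    }
}
```

And if the real Spring configuration also tweaks date formats or serialization features, the gap between the production mapper and the test one only widens.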