What will happen if the Linux kernel starts having AI generated code in it?

  • JohnEdwa@sopuli.xyz · 18 minutes ago

    AI code is like alternative medicine: it’s only called that when it’s bad and doesn’t work. When it works, it’s just called code. The issue isn’t using code written by an AI; it’s people who don’t know how to code assuming the AI does, and blindly shipping its output without checking. That’s very unlikely to happen with the Linux kernel, since the entire project is essentially one constant code review, where it doesn’t much matter whether bad code was written by a human or an AI.

    Even Torvalds has used AI to help with his projects, because it would be kinda silly not to.

  • hendrik@palaver.p3x.de · 2 hours ago

    Nothing? I mean, an if/else works the same way no matter whether it’s written by a human or an AI or a cat or whatever…

    The Linux kernel developers are opinionated, though. Everything gets quite an amount of scrutiny. Several people will have their eyes on each submission. They’re looking for security vulnerabilities. They’re adamant about maintainability. They have standards for how to phrase things, how to indent lines, how to send in patches… They generally hold the bar high. If someone submits AI slop, there’s a high chance it just gets declined and they get scolded for doing it.
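    For context, those indentation and formatting expectations are spelled out in the kernel’s own Documentation/process/coding-style.rst. Here’s a minimal illustrative sketch (not actual kernel code; the function name is made up) of the conventions reviewers check for — tabs for indentation, K&R-style braces, and a function’s opening brace on its own line:

    ```c
    #include <stdio.h>

    /*
     * Kernel style in miniature: indent with tabs, put the opening
     * brace of a function on its own line, and skip braces around
     * single-statement branches. Hypothetical example function.
     */
    static int clamp_to_byte(int value)
    {
    	if (value < 0)
    		return 0;	/* single statements take no braces */
    	if (value > 255)
    		return 255;
    	return value;
    }

    int main(void)
    {
    	printf("%d\n", clamp_to_byte(300));	/* prints 255 */
    	return 0;
    }
    ```

    The point being: whether a human or an AI produced a patch, a reviewer can mechanically check it against rules like these before even looking at the logic.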

    There’s of course always the chance someone tries to sneak something in, or that it creeps in on its own. But the same is true of ordinary bugs and deliberate attacks. And maybe some of the devs work for companies that push AI and will do silly things. But the Linux community is pretty strong; they’ll find a way to handle it. And maybe in the far future AI will get as good as human programmers, and there won’t be an issue accepting AI code because it has the same quality as human code. But that’s science fiction as of now.

  • Riskable@programming.dev · 1 hour ago

    The assumption here is that the AI-generated code wasn’t reviewed and polished before submission. I’ve written stuff with AI and sometimes it does a fantastic job. Other times it generates code that’s so bad it’s a horror show.

    Over time it’s getting a little better. Years from now it’ll reach that 99% “good enough” threshold, and no one will care that code was AI-generated anymore.

    The key is that code is code: As long as someone is manually reviewing and testing it, you can save a great deal of time and produce good results. It’s useful.

      • draco_aeneus@mander.xyz · 1 hour ago

        We cannot know, in the same way we cannot know that it doesn’t contain code that is hand-written on graph paper and scanned in via OCR.

        The standards for kernel code submissions are extremely high, and the review process is strict and thorough. There are no barriers stopping LLM-generated code from entering the code base, but the bar for the code quality itself is so high that you effectively have to submit code of the quality a seasoned, competent engineer would produce.

        Ultimately, does it matter that the code was LLM written if the quality is sufficiently high?

        • village604@adultswim.fan · 1 hour ago

          Exactly. AI-generated code is only a bad thing if it’s blindly pushed to production without any sort of review. A lot of the use of AI in coding is doing the simple, mundane work that an entry-level dev could do.