There’s already a whole swathe of static analysis tools used for these purposes (e.g. SonarQube, GitHub code scanning). Of course, their viability and costs affect who can and does utilise them. Whether or not they utilise LLMs I don’t know (but I’m guessing probably yes).
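For a concrete, deliberately small-scale illustration of the kind of check these tools automate, here’s a sketch that runs Bandit (a Python security linter) over a contribution and surfaces its findings. Bandit, the path, and the helper name are my own choices for the example, not anything SonarQube or GitHub code scanning actually exposes:

```python
import json
import subprocess

def scan_contribution(path: str) -> list[dict]:
    """Run Bandit over a directory and return its findings.

    Hypothetical helper for illustration; real pipelines (SonarQube,
    GitHub code scanning) run many analysers, not just one linter.
    """
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"],  # recursive scan, JSON report
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    return report.get("results", [])

if __name__ == "__main__":
    for finding in scan_contribution("./src"):
        print(finding["issue_severity"], finding["issue_text"])
```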
Not just a problem for open source, surely? The answer is to use AI to scan contributions for suspicious patterns, no?
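In practice, “use AI to scan contributions” might look something like the sketch below. It assumes the official OpenAI Python client; the model name and prompt are purely illustrative, and a real screening pipeline would obviously need far more than a single prompt:

```python
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def flag_suspicious_diff(diff: str) -> str:
    """Ask an LLM whether a contribution diff looks suspicious.

    Illustrative only: the model and prompt are placeholders, and a
    free-text verdict is the crudest possible output format.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You review code diffs for signs of malicious or "
                    "obfuscated changes. Answer with a short verdict."
                ),
            },
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content
```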
And then, when those AIs also have issues, do we use AI to check the AI that checks the AI?
It’s turtles all the way down.