According to Clayton, the AI agent involved didn’t take any technical action itself, beyond posting inaccurate technical advice, something a human could have also done.
Producing inaccurate technical advice, with a confident tone, at scale.
If that LLM were an employee, it would get a formal reprimand, and then be demoted or fired if it kept it up.
That sounds sweetly naive. “Producing inaccurate technical advice, with a confident tone, at scale” sounds like the perfect credentials for a career in consultancy.
Wait til this starts happening in the construction industry.
That’s a good way to describe LLMs: very bad and very prolific consultants.