• Kissaki@beehaw.org
    10 hours ago

    admitted to sabotaging their company’s AI by entering proprietary info into public AI chatbots, using unapproved AI tools, or intentionally using low-quality AI output in their work without fixing it.

    Are the first two really sabotaging AI initiatives? The AI's output is still the same.

    The first sounds like a security and data-use issue to me. The second sounds like users looking for better tools because the provided ones are lacking - which is not sabotage. Only the third clearly indicates sabotage to me. (Arguably reasonable malicious compliance under presumably bad requirements and pressure.)