California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.

If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.

Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument is counterintuitive: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior, and reflect on what we know about Section 230’s liability regime in a different context.

  • XLE@piefed.social · 14 hours ago

    As predicted, Mike Masnick is the author. Mike has a conflict of interest when it comes to reporting on platforms’ responsibilities, because he’s on the board of Bluesky, the social media company.

    And he’s trying to argue that chatbots are actually good for mental health. Never mind healthcare; he praises chatbots.

    Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.

    The proof? Self-reports. Including people who use the Replika Girlfriend-bot.

    At this point, I consider anything on Mike’s website that’s related to social media to be compromised, and this is yet another example of that disappointing pattern.

    The comments in the article are actually pretty good. Like this one.

    I love how on Techdirt, when it comes to LLMs, the entire concept of product liability just goes right out the window. If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat, no matter how useful some people thought it was, and the manufacturer would be rightly sued into the ground. But according to Techdirt, because it’s software, it is now and forever a permanent and untouchable part of the internet landscape and regulating it is impossible and undesirable.

    I’m (cautiously) interested in the concept of built-for-purpose chatbots being used therapeutically, although I expect the providers to fail horribly at not abusing the massive trove of personal data they’ll gain access to. But if a corporation can’t produce a general purpose chatbot that won’t help people kill themselves, they have no intrinsic right to just dump it on the internet and say “it’s not our fault.” If that’s a bet they want to make, then they need to accept that they’re going to take their lumps.

    Mike himself even jumps into the comments section to complain that longtime fans don’t like his new direction. It’s pretty funny.