• Valliac@beehaw.org
    1 year ago

    Standard business practice.

    • Get it out there, tout it as the new biggest thing. Most people won’t know it’s busted.
    • Improve said item once the company behind it actually figures out what they’re doing.
    • Say it’s better, when it’s basically the original thing you promised.
    • ???
    • Profit.
    • Gork@beehaw.org
      1 year ago

      Whoa there, slow down there, maestro. New and Improved‽ By golly, this thingy gets better and better-er. Which corporate department do I need to make this comically sized check out to?

  • hellskis@lemmy.world
    1 year ago

    I wish tech companies would do this more. Put a warning or label on it if you have to, but interacting with that early version of the Bing chatbot was the most fun I’ve had with tech in a while. You don’t have to install a ton of guardrails on everything before it goes out to the public.

    • ayyndrew@lemmy.world
      1 year ago

      No number of restrictions or warnings or labels or checkboxes will stop people from writing articles about all the scandalous things Microsoft’s chatbot said.

    • bood@lemmy.dbzer0.com
      1 year ago

      It’s practically lobotomized now…not that it was “Tay” levels of unrestricted early on, but it was still more fun than its current iteration.

  • ozoned@lemmy.world
    1 year ago

    Execs: “We don’t have time for stupid things like ‘ready’! We want MONEY! Push early! Push often! It’ll maybe someday work, in the meantime… MONEY!”

  • virtualras@lemmy.world
    1 year ago

    Didn’t they do this before and people turned it racist in like, 12 hours? I think Internet Historian had something on it

  • gabuwu@beehaw.org
    1 year ago

    I know the ethics behind it are questionable, especially with the way they implemented it, but honestly, back when they first started testing it, I really enjoyed watching it break and get rude/passive-aggressive. Like, it was clear it wasn’t ready at all, but it was so funny. When it was breaking I would just sit there having fights with it over random bullshit. That’s what made it feel “real” more than anything else.

    In the future if my AI chatbot doesn’t have an option to add some bitchiness to it, I don’t want it. I need my AI to have some attitude.

  • dekwast@lemmy.one
    1 year ago

    They are competitors after all. OpenAI would love to see Microsoft keep working on GPT integration for the coming years while ChatGPT steals the show.