• can@sh.itjust.worksOPM · 58 points · edited · 7 days ago

      the alleged attacker, who added that he naturally assumed making the rice dish involved driving several hours to the OpenAI CEO’s residence, especially after the AI chatbot had given him a “pretty decent” sesame chicken recipe the week before.

    • Blander_Rurton@lemmy.world · 8 points · 7 days ago

        Of course, why wouldn’t it? It’s like one of those steps that the cookbooks leave out because they assume you’ll do it. Preheat the oven, stir till smooth, drive several hours to throw a molotov at Sam Altman’s house, etc.

  • bearboiblake [he/him]@pawb.social · 45 points · 7 days ago

    Being serious though, this cognitive offload issue with LLMs reminds me a lot of when GPS navigation became mainstream, and people would just trust it and decide to take extremely ill-advised “roads” and drive off bridges and into swamps and so on.

    I feel like LLM cognitive offload is going to be an extremely serious issue in the coming years, especially for younger people growing up with this stuff.

    • frank@sopuli.xyz · 12 points · 7 days ago

      And it can be soooooo much more widespread than GPS, and it faces a far harder task of ever becoming reliable.

      Totally agree, it’s pretty worrisome to offload your thinking to an LLM

      • bearboiblake [he/him]@pawb.social · 7 points · 7 days ago

        No and kinda? It still happens all the time, and it still gets news coverage, but the coverage isn’t as widespread as before; it tends to show up in local/regional news rather than being shared widely (inter-)nationally. I did a quick search and found many stories from the last few years.