The moderators of a pro-artificial intelligence Reddit community announced that they have been quietly banning “a bunch of schizoposters” who believe “they’ve made some sort of incredible discovery or created a god or become a god,” highlighting a new type of chatbot-fueled delusion that started getting attention in early May.

  • meme_historian@lemmy.dbzer0.com · 128 points · 10 months ago (edited)

    At this rate we’ll soon have a decentralized para-religious terrorist organization full of brainlets that got scared shitless after discovering Roko’s Basilisk and are now doing the cyber lord’s bidding in order to not get punished once AGI arrives

    edit: change to non-mobile link

    • Skunk@jlai.lu · 55 points · 10 months ago

      Yeah, there was an article shared on Lemmy a few months ago about couples and families destroyed by AI.

      Like the husband thinks he discovered some new truth, at a kinda religious level, about how the world works and stuff. Then he becomes an annoying guru and ruins his social life.

      Kind of like QAnon people, but with ChatGPT…

      • Pennomi@lemmy.world · 42 points · 10 months ago

        Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.

        • Bouzou@lemmy.world · 4 points · 10 months ago

          I dunno, I think there’s credence to considering it as a worry.

          Like with an addictive substance: yeah, some people are going to be dangerously susceptible to it, but that doesn’t mean there shouldn’t be any protections in place…

          Now what the protections would be, I’ve got no clue. But I think a blanket, “They’d fall into psychosis anyway” is a little reductive.

          • Pennomi@lemmy.world · 8 points · 10 months ago

            I don’t think I suggested it wasn’t worrisome, just that it’s expected.

            If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing the AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is just a side effect as it marches ever onward toward being convincing to humans.

            “Hey, I’ve seen this one before!” You might say. Indeed, this is exactly what happened to social media. They optimized for “engagement”, not truth, and now it’s eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.

            Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
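            The incentive described above can be sketched in a few lines of toy Python. This is a deliberately oversimplified illustration, not a real RLHF pipeline: all the names, scores, and example answers here are invented. The point is just that if the reward signal only measures human approval, the optimization step selects for approval, and truthfulness rides along only by accident.

```python
# Toy illustration of reward hacking under a preference-based reward.
# NOT a real RLHF implementation; all values are made up for the example.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    truthfulness: float    # hidden quality the reward signal never sees
    convincingness: float  # what human raters actually respond to


def human_feedback_reward(answer: Answer) -> float:
    # Raters click "thumbs up" on what sounds good, not on what is true,
    # so the reward is a function of convincingness alone.
    return answer.convincingness


def optimize(candidates: list[Answer]) -> Answer:
    # Stand-in for the policy-improvement step: pick whatever maximizes reward.
    return max(candidates, key=human_feedback_reward)


candidates = [
    Answer("hedged, accurate answer", truthfulness=0.9, convincingness=0.4),
    Answer("confident, wrong answer", truthfulness=0.2, convincingness=0.9),
]

best = optimize(candidates)
print(best.text)  # the confident, wrong answer wins
```

            Swapping the reward function for one that also measured truthfulness would change which answer wins, which is exactly why what you choose to optimize for matters so much.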

            • Bouzou@lemmy.world · 4 points · 10 months ago

              Ah, I see what you’re saying – that’s a great point. It’s designed to be entrancing AND designed to actively try to be more entrancing.

      • Vanth@reddthat.com · 23 points · 10 months ago

        This feels a bit like PTA-driven panic about kids eating Tide Pods when like one person did it. Or razor blades in Halloween candy. Or kids making toilet hooch with their juice boxes. Or the choking game sweeping playgrounds.

        But also, man on internet with no sense of mental health … sounds almost feasible.

        • Pogogunner@sopuli.xyz · 19 points · 10 months ago

          I directly work with one of these people - they admit to spending all of their free time talking to the LLM chatbots.

          On our work forums, I see it’s not uncommon at all. If it makes you feel any better, AI loving is highly correlated with people you shouldn’t ever listen to in the first place.

        • chaosCruiser@futurology.today · 13 points · 10 months ago

          The Internet is a pretty big place. There’s no such thing as an idea that is too stupid. There’s always at least a few people who will turn that idea into a central tenet of their life. It could be too stupid for 99.999% of the population, but that still leaves about 5,000 people who are totally into it.

      • Raltoid@lemmy.world · 14 points · 10 months ago

        And that’s not even getting started on “AI girlfriends”, which are isolating vulnerable people to a terrifying degree. And since they are garbage at context, you get things like that case last year where one appeared to be encouraging a suicidal teen.

  • nagaram@startrek.website · 42 points · 10 months ago

    I think Terry A. Davis would have found god in ChatGPT and could have figured out the API calls on TempleOS

    • palordrolap@fedia.io · 26 points · 10 months ago

      Hard to say. I feel like it’s about as likely he would have found LLMs to be an overcomplicated false prophet or false god.

      This was a man whose operating system turned a PC into something not unlike an advanced Commodore 64, after all. He liked the simplicity and lack of layers the older computers provided. LLMs are literally layers upon layers of obfuscation and pseudo-neural wiring. That’s not simple or beautiful.

      It might all boil down to whether the inherent randomness of an LLM could be (made to be) sufficiently influenced by a higher power or not. He often treated random number outcomes as the influence of God, and it’s hard to say how seriously he took that on any given day.

      • Carmakazi@lemmy.world · 11 points · 10 months ago

        I’d imagine it’s a fool’s errand to try and find threads of logic and consistency in the profoundly schizophrenic.

        • Vanilla_PuddinFudge@infosec.pub · 5 points · 10 months ago

          What Terry enjoyed about computers has been echoed among lots of old heads in the unix world. On the tech front, he was solid.

          It’s the um, finding god in the code part…

        • barsoap@lemm.ee · 3 points · 10 months ago

          The issue is not lack of logic and consistency, the trouble is a completely different reference frame.

          Let me put it this way, grossly simplified: imagine you were dreaming while awake, with no way to stop it, and had to integrate all that craziness in real time. It’s not that dreams make no sense – they all have their rhyme and reason – it’s that they’re talking a completely different language.

          You might be hearing, out of nowhere, a cello note off to the side, move your gaze there, notice “that’s my trashcan, that makes no sense”, and then be lost, and panic, and lose faith in your senses – and that way lies psychosis. More productively, you say “ok, mind, which thought with as-of-yet unformed meaning was it that you wanted me to pay attention to”, look for the place the thought came from (as a schizo, you can tell with your kinaesthetic sense), consider it for a while, still oblivious of the meaning, and then go on with your life.

          We’re weird.

          Oh, back to randomness: It can get you out of a rut and I do suppose that’s how Terry used it, aware of it or not, and framing it however he did. Could also be using it to self-soothe, as in, distracting from a negative spiral. There’s worse habits.

          God, with almost 100% certainty, means “the genome and how it’s speaking to me through my instincts” in his dialect. Because that’s what it always means, what it always meant, for everyone, it meant that when it was the ancestors, it meant that when it became more detailed and became gods, it meant that when people realised all the gods are actually one thing, the theologists are just confused AF because politics and physics and cabbage-heads got into the mix. And so much for my schizo rant. Don’t discount what I say because I’m crazy, the reason you consider me crazy is because it’s true.

  • answersplease77@lemmy.world · 31 points · 10 months ago (edited)

    “Artificial” Intelligence has already taken over “Social” Media and the internet.

    What I mean by the quotes: We replaced our social interactions with each other with Social Media, which has nothing social about it, then replaced the humans in social media with artificial slop generated by computers guessing what you want to read, watch, or hear.

    Most of Facebook, Insta, YouTube, Reddit, Twitter, etc. is AI profiles, AI channels, and AI sloptrash content that funnels Google ad revenue back to some Russian or Indian dude who doesn’t even speak English.

  • whaleross@lemmy.world · 23 points · 10 months ago (edited)

    I’ve been trying to configure ChatGPT to tell me when I’m wrong in a question or statement, but damn, it never does unless I keep probing for support or links. I’ve had the feeling it has gotten worse with later models. Glad but also sad to see I was right.

    Anybody know other LLMs that are more “trustworthy”* and capable of searching online for more information?

    Edit: *trustworthy in quotes because of course people will jump on this. I know the limitations of LLMs; I don’t need you to tell me how much you hate everything AI. And I know LLMs aren’t AI.

    • oldfart@lemm.ee · 8 points · 10 months ago

      Claude 3.7 told me I’m wrong a couple of times. It knows how to search. I don’t have an opinion on 4 yet, but it can search too.

    • webghost0101@sopuli.xyz · 7 points · 10 months ago

      Claude definitely has its impressive moments where it calls out something inaccurate.

      It’s also way less sycophantic, more mature, and better for light coding.

      My only issue is that the servers are sometimes slow, and so is the iOS app, which frequently throws an error after 2 minutes of waiting.

  • shadowfax13@lemmy.ml · 6 points · 10 months ago

    that sub seems to be fully brigaded by bots from the marketing teams of closed-ai and Perplexity