Inspired by a recent talk from Richard Stallman.

From Slashdot:

Speaking about AI, Stallman warned that “nowadays, people often use the term artificial intelligence for things that aren’t intelligent at all…” He makes a point of calling large language models “generators” because “They generate text and they don’t understand really what that text means.” (And they also make mistakes “without batting a virtual eyelash. So you can’t trust anything that they generate.”) Stallman says “Every time you call them AI, you are endorsing the claim that they are intelligent and they’re not. So let’s refuse to do that.”

Sometimes I think that even though we are in a “FuckAI” community, we’re still helping the “AI” companies by tacitly agreeing that their LLMs and image generators are in fact “AI” when they’re not. It’s similar to how the people saying “AI will destroy humanity” give an outsized aura to LLMs that they don’t deserve.

Personally I like the term “generators” and will make an effort to use it, but I’m curious to hear everyone else’s thoughts.

  • WolfLink@sh.itjust.works · 7 points · 15 hours ago

    The term “Artificial Intelligence” has historically been used by computer scientists to refer to any “decision making” program of any complexity, even something extremely simple, like solving a maze by following the left wall.
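
    That “left wall” example is worth making concrete, because it shows just how simple classic “AI” can be. Here is a minimal sketch (my own toy example, assuming a hypothetical grid format: '#' for walls, 'S' start, 'E' exit):

```python
# Left-hand-rule maze solver: prefer turning left, then straight, then
# right, then back. Works for simply connected mazes (no detached walls).
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W as (row, col) deltas

def solve_left_wall(maze, max_steps=10_000):
    grid = [list(row) for row in maze]
    rows, cols = len(grid), len(grid[0])

    def find(ch):
        # Locate the first cell containing the given character.
        return next((r, c) for r in range(rows) for c in range(cols)
                    if grid[r][c] == ch)

    def is_open(r, c):
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#'

    r, c = find('S')
    d = 1  # start facing East
    path = [(r, c)]
    for _ in range(max_steps):
        if grid[r][c] == 'E':
            return path  # reached the exit
        for turn in (-1, 0, 1, 2):  # left, straight, right, back
            nd = (d + turn) % 4
            dr, dc = DIRS[nd]
            if is_open(r + dr, c + dc):
                d, r, c = nd, r + dr, c + dc
                path.append((r, c))
                break
    return None  # gave up: exit not reachable by wall-following
```

    No understanding, no learning, just a fixed rule, and for decades nobody objected to filing this sort of thing under “AI”.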

    • Oxysis/Oxy · 9 points · 2 days ago

      I like calling them regurgitative idiots, or artificial idiots, though really anything that makes fun of them works

  • supersquirrel@sopuli.xyz · 23 points · 2 days ago

    No

    Exhibit A: people are beginning to describe empty, hollow, mass-produced corporate slop as “AI”. It has become an adjective for worthless trash, and I love it.

  • BananaOnionJuice@lemmy.dbzer0.com · 12 points · 2 days ago

    Yes we say Fuck AI, but when we see it in the wild we call it slop, bot, clanker, or vibe coded, etc.

    And starting to split hairs about naming is very geeky, but it doesn’t help, as 90% of people have very little concept about what AI or LLM’s are in the first place.

    • James R Kirk@startrek.website (OP) · 4 points · 2 days ago

      90% of people have very little concept about what AI or LLM’s are in the first place.

      Yeah I mean I agree, I think that’s why there needs to be a term that describes them.

    • James R Kirk@startrek.website (OP) · 1 point · 15 hours ago

      But even so, surely you don’t believe that Generative AI programs and HAL 9000 are functionally identical? I just think it would be helpful to have a word that doesn’t lump those things together.

    • sustainable@feddit.org · 10 points · 2 days ago

      Well, according to the broad definition, a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI. And we don’t call them that, but rather by their proper name.
      And language shouldn’t be underestimated. It has a profound impact on our thinking, feeling, and actions. Many people associate AI with intelligence and “human thinking”. That alone is enough to mislead many, because the usefulness of the technology in a given application is no longer questioned. After all, it’s “intelligent”. However, when “LLM” is used, far fewer people would grant it intelligence, and one might be more inclined to ask whether a language model, for example in Excel, is truly useful. After all, that’s exactly what it is: a model of our language. Not more, not less.

        • technocrit@lemmy.dbzer0.com · 3 points · 2 days ago

          Why would any honest person agree to promote this kind of garbage disinformation? There is nothing “intelligent” about a search engine or any other computer program. These lies are a huge part of the problem.

          Just because a lie has become a “definition” to some people that doesn’t make it the truth.

          • x1gma@lemmy.world · 11 points · 2 days ago

            It is not a lie but a widely accepted and agreed-upon definition that precedes LLMs by years, and it was created by people way smarter than you and I combined, who have spent more time in AI research than most people here.

            An LLM is an ANI (artificial narrow intelligence), and any ANI is an AI, the broader term for any artificial intelligence. An ANI does not operate on intelligence the way a human does; its intelligence is a set of rules. A search engine algorithm is a set of rules. Your phone’s keyboard is a set of rules. T9 typing on your old Nokia is a set of rules and can be classified as an ANI. An LLM has rules for how it spits out the next token.

            There is no universal definition of AI, because we would first need a universal definition of human intelligence. Since there is no single universal definition, you are free to disagree with that definition. But calling it disinformation or a lie, or claiming that no computer program is intelligent, is simply wrong.
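
            The “rules for the next token” point is easy to demonstrate with a toy example. This is a hypothetical bigram sketch of my own, nothing like a real LLM internally, but it shows the same principle: text comes out of learned statistics, not understanding.

```python
import random

# Toy "generator": record which word follows which in a tiny corpus,
# then emit text by repeatedly sampling a plausible next word.
corpus = "the cat sat on the mat and the cat ate the rat".split()

follows = {}  # word -> list of words observed to follow it
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n_words, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: nothing ever followed this word
        out.append(rng.choice(candidates))
    return " ".join(out)
```

            Scale the same idea up by a few hundred billion parameters and a trillion words of training text and the output gets uncannily fluent, but the mechanism is still rules over statistics.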

    • technocrit@lemmy.dbzer0.com · 3 points · 2 days ago

      AI has a very broad definition.

      Even more so… It has no definition. It’s fake. It’s a phony term used by grifters. It’s not helpful at all to encourage them and participate.

  • Lumidaub@feddit.org · 10 points · 2 days ago

    It’s just a word. It’s more important to let people know what this is about and any terms that may be more “accurate” won’t do that.

    • James R Kirk@startrek.website (OP) · 6 points · 2 days ago

      I never liked the “just a word” defense. If any word can be made to mean anything else just because a government or corporation says so, what does that say for our shared reality?

      • Lumidaub@feddit.org · 6 points · 2 days ago

        Sure, I don’t disagree. But what use is any of that if people don’t know what exactly you are protesting? At what point do you abandon idealism in favour of pragmatism?

        • James R Kirk@startrek.website (OP) · 4 points · 2 days ago

          It seems most people don’t know what exactly is being protested now. And it seems to me that perpetuating a false narrative for the sake of convenience is helping those we claim to be against.

            • James R Kirk@startrek.website (OP) · 1 point · 15 hours ago

              Based on historical examples in which a false narrative was perpetuated for the sake of convenience, and it ended up assisting the very person or group who benefited from that narrative.

              • Lumidaub@feddit.org · 3 points · 2 days ago

                Gaming NPC behaviour decision trees that are even the slightest bit sophisticated have been called “AI” for decades. Nobody complained about a “false narrative” and nobody thought NPCs in games were like Data. It’s just a word.

              • its_kim_love · 2 points · 2 days ago

                I thought we left all that purity testing bullshit in the 2010s. We all hate AI. We haven’t been convinced this issue in particular is the hill worth dying on.

      • illi@piefed.social · 1 point · 2 days ago

        I get where you are coming from, but ultimately words and their meanings are social constructs. Words mean what society determines they mean.

        If you need to distinguish them, we can use the already coined name of LLMs as that’s what they actually are. Maybe let’s use Large Image Models for those non-text ones? I feel like “generators” is too generic a term to work.

        But I do agree that calling them AI gives them more power than they have.

    • technocrit@lemmy.dbzer0.com · 1 point · 2 days ago

      It’s just a lie. What’s the point of telling the truth? It’s more important for people to be immersed in grifter disinformation than to accurately use words. /s

      • Lumidaub@feddit.org · 1 point · 2 days ago

        It’s the word we use for that technology. A “meme” in today’s sense has little to nothing to do with “memes” in the early internet sense, and certainly not with the original concept as defined by Dawkins. I could cry and yell about that injustice all day long, or I could use words that other people understand if I want to convey anything.

        Once they are interested in what you have to say, because they were curious about “why would anyone be anti-AI?”, you can then very easily educate them as to more precise terminology. You won’t get even to that point if they’re thinking “anti-large-language-model-generated slop? never heard of that before, don’t care.”

  • myedition8@lemmy.world · 7 points · 2 days ago

    This is why I call chatbots “LLMs” and refer to image and video generators as “slop generators”. It isn’t AI; software can’t be intelligent.

  • x1gma@lemmy.world · 6 points · 2 days ago

    I disagree with this post and with Stallman.

    LLMs are AI. What people are actually confused about is what AI is and what the difference between AI and AGI is.

    There is no universal definition for AI, but there are multiple definitions which are mostly very similar: AI is the ability of a software system to perform tasks that would typically involve human intelligence, like learning, problem solving, and decision making. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence, which we don’t have.

    Since this definition is rather broad, there is an additional classification:

    ANI (artificial narrow intelligence), or weak AI, is an intelligence inferior to human intelligence, which operates purely on rules and only for specific, narrow use cases. This is what LLMs, self-driving cars, and assistants like Siri or Alexa fall into.

    AGI (artificial general intelligence), or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously based on its perception and knowledge. It can transfer past knowledge to new situations and learn. It’s a theoretical construct that we have not achieved yet, and no one knows when, or if, we ever will. Unfortunately it’s also one of the first things people think about when AI is mentioned.

    ASI (artificial super intelligence) is basically an AGI with an intelligence superior to a human’s in all respects. It’s the apex predator of all AI: better, smarter, and faster at anything than a human could ever be. Even more theoretical.

    Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn’t be doing the same as the tech bros.

    • James R Kirk@startrek.website (OP) · 1 point · 15 hours ago

      If I’m reading correctly it sounds like you do agree with Stallman’s main point that a casual distinction is needed, you just disagree on the word itself (“ANI” vs “generator”).

      • x1gma@lemmy.world · 1 point · 8 hours ago

        No, I think the distinction is already made and there are words for that. Adding additional terms like “generators” or “pretend intelligence” does not help in creating clarity. In my opinion, the current definitions/classifications are enough. I get Stallman’s point, and his definition of intelligence seems to be different from how I would define intelligence, which is probably the main disagreement.

        I definitely would call an LLM intelligent. Even though it does not understand context the way a human can, it is intelligent enough to create an answer that is correct. Doing this by basically pure stochastics is pretty intelligent in my book. My car’s driving assistant, even if it’s not fully self-driving, is pretty damn intelligent and understands the situation I’m in, adapting speed, understanding signs, reacting to what other drivers do. I definitely would call that intelligent. Is it human-like intelligence? Absolutely not. But for this specific, narrow use case it works pretty damn well.

        His main point seems to be breaking the hype, but I do not think that can be achieved like this. It will not convince the tech bros or investors, and people who are simply uninformed will not understand an even more abstract concept.

        In my opinion, we should educate people more on where the hype is actually coming from: NVIDIA. Personally, I hate Jensen Huang, but he’s been doing a terrific job as a CEO for NVIDIA, unfortunately. They’ve positioned themselves as a hardware supplier and infrastructure layer for the core component for AI, and are investing/partnering widely into AI providers, hyperscalers, other component suppliers in a circle of cashflow. Any investment they do, they get back multiplied, which also boosts all other related entities. The only thing that went “10x” as promised by AI is NVIDIA stock. They are bringing capex to a whole new level currently.

        And that’s what we should be discussing more, instead of clinging to words. Every word that any company claims about AI should automatically be assumed to be a lie, especially for any AI claim from any hyperscaler, AI provider, or hardware supplier, and especially-especially from NVIDIA. Every single claim they make directly relates to revenue. Every positive claim is revenue. Every negative word is loss. In this circle of money they are running, we’re talking about thousands of billions of USD. People have done way worse, for way less money.

    • III@lemmy.world · 1 point · 2 days ago

      Can you share the prompt you gave to ChatGPT to get this, I have questions and I want to cut out the middle man.

      • x1gma@lemmy.world · 1 point · 1 day ago

        Feel free to ask your questions, I’ll gladly answer them. Before making stupid and smug claims, maybe you should’ve run my post through literally any AI text detector and saved yourself the embarrassment.

  • hendrik@palaver.p3x.de · 6 points · 2 days ago

    I support Stallman’s take. I think just saying “Fuck AI” is going to have almost zero effect on the world. I think we need to add nuance, reasoning, be accurate… Tell people WHY that is, so we can educate them. Or convince them to do something… Understand how these things work and why that’s good or bad to form an opinion… “Fuck AI” alone isn’t going to do any of that.

        • queermunist she/her@lemmy.ml · 1 point · 2 days ago

          Shaping online discourse doesn’t matter, not on the scale we can affect as individual users. Billionaires shape online discourse with their algorithms and bots; what the fuck are you going to do to fight that? If you even begin to possibly threaten them they just deplatform you. That’s why we’re all on a niche subforum on a niche website.

          If you want to do something that matters you have to log off.

          • moonshadow@slrpnk.net · 1 point · 2 days ago

            Billionaires shape online discourse with their algorithms and bots, what the fuck are you going to do to fight that?

            Crazy idea… fight that? It really doesn’t seem like you’re having fun either, though, man. Maybe take your own advice.

      • TrickDacy@lemmy.world (mod) · 1 point · 2 days ago

        There’s not much any one person can do unless they’re very influential, but in aggregate, our real-world thought patterns on a societal level are mostly dominated by online discourse.

      • hendrik@palaver.p3x.de · 1 point · 2 days ago

        I guess sometimes I’m wrong here. I’ll usually try to have a positive effect on the world. And do something about the things I perceive as wrong. Also it’s not really “fun” to me to discuss ludicrous RAM prices, burning of money, bad effects on the environment… I think that’s more a serious matter.

        I get what you’re saying, though.

        • queermunist she/her@lemmy.ml · 1 point · 2 days ago

          Forums aren’t serious business, nothing important ever happens here.

          Use forums to learn and be exposed to new ideas, to socialize, to obtain skills, to be informed, but remember that nothing you say on the internet really matters that much. If you aren’t doing this for fun, what’s even the point? You’re just wasting time.

          • James R Kirk@startrek.website (OP) · 3 points · 2 days ago

            nothing important ever happens here

            Unrelated, but here is a list of things I find to be some of the most important activities I can think of:

            - learn
            - be exposed to new ideas
            - socialize
            - obtain skills
            - be informed

          • hendrik@palaver.p3x.de · 1 point · 2 days ago (edited)

            Idk. The internet is a tool, I guess? I use forums to get my computer problems solved. Help other people with their woes… I talk to random strangers and learn something about their perspective on the world. Or what it’s like in a remote place… Talk about relationship issues. In the old days I’d use them to coordinate activities, projects. Sell used stuff or buy old hardware…

            I mean, you’re probably right. With social media, a lot of places lost meaning and it’s more memes and random noise. But I’d argue that’s not what the internet is about. Specifically internet forums.

            But we’re all free to use them however we like. I’m not the Grinch, having fun is a perfectly valid thing to do, and should be part of the equation 😉

            Ultimately I like to think I’m not just confined to armchair activism. I mix online activities with real-world activism. I do projects. Our hacker groups helped avoid Chatcontrol, and their online activities have an impact on people’s lives… Stallman changed the world… It’s a thing people can do if they like.

  • Darkcoffee@sh.itjust.works · 7 points · 2 days ago

    “Slop Constructors” is what I call them. It’s good to remember that calling them “AI” helps with the fake hype.

  • aaaa@piefed.world · 6 points · 2 days ago

    The term “AI” has been used for decades to refer to a broad spectrum of things, oftentimes including algorithms that had nothing to do with machine learning or inference.

    Technically, what most of us have a problem with isn’t “AI” as a whole, but just LLMs and how companies are trying to replace people with them. I agree that people should be specific, as there’s a lot of practical application for machine learning and AI that has nothing to do with LLMs.

    But you’re not going to get anywhere by trying to change the words people use for these things. We saw a similar thing happen with “smart” home automation devices, and before that it was people complaining about “smartphones” not actually being “smart”. But both of those terms are still in common use.

    I don’t think you’ll convince anyone by trying to police the terminology for technical accuracy. The focus should be on the specific problems and harmful effects of how the technology is being used.

    • Rhaedas@fedia.io · 2 points · 2 days ago

      It changes the argument away from the objective of ethics and safety, and towards the words being used. One can use the inaccurate wording while debating its characteristics and problems. It’s far too late to control what marketing and public ignorance have set. I wasn’t a fan of the “AI slop” term, as it’s morphed into a general-purpose word for dismissing anything that’s not liked or agreed with, nowhere near the original narrow meaning. But it’s a word that is now used all the time, and that’s how words are created and become authentic: by usage.

      The issue of ethics is still important, even though the chance to fix it is far in the past. We still have to have the discussion. The issue of safety in general for AI is something that has been shelved by both sides, and even though it’s primarily an AGI topic, it still applies to even non-intelligent LLMs and other systems. If we don’t focus on it, it’s a dead end for us. It doesn’t have to be Terminator-like to be bad for civilization; it doesn’t even have to be aware. “Dumb” AI is maybe even worse for us, and yet it’s been embraced as something more.

      But if the argument is about what we call it and not what’s actually happening, nothing will be solved. One can refer to it as AI in a discussion and also talk about its actual defining functions (LLM and so forth). It might even make the point stronger instead of deflecting to what it’s called.

    • technocrit@lemmy.dbzer0.com · 1 point · 2 days ago

      But both of those terms are still in common use… by lying grifters, their collaborators, and their victims.

      Personally I don’t use either of these bullshit terms in earnest. I’m not a bullshitter.

  • pewgar_seemsimandroid · 3 points · 2 days ago

    someone said to call it “computer rendered anonymized plagiarism” so i have that in my clipboard.

  • pinball_wizard@lemmy.zip · 5 points · 2 days ago

    Standard disclaimer: I do not want to grow up to be like Stallman.

    That said, every time I have thought that Stallman was too pedantic about terminology and the risks involved, I have been wrong, so far.

    • James R Kirk@startrek.website (OP) · 2 points · 2 days ago

      He’s a good barometer to check in with and gauge how far we’ve strayed from a lot of the idealism of the 1980s. Someone has to keep the flame alive.