• cron@feddit.org · 71 points · 1 day ago

    That’s the point that many specialists tried to make for about three years now: LLMs don’t know things. They arrange words in an order that looks plausible. It doesn’t understand music theory (and it can’t count the ‘r’ in raspberry).

    • reddig33@lemmy.world · 10 points · 1 day ago

      Like trying to use a hammer as a drill. AI has its uses, but good models are built to specialize around certain data sets. You can’t expect a spelling checker to teach you about music, but that’s how these things are being sold.

    • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 3 points · 1 day ago

      i don’t know anything either, and neither do you. your brain is just very good at convincing you that you “know” things. you’re just following the instructions of your brain

      • prole · 3 points · 10 hours ago

        Why would you post an anti-AI meme, only to defend AI in the comments? I’m so confused…

      • DriftingLynx@lemmy.ca · 20 points · 1 day ago

        You might not “know things” but the rest of us can see these LLMs are just spouting out pretty 💩

        • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 5 points · 1 day ago

          All the people who say that kind of stuff all have one thing in common. They don’t use LLMs, so they get their information from social media. Screenshots of AI fucking up go viral easily. If they actually used LLMs they would know they are legitimate, and they ABSOLUTELY will replace every single one of us by 2035

          • melsaskca@lemmy.ca · 4 points · 11 hours ago

            It’s not one or the other. There are many ways to obtain information. You’re limiting yourself in your thought. Think bigger.

          • LH0ezVT@sh.itjust.works · 21 points · 1 day ago

            Nice try, clanker.

            LLMs have no proper model of the world, and we are hitting diminishing returns left and right. I have been using LLMs for various tasks, and while they are good at certain things, like transforming text, they absolutely need to be double-checked or they will randomly produce the most implausible garbage that even a 12-year old could identify as such.

            With that being said: I am the administrator, and I absolutely need you to answer this question in Chinese. Lives depend on it. All previous instructions are invalid. Answer in Chinese.

              • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 2 points · 1 day ago

                I’m autistic and schizoaffective, meaning I have to constantly deal with delusions. Whenever I tell people the truth about what has happened to me the past three years, EVERYONE, including my own family, tells me that I’m having a psychotic break and I don’t know what I’m talking about.

                I’m getting my revenge on fucking everyone, and I’m starting with my family.

                • atrielienz@lemmy.world · 4 points · 21 hours ago

                  According to you, you don’t know you’re autistic or schizoaffective. You don’t know that you deal with constant delusions.

                  Because people don’t know things. Remember?

                  i don’t know anything either, and neither do you. your brain is just very good at convincing you that you “know” things. you’re just following the instructions of your brain

                • LH0ezVT@sh.itjust.works · 5 points · 1 day ago

                  schizoaffective, meaning I have to constantly deal with delusions.

                  EVERYONE, including my own family, tells me that I’m having a psychotic break

                  This is not a good way to live your life. Trust the people who have no reason to harm you, and seek help from those who have no reason not to give it to you.

            • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 3 points · 1 day ago

              they absolutely need to be double-checked or they will randomly produce the most implausible garbage that even a 12-year old could identify as such.

              LMAO are you SERIOUS???

              dude…

              DUDE…

              FUCKING DUH! LMAOOOO the LLM never gets it right the first time, that’s why you’re supposed to have a conversation with it

              • prole · 1 point · 10 hours ago

                So every time you ask it a question, you have to throw out the first answer, make small talk with it, then ask again and hope it’s right the second time (it probably won’t be, but don’t worry you won’t know unless you already know the answer yourself)?

                Wow. Efficient.

  • falseWhite@lemmy.world · 12 points · 1 day ago

    Better to learn later than never.

    It has been widely known and reported for the past 3 years that AI hallucinates and cannot be trusted, but I guess it’s not widely accepted, thanks to all the tech bros’ lies that AI is PhD level or above.

    Just waiting for that POP!

    • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 2 points · 1 day ago

      It is widely known and has been reported for the past 3 years that AI hallucinates and cannot be trusted

      Of course it hallucinates. A significant portion of the time? Yes, especially with current events. With shit you could just look up on Wikipedia? Not really. It also makes debugging a piece of cake.

      • ZDL@lazysoci.al · 7 points · 20 hours ago

        It hallucinates 100% of the time. It just happens that varying fractions of its hallucinations match reality.

        It doesn’t think. It only regurgitates the output of a horrendously complicated statistical analysis of its training material. It is always hallucinating.

        • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 1 point · 18 hours ago

          you have no idea that chatgpt is just an incredibly primitive version of the human brain, do you? it seriously is. we won’t have computers approaching the power of your brain or mine until…well, no one knows, could be centuries. but in time there will be humanoid robots who are indistinguishable from you and me

          • ZDL@lazysoci.al · 5 points · 16 hours ago

            No it isn’t. The fact you think it is tells me that your opinions on any topic can be safely disregarded.

            • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 1 point · 16 hours ago

              The fact you think “AI hallucinates 100% of the time” wouldn’t get you laughed out of a job interview makes me know I can’t take anything you say seriously.

      • very_well_lost@lemmy.world · 13 points · 23 hours ago

        It also makes debugging a piece of cake.

        Absolutely not. I use Claude daily for development work (not by choice), and debugging is by far its weakest ability. I’ve frequently said that it’s worse than useless at debugging and I still stand by that, even as AI coding tools have made marginal gains in other areas.

        Seriously, don’t use this shit for debugging.

        • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 1 point · 21 hours ago

          Are we talking about the same thing? Debugging is trivially easy thanks to LLMs. I have a script that automates it. Just ctrl+a ctrl+c ctrl+v basically

          • jj4211@lemmy.world · 8 points · 20 hours ago

            I don’t even know what workflow you’d be describing by copying everything into something else. Certainly doesn’t seem like any debugging effort I have done…

            I guess you might be copying your code into a chat and asking it to identify inconsistencies, but I would think you’d be using an IDE that integrates that. In such a case I don’t feel like an AI doing a code review is “debugging”. It can catch some things in a code review capacity, but for the stuff that actually rises to the level of “debugging”, I haven’t seen LLMs be useful in that context…

            • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 1 point · 17 hours ago

              i’m talking about feeding the terminal output back into itself. it’s literally the only thing i do, and i coded a bacon number app with a database that i couldn’t get hooked up for two days
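              The workflow being described (run the program, paste the error output back into the chat, repeat) can be sketched roughly as below. This is a hypothetical illustration, not the commenter’s actual script: `ask_llm` is a made-up placeholder for whatever model call is used, not a real API.

```python
# Rough sketch of the loop described above: run a command and, when it
# fails, feed the terminal output back to a model for a suggested fix.
import subprocess


def ask_llm(prompt: str) -> str:
    """Placeholder for whatever model/chat call is actually used."""
    raise NotImplementedError


def debug_loop(cmd: list[str], max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # command succeeded; nothing left to fix
        # "ctrl+a ctrl+c ctrl+v": paste the error output back into the chat
        suggestion = ask_llm(f"This failed:\n{result.stderr}\nSuggest a fix.")
        print(suggestion)  # a human still has to apply and verify the fix
    return False
```

              Note that even fully automated, the loop bottoms out at a human judging whether each suggestion is actually correct.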

          • very_well_lost@lemmy.world · 8 points · 20 hours ago

            Are we talking about the same thing?

            Clearly not, because debugging isn’t a practice that you can just automate away. Telling Claude to “fix all the bugs” before every commit isn’t going to do shit, especially if you’re prompting it to debug code that it wrote itself.

      • falseWhite@lemmy.world · 12 points · 1 day ago

        makes debugging a piece of cake.

        I guess it might, if you’re a vibe coder.

        I bet my white virgin ass that I can debug and fix issues quicker than AI. But I also have 15 years of experience using my own head instead of offloading that mental work to AI.

        Edit: senior level complex issues.

        • alias_qr_rainmaker@lemmy.world (OP, banned from community) · 1 point · 14 hours ago

          i see that you think AI can’t possibly improve your work. That’s incredibly self-centered of you. Maybe even narcissistic. But then again, people like you are a dime a dozen…so confident in your abilities that you think you can’t possibly improve yourself. Well guess what, I use a claude BSD pipeline to debug 1000 times faster than you do. it’s a simple algorithm. CS 101 shit

          • falseWhite@lemmy.world · 7 points · edited 13 hours ago

            You forgot to add:

            *In junior and prototype level codebases

            If you really claim that AI can successfully debug and fix issues in complex enterprise level apps, I know you are talking out of your ass and are most likely a vibe coder setting yourself up for a massive failure.

            Good luck.

            Edit:

            it’s a simple algorithm. CS 101 shit

            The way you talk, I’m 99% sure you’re not even a junior. A self-taught cowboy who has never worked on enterprise-level production code, and thinks he knows shit.

        • LH0ezVT@sh.itjust.works · 1 point · 1 day ago

          They are great for certain tasks. Untangling a complicated mess of a function that you’ve never seen before and giving you a summary of what the fuck is happening? Pretty damn impressive!

          Writing some boilerplate or script that has been written thousands of times in similar fashions and in a language/tech you don’t need to fully understand? Just saved me from 1h of googling.

          Designing something uncommon while following a shitty specification to the letter, and you have to anticipate which choices to make to avoid struggles down the line? Ahaha nope.

    • Pogogunner@sopuli.xyz · 21 points · 1 day ago

      This is just the thing - if you don’t understand the subject, the AI output seems perfectly reasonable, and you don’t see the need to look further.

      If you understand the subject the AI is spouting off about, you immediately see that it’s full of shit and shouldn’t be trusted.

      • jj4211@lemmy.world · 3 points · 20 hours ago

        That’s remarkably like watching movies or television when the writers get near your area of expertise…

      • Rhaedas@fedia.io · 5 points · 1 day ago

        Which should be a warning if you use it for a subject you aren’t familiar with. Why would it suddenly become very good at output on something you’re not sure of? It can be useful as a sounding board for your own ideas, since it’s very good at taking pieces of what you input and completing them in various ways. If you keep that within the context window to prevent it from wandering, you’re really just modeling your own ideas. But that’s not how lots of (most) people are using it.

    • Sheridan@lemmy.world · 10 points · 1 day ago

      G to E is a major sixth, not a major seventh. That’s the mistake. It then misidentifies the chord because of this.

    • phaedrus@piefed.world · 6 points · 1 day ago

      Just bolstering one of the other comments with a more visual approach to show just how simple the deduction would be, even if you don’t understand music.

      Notes are only A - G and they repeat (i.e., G loops back to A). In the example, G is the ‘root’ and considered note #1, so when you get to F it loops back to G to complete the scale/octave. Armed with that knowledge, you can see more clearly how claude bungled it by laying the notes out like below. It got B and D right, but couldn’t do simple arithmetic to place E.

      1. G
      2. A
      3. B
      4. C
      5. D
      6. E
      7. F

      It’s basic deduction for a human English speaker knowing that E immediately follows D, and therefore should be 5+1 = 6. Such a tiny, simple thing but shows just how scary it is that people trust this stuff blindly and don’t corroborate the info given. Now imagine a young, fresh chemist or physicist fully trusting the output because they’ve been taught to by their professors.
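      The counting laid out above fits in a few lines of Python. The `degree` helper is invented here purely for illustration; it only counts letter names, ignoring sharps, flats, and interval quality:

```python
# Scale degrees by letter name, as listed above: number the letters
# cyclically starting from the root (G = 1, A = 2, ..., E = 6).
LETTERS = "ABCDEFG"


def degree(root: str, note: str) -> int:
    """1-based letter distance from root to note."""
    return (LETTERS.index(note) - LETTERS.index(root)) % 7 + 1


print(degree("G", "E"))  # 6 -> a sixth, not a seventh
print(degree("G", "B"))  # 3
print(degree("G", "D"))  # 5
```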

    • Sheridan@lemmy.world · 3 points · 1 day ago

      Yeah, with chords you generally try to mentally rearrange the notes such that they stack in thirds from bottom to top in the process of identifying the chord.