This doesn’t mean LLMs aren’t useful; this is just a funny example, and the title is from the YT video.

  • eleijeep@piefed.social · 20 days ago

    This doesn’t mean LLMs aren’t useful

    This is precisely why LLMs aren’t useful. Stop apologising to the trillion-dollar corporations and their army of AI-brained sycophants just for pointing out the obvious: the emperor has no clothes.

    • Tar_Alcaran@sh.itjust.works · 19 days ago

      LLMs are very useful, because the Emperor has given me quite a few emergency calls to borrow my shirts and shorts at short notice, for a massively inflated fee.

    • lobut@lemmy.ca (OP) · 19 days ago

      Yeah, I work in tech, and the number of times I hear that I’ll have no job soon is too high. Also, my peers are just so proud of outsourcing their thinking. Not to mention the juniors just plugging stuff into AI and, when asked about it, forwarding the question to AI and pasting back an AI response.

      I get too “middle-ground” too quickly with this stuff due to just how much pushback I get.

      • wizblizz@lemmy.world · 19 days ago

        I hear you, and tech is a really uncertain place to be right now. If there’s any place to express a ‘fuck this bullshit’ sentiment, it’s here!

  • brucethemoose@lemmy.world · 20 days ago

    This is proof of why OpenAI’s… opaqueness is so dangerous.

    Chat LLMs tend to treat everything like an exam question or essay prompt, as a direct consequence of how the base models are finetuned. The hand is like a pivot point in a physics problem. But more importantly:

    • The chat context is sort of their whole world. Again, due to training format. So they tend to stubbornly adhere to what has already been said, and have no real means of self correction.

    • While we have no idea what OpenAI actually does, in basically every other open model, the vision component is trained separately from pure text input. Point being these models are alright at the very specific set of vision tasks they’re trained for, but the “coupling” of image input to the bulk of the LLM is very weak. The reasoning they can do over text does not carry over well.
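    To make that “weak coupling” point concrete, here’s a rough sketch of how open vision-language models (LLaVA-style) typically bolt image input onto a text LLM. All the dimensions and names here are made up for illustration; real models differ, and we don’t know what OpenAI does internally.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, purely for illustration (not from any real model).
D_VISION = 8    # vision encoder output dimension
D_TEXT = 16     # LLM hidden/embedding dimension
N_PATCHES = 4   # image patch tokens produced by the vision encoder
N_TEXT = 5      # text tokens in the prompt

# A (usually frozen or separately trained) vision encoder emits one
# embedding per image patch.
patch_embeddings = rng.standard_normal((N_PATCHES, D_VISION))

# The "coupling" is often just a small learned projection (a linear layer
# or tiny MLP) mapping vision features into the LLM's token embedding
# space. Everything the LLM "sees" of the image squeezes through this.
W_proj = rng.standard_normal((D_VISION, D_TEXT))
projected_image_tokens = patch_embeddings @ W_proj   # shape (N_PATCHES, D_TEXT)

# Ordinary text token embeddings from the LLM's own embedding table.
text_tokens = rng.standard_normal((N_TEXT, D_TEXT))

# The projected image tokens are simply prepended to the text tokens;
# from the LLM's perspective, the image is just a handful of extra "words".
llm_input = np.concatenate([projected_image_tokens, text_tokens], axis=0)
print(llm_input.shape)  # (9, 16)
```

    The text model was never pretrained with those image tokens in the loop, which is one reason the reasoning ability it shows on text doesn’t automatically transfer to what comes through the vision side.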


    The point I’m trying to make is that the biggest lie Altman is pitching is ChatGPT as a general intelligence… It’s not. It’s a dumb, narrow tool, like a drill with interchangeable bits. But they package and market it like it’s “smart”, which is a big fat lie.

    Go to any of the smaller AI vendors/models (like Minimax, which released a new model today) and they do the opposite. They show specific uses in specific harnesses, and hyper-optimize for that.