The thing I hate the most about AI and its ease of access: the slow, painful death of the hacker soul—brought not by war or scarcity, but by convenience. By buttons. By bots. […]

There was once magic here. There was once madness.

Kids would stay up all night on IRC with bloodshot eyes, trying to render a cube in OpenGL without segfaulting their future. They cared. They would install Gentoo on a toaster just to see if it’d boot. They knew the smell of burnt voltage regulators and the exact line of assembly where Doom hit 10 FPS on their calculator. These were artists. They wrote code like jazz musicians—full of rage, precision, and divine chaos.

Now? We’re building a world where that curiosity gets lobotomized at the door. Some poor bastard—born to be great—is going to get told to “review this AI-generated patchset” for eight hours a day, until all that wonder calcifies into apathy. The terminal will become a spreadsheet. The debugger a coffin.

Unusually well-written piece on the threat AI poses to programming as an art form.

  • plumbercraic@lemmy.sdf.org · 28 points · 11 months ago

    The thing I hate the most about the printing press and its ease of access: the slow, painful death of the scribe’s soul—brought not by war or scarcity, but by convenience. By type. By machines. […]

    There was once magic here. There was once madness.

    Monks would stay up all night in candlelit scriptoriums with bloodshot eyes, trying to render illuminated manuscripts without smudging their life’s work. They cared. They would mix pigments from crushed beetles just to see if they’d hold. They knew the smell of burnt parchment and the exact angle of quill where their hand would cramp after six hours. These were artists. They wrote letters like master craftsmen—full of devotion, precision, and divine chaos.

    Now? We’re building a world where that devotion gets mechanized at the door. Some poor bastard—born to be great—is going to get told to “review this Gutenberg broadsheet” for eight hours a day, until all that wonder calcifies into apathy. The scriptorium will become a print shop. The quill a lever.

      • plumbercraic@lemmy.sdf.org · 15 points · 11 months ago

        That’s wildly incorrect and somehow serves to underscore the original point.

        Scribes were not glorified photocopiers; they had to reconcile poorly written and translated sources, do a lot of research on imperfect and incomplete information, try to figure out if the notes in the margin should be included in future transcriptions, etc. Their work required real subject matter expertise, training and technique, was painstaking and excruciating, and many hand written manuscripts are absolutely works of art.

        • floquant@lemmy.dbzer0.com · 9 points · 11 months ago

          Then I apologise for my ignorance on the matter, but you’re now making the same point as the author - were you mocking their perspective or sharing it?

          There’s a lot that goes behind “work” that you don’t see in the final output. It’s important to care about that art, and a shallow copy is just not the same as “the real thing”. Right?

          • plumbercraic@lemmy.sdf.org · 9 points · 11 months ago

            Both, I think? Respecting the craft and expertise of the way we used to do things is important, but the author is being melodramatic and I wanted to poke some fun.

          • WanderingThoughts@europe.pub · 3 points · 11 months ago

            That’s something people have wondered since the beginning of the industrial revolution. Is a mechanically mass-produced widget the real thing? People even make fun of organic, locally grown, artisanal food and recycled handmade furniture. Shein is quite popular with its fast fashion. Except the rich will have tailor-made clothes, of course.

      • AdamBomb@lemmy.sdf.org · 7 points · 11 months ago

        There are definitely parts of programming that are boring and repetitive. I’ve been using AI to speed that up. I still do the creative parts 100% myself.

    • XeroxCool@lemmy.world · 6 points · 11 months ago

      The slow, painful death of technological privacy - brought not by war, not by scarcity, but by the convenience of another app that saves you three clicks per transaction, paired with the forced usage of certain functions within an existing environment.

  • Valmond@lemmy.world · 26 points · 11 months ago

    I’m distancing myself more and more from computers. It already was “use this library”, then “use this app”, and now it seems it’s just “ask the AI”.

    I took up painting and chess, viable replacements I hope.

      • Valmond@lemmy.world · 3 points · 11 months ago

        Yeah, same here. I’m working on a “work less, spend less” lifestyle, but for some reason it’s quite hard to convince people to hire you at less than full time. Personally I think I’d do the same job, or better, in 4 days. 3 days would yield less total work but more per day.

        🤷🏼‍♀️

  • Curious Canid@lemmy.ca · 25 points · 11 months ago

    It amazes me how often I see the argument that people react this way to all tech. To some extent that’s true, but it assumes that all tech turns out to be useful. History is littered with technologies that either didn’t work or didn’t turn out to serve any real purpose. This is why we’re all riding around in giant mono-wheel vehicles and Segways.

  • killeronthecorner@lemmy.world · 23 points · 11 months ago

    When you outsource the thinking, you outsource the learning.

    Stealing this because it manages to put technical concerns into hand-waving manager speak.

    And a pretty solid article. I think it leans on micro-enhancements to performance a little too hard at the end, but the rest jibes with my experiences working in a large company where non-technical bloviators are leading the charge of changing the landscape of a field they don’t understand and have no training in.

    “We’re bringing AI to OKRs!” they say hungrily, as their weak arms attempt to pull the rug.

    “Sure you are”, I say, pretending to stumble.

  • cdkg@lemm.ee · 11 points · 11 months ago (edited)

    I get it when people say “hey, this happened with every new tech…”, but this one in particular has many inherent problems: it’s built on stolen material, it doesn’t encourage critical thinking, and it will create mini socio-cognitive bubbles, distancing us from each other more and more. It’s built that way because the people who make it want it to be like that.

    Edit: the stolen material includes the way artists execute their art, say drawing (Studio Ghibli, for example) or music, not just copyrighted works.

      • Doomsider@lemmy.world · 4 points · 11 months ago

        Of all the arguments against AI, this is one that doesn’t hold any water. Copyright is bullshit, and AI proved it in a very visceral way.

        Plenty of good reasons to hate AI besides Intellectual Property.

        • AugustWest@lemm.ee · 3 points · 11 months ago

          Particularly code. Bill Gates grabbed code out of the trashcan to learn, then actively tried to kill open source.

          Math, and code in particular, is something you have to work with to learn. The concepts cannot be stolen; the only way you could steal anything is by copying the whole program exactly, and AI does not learn (or at least retain) that way.

        • cdkg@lemm.ee · 1 point · 11 months ago

          It’s not bullshit when AI uses an artist’s signature work to generate its results: make it sing or draw exactly like someone. It directly hurts the artist, taking them out of the equation.

    • cabbage@piefed.social (OP) · 12 points · 11 months ago

      This is just obviously not the case to anyone who bothers reading it. It’s an original piece of writing.

      The only thing that could hint at AI here is the use of em-dashes, which is a bullshit tell—I use them all the time myself as well. They’re right there for anyone with a compose key on Linux.

      • PlantJam@lemmy.world · 9 points · 11 months ago

        I’ve noticed people cite em-dashes as concrete proof that something is AI-generated, but I’ve seen them inserted/auto-corrected by Word plenty of times.

        • XeroxCool@lemmy.world · 8 points · 11 months ago

          I didn’t know they were illegal to use as a human. I use them often to tack on a related sentence fragment when a technical description is getting too long for the common smartphone user - at least, what I perceive to be too long.

        • dustyData@lemmy.world · 7 points · 11 months ago

          Good writers use em-dashes with care and intent. They’re a tool like everything else, and they abound in literature. That said, LLMs do tend to use them every time and everywhere.

  • Quacksalber@sh.itjust.works · 9 points · 11 months ago

    That will happen to most “artforms”, or jobs that require research. I notice that in myself as well. I now ask an AI for regex strings, or when I want to implement a function I’m unsure about, I ask an AI to see what it’s doing first. Critical thinking is still involved, but less than it used to be.
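
    (An illustrative aside, not from the original comment: the regex case is one where verification stays cheap even when generation is outsourced. A few asserts in Python, against a hypothetical AI-suggested pattern, confirm whether it actually does what was asked before any critical thinking gets skipped.)

    ```python
    import re

    # Hypothetical AI-suggested pattern: match ISO 8601 dates (YYYY-MM-DD).
    pattern = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

    # Cheap sanity checks before trusting the suggestion.
    assert pattern.search("released on 2024-06-01")  # full date: matches
    assert pattern.search("no date here") is None    # no date: no match
    assert pattern.search("1234-5") is None          # partial date: no match
    ```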

    • supersquirrel@sopuli.xyz · 8 points · 11 months ago

      Critical thinking is still involved, but less than it used to be.

      Well at least you are honest about it lol

  • Fedditor385@lemmy.world (banned) · 8 points · 11 months ago

    You can use the same argument for just about everything: “in the past it was better”. Remember when kids knew how to actually write with pens, and had to send a letter and wait a few weeks until it even arrived? The damn telephone and internet ‘ruined’ it with their ease of access and convenience.

    • debil@lemmy.world · 11 points · 11 months ago

      Did you read the article? Because there certainly is no such argument in there.

  • ShortFuse@lemmy.world · 7 points · 11 months ago

    This is a trash take.

    I just wrote the ability to take a DX9 game, stealthily convert it to DX9Ex, remap all the incompatible commands so it works, proxy the swapchain texture, set up a shared handle for that proxy texture, create a DX11 swapchain, read that proxy into DX11, and output it in true, native HDR.

    All with the assistance of CoPilot chat to help make sense of the documentation and CoPilot generation and autocomplete to help setup the code.

    All in one day.

      • ShortFuse@lemmy.world · 1 point · 11 months ago

        The point is to show it’s uncapped, since SDR just goes up to 200 nits. It’s not tonemapped in the image.

        But, please, continue to argue in bad faith and complete ignorance.

  • Lka1988@lemmy.dbzer0.com · 4 points · 11 months ago

    Hi, we still exist. I still build old shit to do things it’s not supposed to do. We’re not going away.

    • Ex Nummis@lemmy.world · 21 points · 11 months ago

      Yes, as long as the information you get from the AI is correct. Which we know is absolutely not the case. That is the issue. If AI’s output could be trusted 100% things would be wildly different.

      • jyl@sopuli.xyz · 4 points · 11 months ago

        Unlike vibe coding, asking an LLM how to access some specific thing in a library when you’re not even sure what to look for is a legitimate use case.

        • IsoKiero@sopuli.xyz · 10 points · 11 months ago

          You’re not wrong, but my personal experience is that it can also lead you down a pretty convincing but totally wrong path. I’m not a professional coder, but I have at least some experience, and I’ve tried the LLM approach when trying to figure out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers, sometimes it’s totally wrong in a way that’s easy to spot, and at worst it gives something that (at least to me) seems like it could work. In that last case I then spend more or less time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.

          At that point I would actually have saved time by doing things the old-fashioned way (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you’re doing to evaluate the answers LLMs give you.
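
          One cheap defense against that failure mode (a sketch of my own, not from the thread): before building on a function name an LLM suggested, check that it actually exists in the installed library. Python’s `hasattr` and `inspect.signature` make that a one-liner; `json` stands in here for whatever library was suggested.

          ```python
          import inspect
          import json  # stand-in for whatever library the LLM suggested

          def exists_in(module, name: str) -> bool:
              """Return True if `name` is a real attribute of `module`."""
              return hasattr(module, name)

          # json.dumps is real; json.serialize is a plausible-sounding hallucination.
          assert exists_in(json, "dumps")
          assert not exists_in(json, "serialize")

          # The real parameter list, instead of a guessed one:
          print(inspect.signature(json.dumps))
          ```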

          • jyl@sopuli.xyz · 3 points · 11 months ago

            Yeah, I guess that can happen. For me, it has saved much more time than it has wasted, but I’ve only used it on relatively popular libraries with stable APIs, and I don’t ask for complex things.

        • dustyData@lemmy.world · 3 points · 11 months ago

          Until it gives you a list of books and two thirds don’t exist and the rest aren’t even in the library.

          • jyl@sopuli.xyz · 1 point · 11 months ago (edited)

            The worst I’ve got so far hasn’t been hallucinated “books”, but stuff like functions from a previous major version of the API mixed in.

            I’m most of the time on the opposite side of the AI arguments, but I don’t think it’s unreasonable to use an LLM as a documentation search engine. The article itself also points out Copilot’s usefulness for similar things, but it seems that opinion lost the popular vote here.

        • Ex Nummis@lemmy.world · 2 points · 11 months ago

          I’ve had great success with using ChatGPT to diagnose and solve hardware issues. There’s plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it’s correct is to actually know what you’re asking about. Anyone without at least passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a small issue could easily have dire ramifications.

          • jyl@sopuli.xyz · 4 points · 11 months ago (edited)

            If you don’t know what the code does, you’re vibe coding. The point is to not waste time searching. Obviously you’re supposed to check the docs yourself, but that’s much less tedious and time-consuming than finding them, if the docs are hard to navigate.

    • Dataprolet@lemmy.dbzer0.com (banned) · 10 points · 11 months ago

      I’d say both are true. If I need a quick meal I’m glad I can just order something ready-made, but I also enjoy cooking an intricate meal for hours. OP is maybe worried that people will forget about the latter and only prefer the ready-made solution.

      • cabbage@piefed.social (OP) · 11 points · 11 months ago

        I think chapter 2 does a good job presenting the advantages.

        Maybe you inherited someone else’s codebase. A minefield of nested closures, half-commented hacks, and variable names like d and foo. A mess of complex OOPisms, where you have to traverse 18 files just to follow a single behaviour. You don’t have all day. You need a flyover—an aerial view of the warzone before you land and start disarming traps.

        Ask Copilot: “What’s this code doing?” It won’t be poetry. It won’t necessarily provide a full picture. But it’ll be close enough to orient yourself before diving into the guts.

        So—props where props are due. Copilot is like a greasy, high-functioning but practically poor intern:

        • Great with syntax.
        • Surprisingly quick at listing out your blind spots.
        • Good at building scaffolding if you feed it the exact right words.
        • Horrible at nuance.
        • Useless without supervision.
        • Will absolutely kill you in production if left alone for 30 seconds.

    • kibiz0r@midwest.social · 6 points · 11 months ago

      So if library users stop communicating with each other and with the library authors, how are library authors gonna know what to do next? Unless you want them to talk to AIs instead of people, too.

      At some point, when we’ve disconnected every human from each other, will we wonder why? Or will we be content with the answer “efficiency”?

    • Valmond@lemmy.world · 3 points · 11 months ago

      That was why it was so entertaining; getting a lil homebrew to run on the Nintendo DS was fun.