Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it’s a productive and thoughtful comment and not pure hate brigading.

Consider upvoting the issue to show community interest.

Edit: perhaps I should also mention this one here as a similar discussion: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you’ll forgive me this slight tangent, but more eyes could benefit this one too.

  • BlameTheAntifa@lemmy.world · 9 points · 1 day ago

    I’m sure many here know how militantly anti-AI I am, but things like “intellisense” have been around for decades. If someone is using an ML model for code prediction and they actively write code with their own fingers, I don’t see it as much different than earlier code hinting systems. That’s far different from allowing an AI to perform a task autonomously, like an Avian Intelligence.

    • ell1e@leminal.space (OP) · 4 points · 1 day ago

      Problem is, LLM code prediction will likely plagiarize too. Some argue “it’s too short to get sued over”, but even if that were universally true (I don’t know, IANAL), that still leaves the ethics and morals of seemingly stealing lines hook, line, and sinker, down to every bit of punctuation and intricacy, from GPL code bases without attribution.

      Some simply think that’s bad for FOSS, notwithstanding the other ways LLMs seem to harm FOSS.

      (And old-school “IntelliSense” is semantics-based and doesn’t do that.)

  • Rentlar@lemmy.ca · 45 points · 2 days ago

    Code written with the help of an LLM and then reviewed is different from what happened with Lutris, where the developer decided to obfuscate their use of AI-generated code.

    The approach you suggest, a total ban, is one I can agree with in principle, and I think it’s noble. But it could lead to people accusing each other of using AI code whether or not that actually happened, or to others just hiding it and submitting anyway without the reviewers knowing, which is counter-productive.

    I’ve followed Lemmy development for 3 years now; the devs’ approach is slow and steady, to a fault in some people’s views. I think it’s a better use of open source resources if we encourage candor and honesty. If the repo gets spammed with AI-generated PRs, then it will probably be blanket banned, but contributors accurately documenting and reporting their use of AI will help direct reviewers’ attention to ensure the code is not slop quality or full of hallucinations.

    • ell1e@leminal.space (OP) · 24 points · edited · 2 days ago

      In my opinion, this argument is exactly the same as saying “we can’t stop people from stealing GPL-licensed code and copy-pasting it into our project, so we might as well allow it and ask them to disclose it.”

      You can try to argue AI may actually be useful, which seems to be what they did, and that would more fairly inform a policy, in my opinion. I don’t think your argument does.

      • MrLLM@ani.social · 12 points · 1 day ago

        Yeah, and on top of that, all the reasons we hate AI:

        • It’s a plagiarism machine
        • It still hallucinates, which can end up borking projects
        • It has and will continue to fuck up the RAM and storage markets
        • It consumes a shit ton of energy
        • It’s ruining everything with poor-quality products
      • Rentlar@lemmy.ca · 5 points · 1 day ago

        My argument is that a total ban on AI use is more comparable to saying “code from any other coding project is not allowed”. It will start unproductive arguments over boilerplate, struct definitions, and other commonly used code.

        The broadness and vagueness of “no AI whatsoever” or “no code from any other projects whatsoever” will be more confusing than saying, “if you do copy any code from another project, let us know where it’s from”. Then the PR can be evaluated, and rejected if it’s nonfree or just poor quality, rather than incentivizing people to pass off other people’s code as their own, risking bigger consequences for the whole project. People can be honest about getting inspiration from Stack Overflow, a reference book, or another project, if they are allowed to be.

        I’m not saying AI should be blanket allowed; the submitter needs to understand the code well enough to revise it for errors themselves if the devs point something out. They can’t just say “I asked AI and it’s confident that the code does this and is bug-free”.

      • hitmyspot@aussie.zone · 3 points · 1 day ago

        Not all AI, or rather LLM, output is slop. Some is useful. The reason for review is to differentiate. And I’m not just talking about coding; I’m talking about their actual useful functionality.

        It would be great if they didn’t hallucinate or produce slop. It would also be great if companies using them instead of workers meant we worked fewer hours and had more leisure time, rather than fewer paying jobs and more stress. The LLM is not at fault for the structure of society.

        LLMs and AI are tools. If used appropriately, there should be no issue. If used inappropriately, it should be called out. Certainly where there is a risk of something appearing good on the surface but not actually being good, like AI-generated code, marking it as such seems reasonable. Banning it doesn’t get rid of it; it hides it. It exists and is now in the world. We need to have policies that support appropriate use.

        • cloudskater@piefed.blahaj.zone · 2 points · edited · 24 hours ago

          I’m sorry, but no matter how many times I hear this argument, it never addresses the issues with AI that exist regardless of its use case. There are plenty of other unacceptable things in this world that we apply strict bans to. No, a ban will never rid the world of the issue, but that doesn’t mean you concede to “appropriate” uses of a maliciously envisioned technology. Someone in the world will always be hungry, but that doesn’t mean we settle for mostly eradicating world hunger; we try to do all we can.

          No amount of “but it’s for a good purpose” will erase the issues inherent to LLMs and “generative” AI. I like the idea of pure tedium being automated in the future, but so long as it’s based on this tech as it currently exists, any genuine attempt to create something positive is a non-starter. I’m not a “luddite”; I don’t hate progress or new ideas. I simply refuse to support projects that rub shoulders with hyper-capitalist theft machines that destroy the planet.

          • hitmyspot@aussie.zone · 1 point · 24 hours ago

            In your analogy, we don’t ban processed food because some people go hungry. We use agriculture to feed as many as possible with better foods. We try to do better. But more production is generally better. That’s what AI is: the equivalent of processed food. It’s not real food, and it’s less healthy, but it’s functional.

            Same with AI. It is an input-and-output machine. It has costs associated with it. We assess the output on its merits and cost. If the output is slop, it should be discarded. If it is functional, it gets used.

            • cloudskater@piefed.blahaj.zone · 2 points · 24 hours ago

              I knew I shouldn’t have used that analogy, because then the focus would be redirected to it and I’d end up defending it instead of the position it was meant to represent.

              I’ve said what I intended to say. I don’t wanna argue over the uses of AI when it’s the foundation itself that’s rotten. There’s no good way to make use of “gen” AI as it stands.

              • hitmyspot@aussie.zone · 1 point · 23 hours ago

                It’s fine that you have that opinion. I disagree, and so do many others. I’ve used AI to generate notes, checklists, letters, emails, work templates, etc.

                The output was correct and valid in most cases. What about the foundation is rotten, in your view? The fact that it’s based on regurgitating other people’s work, the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don’t mean the technology is inherently unusable or unethical.

                Banning it because of the views of some is unfair to the views of others. I do think that marking it is appropriate, so that anyone who objects to its use can avoid it. I would be concerned that over time it becomes impossible to avoid, though. However, that’s the point of open source: people can fork projects at the point where there is no AI code (except where that is purposefully obfuscated).

                • cloudskater@piefed.blahaj.zone · 1 point · 23 hours ago

                  “What about the foundation is rotten, in your view? The fact that it’s based on other people’s work being regurgitated, or the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don’t mean the technology is inherently unusable or unethical.”

                  It literally does. There’s no point in this discussion if we’re disagreeing over something so fundamental.

    • wheezy@lemmy.ml · 12 points · 2 days ago

      Great perspective and response. Far too many “fuck AI” people are literally advocating for the equivalent of “fuck computers” and “more tedious labor, please!”

      The reasons you should hate AI are its exploitation of labor and its overuse leading to energy and environmental impacts. Trying to ban AI for all applications is just counter-productive and impossible. If the anti-AI crowd is just filled with people who want it banned outright for everything, well, then the pro-AI crowd that wants to slam it into anything and everything will win out.

      We need to be pointing to good applications of AI that can benefit open source projects in a responsible way as examples of how it should be used, not spamming them with hate comments because “AI bad”.

      • ell1e@leminal.space (OP) · 16 points · edited · 1 day ago

        far too many “fuck AI” people are literally advocating for the equivalent of “fuck computers” and “more tedious labor please!”

        Not what I’m advocating for.

        We need to be pointing to good applications of AI

        Feel free to do so, but studies are not on your side. Edit: this is a reminder we’re talking about LLMs for code and documentation.

        The only somewhat clearly useful use case appears to be code review, but then you don’t need to allow submitting any LLM-rewritten code or text, since code reviews can be done in natural language. And if you use server-side LLMs, you’ll probably have agreed to ToS that let them take your data.

        And LLMs seem to be amazing at plagiarism.

  • cloudskater@piefed.blahaj.zone · 4 points · 1 day ago

    I’m glad I moved to PieFed a few months after joining Lemmy. Tankies have no real values and will claim to be radicals while holding no better values than our oppressors.

    • uuj8za@piefed.social · 4 points · 1 day ago

      Yeah! I was OK with Lemmy, but recently (for unrelated reasons) decided to try PieFed. I’m liking PieFed better: lots of nice UI/UX improvements over Lemmy. I didn’t realize what I was missing.

    • Lost_My_Mind@lemmy.world (mod) · 8 points · 2 days ago

      Well…dang it! I guess I’ll have to start using my piefed account more. But also, that doesn’t solve the issue of it being the same content, you know, because of the whole concept of the fediverse and how it works.

      Also this whole community is on Lemmy. How’s THAT going to work???

      • uuj8za@piefed.social · 3 points · 1 day ago

        that doesn’t solve the issue of it being the same content, you know, because of the whole concept of the fediverse

        Isn’t that a feature, not a bug? Lemmy can’t single-handedly ruin the fediverse. PieFed and Mbin can lead in a different direction, and Lemmy can’t hold all the content hostage.

      • ell1e@leminal.space (OP) · 6 points · 2 days ago

        It’s sad. I’m hoping perhaps some well-reasoned comments might still have some impact, but I admit that it might be a long shot.