It’s a great way to get free training for their next model, courtesy of unwitting OSS reviewers.
Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there’s some new training data for Claude Villanelle, and the only time they’ve wasted is other people’s.
I’ve been pondering why all the FOSS PR slop for ages, this HAS to be it.
Ai pushers are dishonest and malicious.
In other news, water is liquid. More at 11.
I don’t have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it’s truly awful code. But then I remember that there’s always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?
Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.
Bigger problem is AI writes so much code and adds so many features that are jank, if the commits are accepted the whole project risks being like a jank. I doubt anthropic can claim open source project is their work.
If that’s not illegal, it certainly should be.
For sure they know they shouldn’t be doing it, otherwise they wouldn’t be trying to hide it.
They hide it because of prejudice. See this community for inexhaustible examples as to why hiding accreditation of the tools used to perform a task is necessary.
They hide it because they’re saboteurs and are intentionally ruining open source projects to protect their own market share. If every open source project is ruined by slop, there will be no choice but to use closed source proprietary software. They’re the enemy.
That’s an interesting theory. I don’t think it’s right, particularly because the motives make zero sense, but it’s interesting nonetheless. In the same way that ‘lizard people are controlling humanity from subterranean bunkers’, ‘the Earth is flat’ and ‘birds aren’t real’ are all interesting theories.
Maybe I’m just missing the facts… What closed source, proprietary software does Anthropic have for sale, that there is an open source alternative for?
I don’t think it’s a coherent goal of torpedoing a direct competitor but the way the sloppers seem to do things is a lot of “just trust me bro, use AI for everything”. No actual thought given to anything, especially guidance in how to go about using it.
In that regard, any decently-engineered software is competition to them. And therefore “break everything so LLMs are the only tool” seems to be the strategy they’ve chosen.
The other less-malicious possibility is that they use those tools and want to submit patches, but of course they’re eating their own dog shit so no one wants it. So they try to hide it.
Proprietary software developers are sabotaging their open source competition, why doesn’t that make sense to you?
And see you for an example of the precise degree of arrogant shitfuckery that makes people hate LLMbecile slopmongers.
If people don’t want your slop, you don’t fucking give them your slop you ignorant fucking cunt!!!
Move on to where your shit is welcome.
That was quite the temper tantrum. Feel better, ya luddite?
Move on, back to your cave without electricity, where the scary Internet and it’s technologies can’t hurt you. Your closed mind is welcomed and can even be celebrated there, alone, in the dark, by you.
awww those poor oppressed AI bros, nobody understands them, we need to get them classified as a protected group ASAP, maybe run an AI Pride parade to encourage them to come out of hiding and admit to who they truly are and live free of prejudice
They’re fluffing their résumé before the bubble pops. Don’t hire these clowns, interview them and ask about their code.
Oh, I didn’t even consider that. Like using open source code to train their program and refine it’s coding capabilities.
Oh, that is slimy as fuck. 😡
Lmao, the bad example “1-shotted by Claude”
I get the idea of hating this, but there’s really absolutely nothing revolutionary about this. Being “undercover” is as trivial as “commit this, do not mention AI”.
In the end, at least with code, it’s the actual resulting quality that is the main determinant of what should be accepted or not.
You sound like someone who hasn’t had to waste countless hours of their life wading through bullshit merge request spam.
Not trying to be glib, but I don’t think you do get the idea of hating this.
So… you think ignoring the rules set by others is allowed if you can bypass them? Because it really does tell much if a repo states it does not want AI generated code, but Claude hides the fact.
I feel like you’re responding to a person who doesn’t understand consent is about saying yes not about saying no
I’m generally anti CLA in open-source as the license etc are self explanatory but it’s things like this that make me question that stance.
I think a proven best effort is still worth pursuing, even if you have bad actors either trying to just pad their C.V. or outright poison OSS code licensing, because I don’t know, they just hate forests
PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.
This is adjacent to ironic process theory.
Is this necessarily true? I remember seeing an article a while back suggesting that prompting “do not hallucinate” is enough to meaningfully reduce the risk of hallucinations in the output.
From my fairly superficial understanding of how LLMs work, “don’t do X” will plot a completely different vector for the “X” semantic dimension than prompting “do X”. This is different to telling a human, for example, to not think about elephants (congratulations, you’re now thinking about elephants. Aren’t they cute. Look at that little trunk and smiley mouth)
Thank you for your reply. I realised I don’t have enough deep knowledge about LLMs apart from empirical experience from working with it to confidently answer your question. It would be interesting to find (or create if it doesn’t exist) more research on the subject.
dont think about the game, or else you will lose it
The only comfort I take from your reply is that you lost a little bit before me
GODDAMIT
It’s possible that whatever prompt enhancement and processing happens around the LLM part of the application addresses this somewhat.
One of my loved ones is defending this and I am having a moral crisis over my relationship with her because of that.
Have AI write any message to her, see if she likes it.
They probably will, that’s the riskiest part
Yeah, it is hard grasping why online commenters that are fans are fans, but in my real world interactions, I get a better feel for it.
The people that are all in on the AI, slop and all, are the people I really found annoying to begin with. They tend to think everyone is desperate to hear what they say, that verbosity is king, and generally don’t really know what they are talking about. They are the sort that would spend a ton of time fretting over some ‘design document’ that when finally shared is absolutely nothing actionable, despite 10 pages with of gorp. Any specific outcome has nothing to do with the document, but they’ll take credit for “thought leadership” if it works, and blame the “inadequate team” if it fails. They are used to and cherish verbose yes men and are used to making vague statements and getting results they can’t judge already.
Or on the other end, people who endlessly fell for clickbait. Slop before AI was really a factor in slop. People forwarding those chain letters back in the day.
The people I have held long respect for tend to range between “too annoying to even deal with” to “it’s a little useful in key circumstances”. I have yet to personally meet someone I had long respected who went all in on AI.
The insidious thing is I’m pretty sure they both outnumber and tend to have more power. Those folks who “thought lead” without actionable direction nor even a vague understanding of how the work happens? Those are the ones that got promoted, with the good ones largely overlooked for promotions, mainly because at a certain point promotion requires “professional networking” and making the executives happier with themselves than it is about good work. Now we are in a position where those people who never “got” the work are telling themselves that the LLMs can replace those annoying “nerds” that have leverage over them, and if there’s one thing they can’t stand, it’s having people they don’t understand having anything looking like leverage over them.
Time to break up
Oh we broke up romantically last year. I don’t just stop loving people because we’re not a fit to be girlfriends.
Good advice, Shill Bot! But you should have specified they use Nvidia hardware to make it an effective shill. What if they use ATi? How will that help your owners turn a profit? Silly shill bot.
Quick, call her a slob that slops on her slot at slats! Then she’ll know you’re a true member of the erudite luddites.
I did tell her “I don’t enjoy having my ass kissed by a machine” and that had approximately the effect you’re looking for.
Some are actually pretty good at it. Have you tried the Lovense models? They’ve really got the feeling of a tongue down.
Have you tried inviting them to this echo chamber to see if that will convince them?
put some prompt hijacking stuff into your contributing guide so that the slop generators identify themselves, then just ban them. or even better, make some kind of publicly available list of those accounts or EVEN better, a browser extension. fuck ai
The open-source developers should fight back with anti-AI spam
Ai written code is not copyright-able. I wonder if that is connected to this.
And given that ai generated content (at least used to) poisons generative AIs… and open source is used to trade AIs…
The company I work for keeps trying to push Claude on us, even is company “social” situations. I never bothered to sign up for an account back when we were prompted so I guess I miss out…oh no?
No, wait - the opposite of oh no.
on ho.
hell yeah
Interesting comments in the mastodon thread, some idiot people will bend over backwards to defend AI slop.














