A project called Poison Fountain is asking website operators to feed LLM crawlers poisoned data.
The project page links to URLs that serve a practically endless stream of poisoned training data, and its authors claim this approach is very effective at ultimately sabotaging the quality and accuracy of any AI trained on it.
Small quantities of poisoned training data can significantly damage a language model.
The page also gives suggestions on how to put the provided resources to use.
Been thinking about making one of these too, especially since I have a catchy name: asbestos
Me too, but with procedural image generation. Use some templates composited with a CPU blitter (extremely fast and effective), add some random descriptive text, then done. I don't know how well my theory would work IRL.
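To sketch what I mean (purely hypothetical, assuming Pillow; every template, word list, and filename here is made up):

```python
# Hypothetical sketch of the template-blit idea; assumes Pillow is installed.
import random
from PIL import Image

TILE = 64
# Stand-ins for real templates: flat-colour tiles we blit together.
templates = [
    Image.new("RGB", (TILE, TILE), color)
    for color in [(200, 30, 30), (30, 200, 30), (30, 30, 200), (220, 220, 40)]
]

ADJS = ["serene", "rusty", "luminous", "ancient", "fractured"]
NOUNS = ["sunset", "harbor", "orchid", "turbine", "glacier"]

def make_poison_pair(grid=8):
    """Blit random tiles into a grid (cheap CPU compositing) and attach
    a caption that has nothing to do with the pixels."""
    canvas = Image.new("RGB", (grid * TILE, grid * TILE))
    for y in range(grid):
        for x in range(grid):
            canvas.paste(random.choice(templates), (x * TILE, y * TILE))
    caption = f"a {random.choice(ADJS)} {random.choice(NOUNS)} at dusk"
    return canvas, caption

if __name__ == "__main__":
    img, caption = make_poison_pair()
    img.save("poison_0001.png")
    print(caption)
```

The point being: CPU-side compositing is cheap enough to crank out endless image/caption pairs, and the captions never match the pixels.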
Suppose I were optimistic about this technology but pessimistic about its current stage of development; then I'd expect this to act as a cure. It's a problem they'll have to solve. A test they'll have to pass.
If someone builds a mechanism inside those things that constructs a graph of syllogisms, no kind of poisoned input data will be able to hurt them.
So: this is a good thing, but when people call it a rebellion, it's not one.
Not all problems can be cured immediately. Battles are rarely won with a single attack. A good thing is not the same as nothing.
A test they’ll have to pass.
This makes me chuckle, as they invented euphemisms like ‘hallucinations’ because their LLMs can’t do what they promise. Fabulous marketing, but clearly they didn’t do enough testing.
as they invented euphemisms like ‘hallucinations’
Seems like a pretty accurate word to use, no? Could also use fabrication, concoction, phantom, or something else? I think “lie” and its synonyms are not accurate, since that requires intent. Since the LLM does not have intent, it cannot “lie”.
That’s why “bullshit,” as defined by Harry Frankfurt, is so useful for describing LLMs.
A lie is a false statement that the speaker knows to be false. But bullshit is a statement made by a speaker who doesn’t care if it’s true or false.
In other words, I said that it doesn’t matter what they do until this problem is solved. So if this is being described as some sort of rebellion against AI (or “AI”), then no. At the point where it becomes a dangerous technology in itself, and not just for the economy, it won’t be one.
Samsung and Anthropic independently published data showing how little bad data it takes to effectively poison very large models. LLMs pretend to be complex, but they aren’t; they won’t continue to improve at the initial rate we got used to seeing. Just ask OpenAI.
I’m not talking about LLMs. I’m talking about future developments built on LLMs. Eventually there will have to be some resolution of conflicting knowledge and logical connections; otherwise they won’t become remotely as useful as advertised.
Gotcha. So it’s something you have imagined.
That’s called thinking
It is called imagination if it has not yet happened.
What if it has happened in my imagination?
“You’re not opposing me. All you’ve done is create a problem that will stop me until I have it figured out.” is the description of every struggle between opposing forces, so it’s interesting that you disagree with that.
Not really, more like “if I can find a key to the door, I can open it, so engraving a fixed combination for the door lock on the same key doesn’t change much”.
Poisoned data is fundamentally valid data. Concepts of logical connectivity, and of statements being true or false, are what’s needed to make use of it.
You ascribe far more to the internal workings than is reasonable.
I have a 10-20 GB GitHub/GitLab mirror, and I am constantly under attack from crawlers run by top US technology corporations and LLM startups. Whenever I ban one IP range they switch to another; I don’t know if those fuckers have tickets in their systems to do it manually or they just deploy this shit all over the planet. From what I observe during the attacks I mitigate, the best way to poison them is to just set up a gitea instance with a poisoned code repository and a couple hundred revisions, because what they are most interested in is the HTML representation of the diff between two git revisions.
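To make that concrete, here's a rough sketch of generating such a repository; the file contents and commit messages are invented, it assumes git is on PATH, and gitea would then just serve the result:

```python
# Rough sketch: a repo with a couple hundred junk revisions, so crawlers
# chasing per-commit diff pages get garbage. All contents are invented.
import random
import subprocess
from pathlib import Path

WORDS = ["frobnicate", "quux", "blorp", "zargle", "wibble"]
GIT_ID = ["-c", "user.name=poison", "-c", "user.email=poison@example.org"]

repo = Path("poison-repo")
repo.mkdir(exist_ok=True)
subprocess.run(["git", "init"], cwd=repo, check=True)

src = repo / "main.py"
for i in range(200):  # a couple hundred revisions
    # Each revision rewrites the file with plausible-looking nonsense,
    # giving the diff renderer something fresh every time.
    src.write_text(
        f"def {random.choice(WORDS)}_{i}(x):\n"
        f"    return x * {random.randint(2, 99)}  # {random.choice(WORDS)}\n"
    )
    subprocess.run(["git", "add", "main.py"], cwd=repo, check=True)
    subprocess.run(
        ["git", *GIT_ID, "commit", "-m", f"fix {random.choice(WORDS)} edge case"],
        cwd=repo, check=True,
    )
```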
Why isn’t there anything in the DMCA for stopping crawlers? It has stuff about requiring crawlers to follow attribution and whatnot, but nothing about disallowing crawlers in the first place. Stupid as shit.
Small quantities of poisoned training data can significantly damage a language model.
Source: trust me bro.
Nightshade tried the same thing and it never worked.
Here’s your source: https://www.anthropic.com/research/small-samples-poison
Nightshade did work on older models. Newer models adapted to prevent poisoning.
This is a new approach.
Yeah, Nightshade was defeated by a blur-and-sharpen pass IIRC lol. Still, it was a good first step.
Idiots: This new technology is still quite ineffective. Let’s sabotage its improvement!
Imbeciles: Yeah!
Corpos: Don’t steal our stuff! That’s piracy!
Also corpos: Your stuff? My stuff now.
Bootlickers: Oh my god this shoe polish is delicious.
You have to pick one: either you like the current copyright system or you don’t. You can’t have it both ways.
Corporations want the existing copyright system for their own products but simultaneously want to freely scrape data from everyone else.
I see that as a copyright problem, not a specific LLM one.
This issue is largely manifesting through AI scraping right now. Additionally, many scrapers intentionally ignore robots.txt. Currently, LLM scrapers are basically just bad actors on the internet. Courts in the US have also ruled in favor of a number of AI companies when sued, so it’s unlikely anything will change. Effectively, if you don’t like the status quo, stuff like this is one of your few options.

That’s not even getting into whether we actually want these companies to improve their models before resolving the problems of energy consumption and the potential displacement of human workers.
All crawlers have ignored robots.txt since the very start. Anyway, if THAT is the problem, then THAT is the problem, not LLMs as a whole.
If this were true (which is nearly impossible since you said “all”), stuff like Anubis wouldn’t exist, since you could just toss up a crowd-sourced robots.txt and be done with it.
Third thing: Point out obvious hypocrisy.
AI companies could start, I don’t know, maybe asking for permission to scrape a website’s data for training? Or maybe try behaving more ethically in general? Perhaps then they might not risk people poisoning data they clearly never agreed to have used for training?
Why should they ask permission to read freely provided data? Nobody’s asking for any permission, but LLM trainers somehow should? And what do you want from them from an ethical standpoint?
Much of it might be freely available data, but there’s a huge difference between you accessing a website for data and an LLM scraper doing the same thing. We’ve had bots scraping websites since the ’90s; it’s not a new thing. And since scraping bots have existed, we’ve developed a standard on the web to deal with them, called robots.txt: a text file telling bots what they are allowed to do on a website and how they should behave.
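For what it’s worth, honoring it is trivial. This is roughly what a well-behaved crawler does before fetching anything (Python stdlib only; the crawler name and URLs are placeholders):

```python
# What a well-behaved crawler is supposed to do before fetching a page.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch("ExampleBot/1.0", "https://example.com/some/page"):
    print("allowed: fetch the page")
else:
    print("disallowed: skip it")  # the step LLM scrapers allegedly skip
```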
LLM scrapers are notorious for disrespecting this, leading to situations where small companies and organisations have their websites scraped so thoroughly and frequently that they can’t even stay online anymore, as well as seeing their operational costs skyrocket. In the last few years we’ve had to develop ways just to protect ourselves against this; see the Anubis project.
Hence, it’s much more important that LLM scrapers follow the rules than that you and I do so on an individual level.
It’s the difference between you killing a couple of bees in your home versus an industry specialising in exterminating bees at scale. The efficiency is a big factor.
Is the only imaginable system for AI to exist one in which every website operator, or musician, artist, writer, etc has no say in how their data is used? Is it possible to have a more consensual arrangement?
As far as the question about ethics, there is a lot of ground to cover on that. A lot of it is being discussed. I’ll basically reiterate what I said that pertains to data rights. I believe they are pretty fundamental to human rights, for a lot of reasons. AI is killing open source, and claiming the whole of human experience for its own training purposes. I find that unethical.
Killing open source? How?!
The guy is talking about consulting, as I understand it. Yes, LLMs are great for reading documentation; that’s what they’re for. Now people can use those libraries without spending ages reading through docs. That’s progress. I see it as a way to write more open source, because it has become simpler and less tedious.
He’s jumping ship because it’s destroying his ability to eke out a living. The problem isn’t a small one, what’s happening to him isn’t a limited case.
Yes, they should, because they generate way more traffic. Why do you think people are trying to protect websites from AI crawlers? Because they want to keep public data secret?
Also, everyone knows AI companies used copyrighted materials and private data without permission. If you think they only used public data you’re uninformed or lying on their behalf.
I personally consider the current copyright laws completely messed up, so I see no problem in using any data technically available for processing.
Ok, so you think it’s ok for big companies to break the laws you don’t like, cool. I’m sure those big companies will not sue you when you infringe on some of their laws you don’t like.
And I like the way you just ignored the two other issues I mentioned. Are you fine with AI bots slowing sites like Codeberg to a crawl? Are you fine with AI companies using personal data without consent?
I’m fine with companies using any freely available data.
I’m also fine with them using data they can get for free like, I don’t know, weather data they collect themselves?
Data hosted by private individuals and open source projects is not free. Someone has to pay for hosting, and AI companies sucking up data with an army of bots is driving the cost of hosting beyond the means of those people and projects. They are shifting the costs of providing the “free” data onto the community while keeping all the profits.
Private data used without consent is also not free. It’s valuable, protected data and AI companies are simply stealing it. Do you consider stolen things free?
I see your attitude is “they don’t hurt me personally and I don’t care what they do to other people”. It’s either ignorant or straight antisocial. Also a bit bootlickish.
As someone who self-hosts a LLM and trains it on web data regularly to improve my model, I get where your frustration is coming from.
But engaging in discourse here, where people already have a heavy bias against machine-learning language models, is a fruitless effort. No one here is going to provide you catharsis with a genuine conversation that isn’t rhetoric.
Just put the keyboard down and walk away.
I don’t have a bias against LLMs; I use them regularly, albeit only for casual things (movie recommendations) or as an automation tool in work areas where I can somewhat easily validate the output or where the specific task is low impact.
I am just curious, do you respect robots.txt?
I think it’s worthwhile to show people that views outside of their like-minded bubble exist. One of the nice things about the Fediverse over Reddit is that the upvote and downvote tallies are both shown, so we can see that opinions are not a monolith.
Also, engaging in Internet debate is never to convince the person you’re actually talking to. That almost never happens. The point of debate is to present convincing arguments for the less-committed casual readers who are lurking rather than participating directly.
I agree with you that there can be value in “showing people that views outside of their likeminded bubble[s] exist”. And you can’t change everyone’s mind, but I think it’s a bit cynical to assume you can’t change anyone’s mind.
I can’t speak for everyone, but I’m absolutely glad to have good-faith discussions about these things. People have different points of view, and I certainly don’t know everything. It’s one of the reasons I post, for discussion. It’s really unproductive to make blanket statements that try to end discussion before it starts.
For the same reason copyright and licences exist. You may be able to interact with something, because that’s what the licence allows, but still not be able to use it. Companies have faced million-dollar fines for using code without subscribing to a licence that allows them to do so. You may face trial if you distribute content (e.g. movies or music) you are only allowed to watch. The key here is that unless you are explicitly permitted to make further use of something, doing so is considered illegal and punishable. Why would it be any different for AI training?
I would imagine companies would just filter it out.
You’d need some cleverer way of hiding it, or allow it to be self-hosted so that it appears at various URLs.
If I am reading this correctly, anyone who wants to use this service can just configure their HTTP server to act as a man in the middle for the request, so that the crawler sees your URL but is actually retrieving content from the poison fountain service.
If so, that means the crawlers wouldn’t be able to filter by URL, because the crawler never sees the canonical URL of the poison fountain; it only ever talks to your handler.
In other words, the handler is “self-hosted” at its own URL, while the stream itself comes from the fountain’s URL, which the crawler never sees.
So it would be effective at preventing your site from being used as training data.
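If that reading is right, a minimal sketch of the setup might look like this; POISON_URL is a placeholder, not the project’s actual endpoint, and a real deployment would more likely use a proxy rule in nginx or similar rather than Python:

```python
# Hypothetical sketch: this server answers at its own URL but serves
# content fetched from the poison source, whose address the crawler
# never sees. POISON_URL is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

POISON_URL = "https://poison.example.org/stream"  # hypothetical

class PoisonProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch from the fountain; the crawler only ever sees our URL.
        with urlopen(POISON_URL) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), PoisonProxy).serve_forever()
```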
Doesn’t work, but if it makes people feel better, I suppose they can waste their resources doing this.
Modern LLMs aren’t trained on just whatever raw data can be scraped off the web anymore. They’re trained with synthetic data that’s prepared by other LLMs and carefully crafted and curated. Folks here are still thinking GPT-3 is state of the art.
From what I’ve heard, the influx of AI data is one of the reasons actual human data is becoming increasingly sought after. AI training AI has the potential to become a sort of digital inbreeding that suffers in areas like originality and other ineffable human qualities that AI still hasn’t quite mastered.
I’ve also heard that this particular approach to poisoning AI is newer and thought to be quite effective, though I can’t personally speak to its efficacy.
Faults in replication? That can become cancer for humans. AI as well I guess.
Let’s say I believe you. If that’s the case, why are AI companies still scraping everything?
Raw materials to inform the LLMs constructing the synthetic data, most likely. If you want it to be up to date on the news, you need to give it that news.
The point is not that the scraping doesn’t happen, it’s that the data is already being highly processed and filtered before it gets to the LLM training step. There’s a ton of “poison” in that data naturally already. Early LLMs like GPT-3 just swallowed the poison and muddled on, but researchers have learned how much better LLMs can be when trained on cleaner data and so they already take steps to clean it up.
Do you have any basis for this assumption, FaceDeer?
Based on your pro-AI-leaning comments in this thread, I don’t think people should accept defeatist rhetoric at face value.
A basic Google search for “synthetic data llm training” will give you lots of hits describing how the process goes these days.
Take this as “defeatist” if you wish; as I said, it doesn’t really matter. In the early days of LLMs, when ChatGPT first came out, the strategy for training these things was to just dump as much raw data into them as possible and hope quantity let the LLM figure something out. Since then it’s been learned that quality beats quantity, so training data is far more carefully curated these days. Not because there’s “poison” in it, just because it results in better LLMs. Filtering out poison happens as a side effect.
It’s like trying to contaminate a city’s water supply by peeing in the river upstream of the water treatment plant drawing from it. The water treatment plant is already dealing with all sorts of contaminants anyway.
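To illustrate what that “treatment plant” looks like, a toy version of the heuristic filtering step might be something like this; real pipelines add deduplication, quality classifiers, perplexity scoring, and more, and the thresholds here are invented for illustration:

```python
# Toy heuristic document filter; all thresholds are made up.
import re

def looks_like_junk(doc: str) -> bool:
    words = doc.split()
    if len(words) < 50:                     # too short to carry signal
        return True
    if len(set(words)) / len(words) < 0.2:  # highly repetitive
        return True
    symbols = len(re.findall(r"[^\w\s]", doc))
    if symbols / max(len(doc), 1) > 0.3:    # mostly punctuation/noise
        return True
    return False

corpus = [
    " ".join(f"token{i}" for i in range(60)),  # varied and long: kept
    "zzz " * 100,                              # repetitive junk: dropped
]
clean = [d for d in corpus if not looks_like_junk(d)]
print(len(clean))  # -> 1
```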
With the amount of AI generated horseshit out there already, they’ve already pissed in the well.
I don’t think this is a good idea. The pollution spreads. This would corrupt the collective knowledge of humanity a little faster than AI already is doing.