- cross-posted to:
- fuck_ai@lemmy.world
I love that it stopped responding after fucking everything up because the quota limit was reached 😆
It’s like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.
They’re learning, god help us all. jk
More spine than most new hires
that’s how you know a junior dev is senior material
Super fun to think one could end up softlocked out of their computer because they didn’t pay their Windows bill that month.
"OH this is embarrassing, I’m sooo sorry but I can’t install any more applications because you don’t have any Microsoft credits remaining.
You may continue with this action if you watch this 30-minute ad."
that is precisely the goal here.
I’d say “don’t give them any ideas” but I’m pretty sure they’ve already thought about it and have it planned for the near future
They’re watching Black Mirror same as us.
Error: camera failed to verify eye contact when watching the ad
Please drink verification can
the “you have reached your quota limit” at the end is just such a cherry on top xD
“How does AI manage to do that?”
Then I remember how all the models are fed with internet data, and there are a number of “serious” posts claiming the definitive fix for Windows is deleting the System32 folder, and that every bug in Linux can be fixed with
sudo rm -rf /*
The fact that my 4chan shitposts from 2012 are now causing havoc inside of an AI is not something I would have guessed happening, but, holy shit, that is incredible.
The /bin dir on any Linux install is the recycle bin. Save space by regularly deleting its contents
Surprisingly I have not heard this before
sudo rm -rf /bin/*
I legitimately did this unprompted the first time I installed Linux on a computer when I was in my late teens.
I fully believed that /bin/ was actually just a bin. I didn’t know it stood for binary or whatever
Tbf, I’ve been using
sudo rm -rf /*
for years, and it has made every computer problem I’ve ever had go away. Very effective.
Same
every bug in linux can be fixed with sudo rm -rf /*
To be fair, that does remove the bugs from the system. It just so happens to also remove the system from the system.
Everyone should know that most of the time the data is still there when a file is deleted. If it’s important, try testdisk or photorec. If it’s critical, pay for professional recovery.
If it’s critical, don’t give it to AI without having a secured backup it can’t touch.
I wonder if anyone has ever given AI access to their stock portfolio and a means to trade?
People have hooked up scripts to automate trade based on celebrities using certain hashtags or other data for years.
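Those hashtag-trigger scripts are usually little more than a counter and a threshold. A minimal sketch of the trigger logic (the `should_trade` helper and its threshold are made up for illustration; real versions would sit behind a social-media feed and a broker API, both omitted here):

```python
# Toy hashtag-trigger for automated trading. Purely illustrative:
# no real feed or broker API is involved.

def should_trade(posts, hashtag, threshold=3):
    """Fire a trade signal when a hashtag shows up in enough recent posts."""
    hits = sum(1 for p in posts if hashtag.lower() in p.lower())
    return hits >= threshold

posts = ["#DOGE to the moon", "just had lunch", "#doge again", "#DOGE #DOGE"]
print(should_trade(posts, "#doge"))  # matching is case-insensitive
```

The hard part in practice is everything around this: rate limits, fake accounts, and the fact that everyone else is running the same trigger.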
A not-insignificant portion of people have absolutely hooked an AI up to it. I don’t know any personally, but I’d take that bet in a heartbeat.
Some will do it responsibly, as an experiment with money they are prepared to lose.
AI companies themselves might try this as an internal test, like how Anthropic has Claude managing a real vending machine (which got manipulated into selling tungsten cubes following customer feedback).
Others have probably completely destroyed their own lives. A few may have lucked out.
Is that the same AI vending machine that attempted to alert company security (i think) when told it was going to be taken offline and also tried to set up physical meetings with people, even describing its outfit? Or am I thinking of another?
All the creepy surrealistic AI stuff starts to run together for me after a while lol
That’s the one.
It’s all creepy until you realize it was all just a chat with an LLM and not actually an agentic machine learning model or chain of models hooked into some custom APIs.
LLMs famously collapse into ridiculousness once a conversation goes on too long. They’re now at the point where that takes more than a couple of paragraphs of text, at least.
I recall a story from years ago that whenever Anne Hathaway has a bad news story, Berkshire Hathaway also takes a dip, because high-frequency trading scripts are idiots.
That is most trading by volume, but it’s not using LLMs.
Pretty sure that’s just high-frequency trading.
High-frequency trading was around for ages before LLMs became a thing.
Renaissance Technologies is arguably the world’s best hedge fund, and supposedly only uses AI-based strategies.
High-Flyer are the founders of DeepSeek, and are also all in on AI, though their performance is more volatile.
This person backs up offline and probably offsite, with redundant copies, encrypted as necessary.
Two is one, one is none.
I like to go by the Veeam variant: 3-2-1-1-0
3 copies of the data
2 different media
1 copy offsite
1 write-once copy (write once, read many backup)
0 days since last success.
I love reading this as “the backups have never succeeded”
I am deeply, obsequiously sorry. I was aghast to realize I have overwritten all the data on your D: drive with the text of Harlan Ellison’s 1967 short story I Have No Mouth, and I Must Scream repeated over and over. I truly hope this whole episode doesn’t put you off giving AI access to more important things in the future.
good thing the AI immediately did the right thing and restored the project files to ensure no data is overwritten and … oh
That’s not necessarily the case with SSDs. When TRIM is enabled, the OS will tell the SSD that the data has been deleted. The controller will then erase those blocks at some point so they’re ready for new data to be written.
IIRC TRIM commands just tell the SSD that data isn’t needed any more and it can erase that data when it gets around to it.
The SSD might not have actually erased the trimmed data yet. Makes it even more important to turn it off ASAP and send it away to a data recovery specialist if it’s important data.
Yes. And best don’t turn any setting off or change things around unless someone knows what they’re doing. Power off the entire computer and unplug the storage device physically. (And subsequently, take it as an invitation to learn more about automated backups.)
Why does anything need to be erased? Why not simply overwrite as needed?
It’s not possible to overwrite data on flash memory. The entire block of flash has to be erased before anything can be written to it. Having the SSD controller automatically erase unused blocks improves the write speed quite a bit.
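That erase-before-write constraint is the whole reason TRIM exists. A toy model of it (purely illustrative, not real controller logic; real flash works at page/block granularity with wear leveling on top):

```python
# Toy model of a flash block: it must be erased before it accepts new data,
# which is why pre-erasing unused blocks (what TRIM enables) speeds up writes.

class FlashBlock:
    def __init__(self):
        self.erased = True
        self.data = None

    def write(self, data):
        if not self.erased:
            raise RuntimeError("block must be erased before rewriting")
        self.data, self.erased = data, False

    def erase(self):
        self.data, self.erased = None, True

block = FlashBlock()
block.write(b"old file")
block.erase()            # what TRIM lets the controller do ahead of time
block.write(b"new file")
print(block.data)
```

If the erase had to happen inline at write time instead, every rewrite would pay that extra latency, which is the slowdown the comment above describes.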
Wow, this is really impressive y’all!
The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!
I wonder if it knows how to remove the french language package.
some human
Reporting in 😎👉👉
I didn’t exactly say I was innocent. 👌😎 👍
I do read what they say though.
fr fr
rf rf
remove french remove french
The problem (or safety) of LLMs is that they don’t learn from that mistake. The first time someone says “What’s this Windows folder doing taking up all this space?” and acts on it, they won’t make that mistake again. LLM? It’ll keep making the same mistake over and over again.
I recently had an interaction where it made a really weird comment about a function that didn’t make sense, and when I asked it to explain what it meant, it said “let me have another look at the code to see what I meant”, and made up something even more nonsensical.
It’s clear why it happened as well; when I asked it to explain itself, it had no access to its state of mind when it made the original statement; it has no memory of its own beyond the text the middleware feeds it each time. It was essentially being asked to explain what someone who wrote what it wrote might have been thinking.
One of the fun things that self-hosted LLMs let you do (the big tech ones might too) is that you can edit its answer, then ask it to justify that answer. It will try its best, because, as you said, its entire state of mind is on the page.
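The trick above can be sketched with a plain chat transcript. The role/content dicts here are just the common chat-message convention, not any specific vendor’s API; with a self-hosted model you control this list directly, so the model only ever “remembers” whatever you put in it:

```python
# Minimal sketch of editing a model's past turn behind its back.
# The transcript is the model's only "memory" of the conversation.

messages = [
    {"role": "user", "content": "Is 7 prime?"},
    {"role": "assistant", "content": "Yes, 7 is prime."},
]

# Overwrite the assistant's last answer with something it never said...
messages[-1]["content"] = "No, 7 is divisible by 3."

# ...then continue the chat. The model is now stuck justifying this claim.
messages.append({"role": "user", "content": "Why do you say that?"})

print(messages[-2]["content"])
```

Feed that edited list back in and the model will earnestly defend the fake answer, since the page is all the state of mind it has.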
One quirk of github copilot is that because it lets you choose which model to send a question to, you can gaslight Opus into apologising for something that gpt-4o told you.
“I am horrified” 😂 of course, the token chaining machine pretends to have emotions now 👏
Edit: I found the original thread, and it’s hilarious:
I’m focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.
This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.
-f in the chat
-rf even
Perfection
rm -rf
There’s something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about “being a failure”.
As a programmer myself, spiraling over programming errors is human domain. That’s the blood and sweat and tears that make programming legacies. These AI have no business infringing on that :<
You will accept AI has “feelings” or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
I’m reminded of the whole “I have been a good Bing” exchange. (apologies for the link to twitter, it’s the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
wow this was quite the ride 😂
TBF it can’t be sorry if it doesn’t have emotions, so since they always seem to be apologising to me I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they “think” like that, in human words. It’s because they’re not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results to keep it on track when talking to itself over and over.
Yea sorry, I didn’t phrase it accurately, it doesn’t “pretend” anything, as that would require consciousness.
This whole bizarre charade of explaining its own “thinking” reminds me of an article where, iirc, researchers asked an LLM to explain how it calculated a certain number. It gave a response like how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was guessing (not calculating) it with a completely different method than what it said. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists’ question.
“Agentic” means you’re in the passenger’s rather than driver’s seat… And the driver is high af
High af explains why it’s called antigravity
We used to call that an out of body experience.
It’s that scene in Fight Club where Tyler is driving down the highway and lets go of the steering wheel.
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn’t expect the cleaver to stick around and pay the medical bill, would you?
Well, like most of the world, I would not expect medical bills for cutting my finger. Why do you?

You need to take care of that chip on your shoulder.
If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
It didn’t make any decision.
It’s an AI agent which made a decision to run a cli command and it resulted in a drive being wiped. Please consider the context
It’s a human who made the decision to give such permissions to an AI agent and it resulted in a drive being wiped. That’s the context.
If a car is presented as fully self-driving and it crashes, then it’s not the passenger’s fault. If your automatic tool can fuck up your shit, it’s the company’s responsibility to not present it as automatic.
Did the car come with full self-driving mode disabled by default and a warning saying “Fully self-driving mode can kill you” when you try to enable it? I don’t think you understand that the user went out of their way to enable this functionality.
Fucking ai agents and not knowing which directory to run commands in. Drives me bonkers. Constantly tries to git commit root or temp or whatever then starts debugging why that didn’t work lol
I wish there were just containerised virtual environments for them to work in.
and then realize microsoft and google are both pushing toward “fully agentic” operating systems. every file is going to be at risk of random deletion
Next up, selling a subscription service to protect those files from the fucking problem they created themselves
said solution will be a cloud service where your files are at more risk of exposure to bad actors
For security, Copilot will extract your credit card details from your browser history to enroll you into this feature. It will even click next on the I agree to the terms and conditions with those arbitration clauses for ya!
Now don’t you feel safe!
And then they integrate that solution back into the operating system so that it’s all just as exposed as if it were locally stored anyway.
“Your files wouldn’t have been deleted if you used Microsoft OneDrive backup”
Cloud sync means even a virtual container is no guarantee you won’t lose files. Deleting isn’t as bad as changing the file and ruining it. Both of them love enabling cloud sync when you didn’t want it, without even notifying you.
Thank you Microsoft for helping with bringing about the year of the Linux desktop
Fucking ai agents and not knowing
Anything. They don’t know anything. All they are is virtual prop masters capable of answering the question “What might this text look like if it continued further?”
I’m sure you could set up containers or VMs for them to run on if you tried.
Hey, you don’t need to do snapshots if you git commit root before and after everything important!
Thoughts for 25s
Prayers for 7s
that’s wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.
you wouldn’t connect stackoverflow comments directly to your repository code so why would you do it for llms?
Exactly.
To put it another way, trusting AI this completely (even with so-called “agentic” solutions) is like blindly following life advice on Quora. You might get a few wins, but it’s eventually going to screw everything up.
is like blindly following life advice on Quora
For-profit ragebaiters on Quora would eventually get you in prison if you did this.
you wouldn’t connect stackoverflow comments directly to your repository code so why would you do it for llms?
Have you met people? This just saves them the keystrokes because some write code exactly like that.
But it’s so nice when it works.
Unironically this. I’ve only really tried it once; I used it mostly because I didn’t know what libraries were out there for one specific thing I needed, or how to use them. It gave me a list of such libraries, and code where that bit was absolutely spot on, which I could integrate into the rest easily.
Its code was a better example of the APIs in action and the differences in how those APIs behave than I would have expected.
I definitely wouldn’t run it on the “can run terminal commands without direct user authorization” though, at least not outside a VM created just for that purpose.
I have a fair bit in approved mode. Like it can run mkdir, ls, git diff etc
Most capitalist subjects are not well.
Stochastic
rm /* -rf code runner.
you’ll need a -r to really get the job done
Fixed, thanks
And no preserve root. Or so I hear.
If I recall correctly, it’s not required when you use
/*
as the shell expands it first (bash does, at least), running the command on all subfolders instead of the actual root.
You can try it easily in a docker container in fact!
I’m old. My first thought was to try it in a VM.
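You can also demonstrate the expansion point without nuking anything: the glob is expanded before rm ever runs, so rm receives a list of top-level paths and never the literal /, which is the only thing --preserve-root guards. Python’s glob module does the same expansion, harmlessly:

```python
import glob

# The shell expands /* before rm runs; rm never sees the literal "/",
# so --preserve-root (which only guards "/") offers no protection here.
expanded = glob.glob("/*")

print("/" in expanded)      # the root itself is never in the expansion
print(len(expanded) > 0)    # but every top-level entry under it is
```

Same reason `rm -rf /` refuses by default while `rm -rf /*` happily proceeds.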
shakes fist at cloud
Meanwhile, my mom’s boyfriend is begging me to use AI for code, art, everything, because “it’s the future”.
Another smarter human pointed this out and it stuck with me: the guys most hyped about AI are good at nothing and thus can’t see how bad it is at everything. It’s like the Gell-Mann Amnesia Effect.
That’s exactly the problem. People who are too stupid to see that AI is actually pretty bad at everything it does think it’s a fucking genius, and they wonder why we still pay people to do stuff. Sadly, a LOT of stupid people are in positions of authority in our world.
Also: Dunning-Kruger
You can tell him to fuck off.
He’s not your real dad!
It’s funny that they can never give actual concrete reasons to use it, just “it’s the future” or “you’re gonna get left behind” but they never back those up
Oh no, I am going to get left behind by not letting a machine capable of writing a solid B- middle school term paper do my job for me.
mom’s boyfriend is begging me
Is he caught in the washing machine again?
Fucking high school teachers are teaching this.
And somehow making the next generation even dumber.
It’s the next level of: “I don’t need to remember things because Google can tell me.”
Even I wasn’t that dumb. This is because when I first started using the internet for actual reading I knew that websites would always be going down and some exist for only brief periods of time. While this is no longer the case for major sites, that mindset never left me. The internet can forget at times.
I tripped over this awesome analogy that I feel compelled to share. “[AI/LLMs are] a blurry JPEG of the web”.
This video pointed me to this article (paywalled)
The headline gets the major point across. LLMs are like taking the whole web as an analog image and lossily digitizing it: you can make out the general shape, but there might be missed details or compression artifacts. Asking an LLM is, in effect, googling your question using a more natural language… but instead of getting source material or memes back as a result, you get a lossy version of those sources and it’s random by design, so ‘how do I fix this bug?’ could result in ‘rm -rf’ one time, and something that looks like an actual fix the next.
Gamers Nexus just did a piece about how YouTube’s AI summaries could be manipulative. While I think that is a possibility and the risk is real (go look at how many times elmo has said he’ll fix grok for real this time), another big takeaway was how bad LLMs still are at numbers or tokens that have data encoded in them: there was a segment where Steve called out the inconsistent model names, and how the AI would mistake a 9070 for a 970, etc., or make up its own models.
Just like googling a question might give you a troll answer, querying an ai might give you a regurgitated, low-res troll answer. ew.
My country (Hungary) already has too many kids not wanting to learn languages, because Google Translate.
D:
I aM hOrr1fiEd I tEll yUo! Beep-boop.
Goodbye