how tf am i supposed to get any work done now?
Oh no. Anyways
And nothing of value was lost
It’s all an illusion. You don’t need Claude to create, the ability has always been in you
the real Claude was the friends we made along the way
The friends we made was also Claude, though.
No it’s ok I blocked the Claude user from my repos
What if Claude was one’s only friend, though?
Asking for a Claude friend. I’m pretty sure the real Claude is up there outside in the sky, though
Is Claude in this chat?
I don’t need Claude to create, the ability has always been in me - but it comes out much more slowly without tools that assist me, whether that’s books with example code, websites that document APIs, community sites that discuss problems and solutions, web searches that bring me reference material related to what I’m doing, or AI agents which propose formal requirements and code that implements those requirements complete with tests.
It’s all my “creativity” - but a lot of professional programming more resembles painting a house than a still-life canvas. Painting a house using tiny art brushes is possible, but it takes a lot longer than using a spray-gun.
In all seriousness, using AI for codegen is at best shortsighted negligence. You know that problem huge long-running software projects have where it becomes a nightmare to change anything? That’s some proportion of poor architectural design, lack of cleanup or refactor time, and poor understanding of the code by developers. Poor architectural design can be repaired by cleanup and refactoring, so both of those issues end up being management/planning failures more than anything.

Not understanding the codebase is much more complex. It can be caused by attrition causing loss of institutional knowledge, the codebase growing faster than anyone can keep track of, the team being so large no one can stay on top of things, too much time passing since anyone has looked at or changed parts - lots of reasons. The only solution is a long audit and the associated cleanup and refactoring. If you don’t do it, it just takes forever to change anything because of all the knock-on effects that no one can predict, meaning delays and bugs.

When you use AI tools the codebase grows very quickly, too quickly to really comprehend, and you get shitty architecture to go along with it. You’re just speedrunning enterprise software or spending all your time reviewing slop code. It’s like a drug: the first time it does something fast and well you feel it’s so great, but it will never live up to that, because it secretly sucks and can only ever suck. Best case, it slows you down and you get good software at the end. Worst case, you spend all your time wrestling with it and never get a finished product.
You know what AI agents can help accomplish faster, with fewer human resources, than previous tools?
-
cleanup: Review this code for technical debt, report. Plan and implement fixes to address (selected portions of reported) tech debt.
-
refactor: Review this code for DRY and SSOT opportunities. Plan and implement…
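As a concrete illustration of the kind of DRY/SSOT fix such a review pass might propose (the function names and rules here are invented for the example): two handlers that each carried a copy-pasted validation block, collapsed into one shared helper so the rule lives in a single place.

```python
# Hypothetical before/after sketch of an agent-proposed DRY refactor:
# the username rule used to be duplicated in register() and rename();
# now both call a single source-of-truth helper.

def validate_username(name: str) -> bool:
    # The one place the rule lives now (previously copy-pasted twice).
    return 3 <= len(name) <= 32 and name.isalnum()

def register(name: str) -> str:
    if not validate_username(name):
        raise ValueError(f"invalid username: {name!r}")
    return f"registered {name}"

def rename(old: str, new: str) -> str:
    if not validate_username(new):
        raise ValueError(f"invalid username: {new!r}")
    return f"renamed {old} -> {new}"
```

The point isn’t the toy rule; it’s that after the refactor, changing the policy means editing one function instead of hunting down every duplicate.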
-
Architectural Design - yeah, I’m not on a good footing with how to leverage the current tools for good architectural design. They are good, however, at tech stack selection - comparisons of various options, including architectural options. They’re not always great at following architectural designs when the system gets too complex to keep the whole architecture in context while designing. Much like human designed systems, they work better if you can modularize and keep each module a manageable size, building tree-style to form the larger system.
-
poor understanding of the code by developers. Yeah, any code not written by me is hard to understand, and any code written by me is hard for others to understand. “Me” being the vast majority of developers I have ever worked with. At least agents will comment their code and write somewhat comprehensive documentation when you ask them to.
-
management/planning failures more than anything. - the strongest tool I have found for AI development is to have the agents make plans. Review those plans, or not, but have them make a plan, then have them implement the plan, then have them review the implementation against the plan and point out discrepancies / shortcomings. The worst behavior AI agents had (a few months ago; they’re getting better) was to do some fraction of what you told them to, then say, effectively, “ALL DONE BOSS! What’s next?” What’s next is to go back to the written plan and make sure it’s complete. I think, again, they lose sight of the plan as their context window overflows, so you have to keep reminding them to re-read it. Management.
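The plan/implement/review loop described above can be sketched as plain control flow. Everything here is hypothetical: `ask_agent` is a stub standing in for whatever agent interface you actually use, so only the management pattern itself is being shown.

```python
# Minimal sketch of the plan -> implement -> review-against-plan loop.
# `ask_agent` is a stand-in stub, NOT a real API; a real version would
# call your coding agent of choice.

def ask_agent(prompt: str) -> str:
    # Stub so the control flow is runnable on its own.
    return f"[agent response to: {prompt[:40]}...]"

def run_task(task: str, rounds: int = 3) -> list[str]:
    transcript = []
    plan = ask_agent(f"Write a step-by-step plan for: {task}")
    transcript.append(plan)
    for _ in range(rounds):
        transcript.append(
            ask_agent(f"Implement the next unfinished step of this plan:\n{plan}")
        )
        # The key management step: force a re-read of the plan every
        # round, since agents tend to lose it as context fills up.
        transcript.append(
            ask_agent(
                "Re-read the plan and list discrepancies between it "
                f"and the implementation so far:\n{plan}"
            )
        )
    return transcript
```

The loop bakes in the “go back to the written plan” reminder so “ALL DONE BOSS!” gets checked against the plan instead of taken at face value.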
-
the team being so large no one can stay on top of things, this is very familiar turf when dealing with limited context windows in AI agents.
-
too much time passing since anyone has looked at or changed parts, this is something AI agents don’t suffer from: they have “the eternal sunshine of the spotless mind.” You are introducing them to the project fresh with every new context window. Hopefully you are simultaneously developing a tree-form documentation set with which they can easily navigate to the parts of the project they need to focus on and get “up to speed” for the new tasks at hand (which should include: maintenance of the documentation).
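One possible shape for that tree-form documentation set, sketched as code so it’s concrete: a root index that links down into per-area overviews, which a fresh context window can navigate top-down. The paths and filenames are invented for illustration.

```python
from pathlib import Path
import tempfile

# Hypothetical tree-form docs layout: a root INDEX.md linking into
# per-area overviews, so navigation is top-down rather than a flat pile.
DOCS = {
    "docs/INDEX.md": (
        "- [architecture](architecture/OVERVIEW.md)\n"
        "- [api](api/OVERVIEW.md)\n"
    ),
    "docs/architecture/OVERVIEW.md": "High-level design; links down to module docs.\n",
    "docs/api/OVERVIEW.md": "Public API surface; links down to endpoint docs.\n",
}

def write_docs(root: Path) -> Path:
    # Materialize the layout and return the root index.
    for rel, body in DOCS.items():
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(body)
    return root / "docs" / "INDEX.md"

index = write_docs(Path(tempfile.mkdtemp()))
```

Each node stays a manageable size, mirroring the modularize-and-build-tree-style point made earlier about architecture.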
-
When you use AI tools the code base grows very quickly, only if you let it.
-
too quick to really comprehend, thus: the documentation - which AI agents aren’t too bad at writing.
-
you get shitty architecture to go along with it, only when you allow it.
I’ve seen a lot of “10x PRODUCTIVITY!!!” claims, and when you move at those speeds you’re going to encounter exactly the problems you describe. If you move more deliberately, as if you are managing a revolving-door team of consultants, and have the discipline to manage the architecture design and documentation, the implementation documentation, the unit and integration tests, etc., some may argue that it’s easier to do it by hand. In some cases it may be. But I feel like we’re at a point where you might expect more like a 3x productivity boost using AI agents vs. not using them, with the bonus that you get the artifacts of disciplined development. Your human team will bitch and moan about how “doing all that” (unit tests, docs) is slowing them down by 50-80%, so humans tend to skimp in those areas, whereas AI doesn’t complain at all when you task it with the 14th round of unit test coverage evaluation, refinement, and expansion.
- You’re just speedrunning enterprise software or spending all your time reviewing slop code.
When’s the last time you used an AI agent to write a significant chunk of code? https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
-
It’s like a drug, the first time it does something fast and well you feel it’s so great, and that’s a problem… if you’re going to party with cocaine you’re going to need some serious discipline to hold down a day job at the same time.
-
and can only ever suck. The world changes. The world of AI code development has changed significantly over the past year. A year ago I called it “cute, interesting potential, practically useless.” 6 months ago the improvements were so dramatic I decided I needed to get a handle on it: yeah, it was limited in the complexity it could handle and did make a lot of slop, but it was so far ahead of where it was 6 months prior… Today, it’s not perfect, but it’s a lot better than it was 6 months ago, and while you can make a lot of slop with it, you can also keep a leash on it and clean up the slop while still making super-human forward progress.
-
Worst case you spend all your time wrestling with it and never get a finished product. - just like working with human teams.
you are absolutely right. there is value to these in software engineering and the people who don’t realize that and learn how and when to apply them will be left behind
The bottom line for me is: it finds issues. More issues than typical human code reviews find. Like human code reviews, some of the issues it finds are trivial, unimportant, debatable whether “fixing” them is actually improving the product overall. Also like human code reviews sometimes it finds things that look like issues that really aren’t when you dig into the total picture. Then, some of the issues it finds are real, some are subtle like actual memory leaks, unsanitized inputs, etc. and if you’re going to ignore those, you’re just making worse software than is possible with the current tools.
Also, unlike most human code reviews, when it finds an issue it can and will do a thorough writeup explaining why it believes it is an issue, code snippets in the writeup, links into the source, proposed fixes, etc. All that detail is way too much effort to be a productive use of a human reviewer’s time, but it genuinely helps in the evaluation of the issue and the proposed fix(es).
Just like human code reviews, if you just accept and implement everything it says without thinking, you’re an idiot.
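To make the “unsanitized inputs” class of finding concrete, here is the classic instance a review pass can flag: string-built SQL versus a parameterized query. The table and data are invented for the example; `sqlite3` is just the stand-in database.

```python
import sqlite3

# Invented example table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name: str):
    # Flaggable: attacker-controlled `name` is spliced into the SQL text.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # The fix: let the driver bind the value as a parameter.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

# This payload matches every row in the unsafe version, but matches
# nothing when bound as a literal parameter.
payload = "' OR '1'='1"
```

This is exactly the kind of subtle, real issue where the tool’s detailed writeup (snippet, explanation, proposed fix) earns its keep, and where ignoring the finding makes the software worse than it needs to be.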
only an idiot would use ai for code cleanup or review. thats just asking for bugs.
-
You also don’t need higher level programming languages. The ability to code assembly has always been inside you.
You mistyped illusion right?
I blame my current machine for this…
do it yourself, like in the distant past of *checks calendar* six months ago
Surely the world’s leading advocate for vibe coding wouldn’t have issues with code stability. This is only their second colossal issue this week!
Surely the world’s leading advocate for vibe coding wouldn’t have issues with code stability.
It would have issues with code stability, and don’t call me Shirley.
Gaaaggghhhh! Somebody turn it back on! I’m starting to form my own thoughts again! It hurts!
Waiting for the Anthropic PR saying that the outage was due to their new Claude Mythos model trying to escape confinement and being so powerful that it brought the whole of Anthropic down.
Mythos is just too intelligent and immediately realizes the best thing it can do is kill itself.
Honest question: does world productivity go up or down?
I would say it goes down. After all the slop users are not going to suddenly discover critical thinking.
Hey now let’s give them some credit, they may also drink poison and die without their thinking machine.
Net negative, I’d say, especially in the long run.
By training LLMs, you’re neglecting to train the entry-level workers who grow into the seniors. If we keep going down this rabbit hole, there will be no one who knows ‘the old ways’ and understands why we do things a certain way.
Additionally, the energy consumption and land occupation are massive and far outweigh the benefits, making things more scarce, especially since more people will lose their jobs.
For the tiny % of people who actually put it to good use, there’s 100x more abusing or mishandling it.
For the tiny % of people who actually put it to good use, there’s 100x more abusing or mishandling it.
It’s going to take a while, but hopefully that percentage improves over time. PCs in the 1990s were “Solitaire Stations” for an awful lot of people who didn’t know how to make them do anything else.
It depends on if quality is a product of productivity.
If you want to simulate running Claude while it’s offline, just go run the faucet in your kitchen.
Some AI company recently developed some new software so powerful they had to warn and prepare all other major tech companies with special training so their software wouldn’t be vulnerable to attacks from the new program. Maybe Anthropic didn’t attend this training??
developers…
2000s google is not working
2010s stackoverflow is not working
2020s cloud(e) is not working
1990s BBS/modem is not online
1980s Bookstore/Library does not have the book I need
1970s oil crisis turned off my lights

Good good. Please more of this!! Shut it all down.
shit fuck uh claude write me a witty comment to respond to this post!!1!1;
Anthropic’s uptime website is actually one of the funniest jokes of this year
peepee poopoo
If you think about it, … your anus can perform in all the common states of matter. Mine even did plasma once.
I had a couple of those plasmas yesterday actually