That’s because it’s absolute shit.
That makes sense. People are starting to use it more, but for less elaborate things where they still have control.
I feel like this is a normal cycle with new tech. People get really excited about all the possibilities and don’t have any experience to ground their expectations. Eventually people use it enough to realize what is more realistically achievable. Then the mentality shifts from “magical solution to everything” to “a tool that is good at some things and bad at others”.
The problem for OpenAI and their ilk is that the actual legitimate uses of LLMs are so few and niche that they cannot hope to pay for the immense cost of developing and running these systems. Like, cool. Sure I can use copilot to generate derivative meme images, but what’s that actually worth to me monetarily? I’m not going to subscribe to a monthly service just to access a tool for shit posting.
That sounds like a problem for the people dumping money into these companies and keeping them afloat.
Often known as the “Gartner Hype Cycle”
I have had nothing but issues attempting to use AI to help with code. The number of times it has given me something clearly incorrect, even though I’m not experienced enough to spot “wrong code” easily, is way too high.