- cross-posted to:
- fuck_ai@lemmy.world
- PurchaseWithPurpose@lemmy.world
cross-posted from: https://lemmy.world/post/44699253
This is clearly a sign that the product failed to draw in enough customers and its viability was overhyped.
Hopefully, it is the start of the AI bubble bursting.
RIP Sgt General Jessica Foster. Semper fudge.
Could this mean fewer wholly AI-generated videos on YouTube? Please let it be so.
Those pathetic AI YouTube commercials where some fake, over-muscled geriatric talks about a miracle cure are the worst.
I just close them out. I’m hoping that somewhere in YouTube’s algorithm of suck they are paying attention to how much those ads are hated.
People will just switch to using other tools, like Google’s Veo.
Doesn’t that require a subscription, though? It may not eliminate the slop videos, but that subscription is going to be a pretty substantial barrier to entry.
Finally, some good news.
It’s so they can repurpose that capacity for developing robots. It’s not good at all.
OpenAI told the BBC on Wednesday that it has discontinued Sora so that it can focus on other developments, such as robotics “that will help people solve real-world, physical tasks”.
Robots aren’t like software, it’s immediately obvious when they don’t work the way they’re advertised whereas chatbots can trick people into thinking they’re way more useful than they actually are. The “fake it till you make it” “move fast and break things” ethos of tech doesn’t work when there’s actual, physical evidence that shit’s busted.
Unpopular Opinion Incoming
I was assigned at work to evaluate a few LLMs for potential adoption, so I spent a solid week doing so.
Most of the “AI is broken and doesn’t work” on here is solid echo chamber cope. It’s more competent than several of my coworkers, though it’s thankfully not ready to replace knowledge workers as it requires a knowledge baseline to best direct it and evaluate its answers.
I still advised against using it for multiple reasons, including ethics, but much of Lemmy is playing make believe about the actual capabilities of LLMs.
Cool anecdote. Every time we actually see real data, though, the numbers don’t reflect much in the way of productivity gains or increased efficiency or better output. People say that LLMs are useful because it feels useful, but we aren’t seeing actual usefulness. The most recent study out of Duke University observes “a productivity paradox, in which perceived productivity gains are larger than measured productivity gains, likely reflecting a delay in revenue realizations.”
A delay. Sure.
I really appreciate your dismissive, arrogant tone. Your casual dismissal of my anecdote really complemented how you provided even less substance to support your own point.
But hey, it got you those “supporting the echo chamber by dunking on dissent” upvotes, and that’s what we’re all here for, right?
Mind telling us what it is that you do? I heard similar things being said in the Plain English podcast last week (and the host was pretty anti-AI before) and I’m starting to wonder if certain jobs are going to be more affected than others.
Or are your coworkers just bad at what they do? :P When I was working tech support, there were people who were worse at their jobs than the bots of the time, let alone LLMs, I swear.
Electrical engineering. My mentioned coworkers are competent but more junior in the field. We did a miniature internal study and found the best models provided accurate, relevant information on the first prompt about 90% of the time when asked to explain or verify concepts. The remainder consisted of hallucinations or misunderstood queries.
They struggled with questions that instead required complex problem-solving, providing some mixture of appropriate solutions, overly complex but still functional solutions, and hallucinated shite.
I recommended that we not move forward with adopting AI in any capacity. While it has some utility for basic information retrieval and fact-checking, it still requires someone with sufficient knowledge to quickly evaluate the quality of its output. Helpful for someone who knows what they’re doing, dangerous 10% of the time for someone who does not. I also highlighted the ethical concerns, many of which my peers were unaware of.
Correct, though there is still good news in a way: OpenAI is running out of money rapidly. So much so that they have to pick and choose one thing over the other.
They would have done the robot thing anyway, but the fact that they had to shut something else down for it shows that the massive deficit is starting to affect them pretty heavily.
Maybe I’m just coping, but IMO the cracks are getting bigger and bigger.
I think one of the reasons consumer-facing AI content is failing so badly is that we have had good video content for decades, so it’s super obvious when a video is just off.
I think this relates to the main reason AI is failing (or at least isn’t popular with consumers): it automatically means the product has lower quality than what you’ve been used to your entire life. It hasn’t really offered consumers anything new.
Good.
So many people seem to have no idea what they’re talking about. This isn’t ending AI video creation, it just cost them a lot of money to offer it. You can generate a video on your own computer already. AI video isn’t going away because one company isn’t letting people do it on their servers for free any more.
Didn’t realise you could do it locally; just checked online and there are several options. So why are these fuckers building huge, resource-greedy data centres…?
Because they want to do a lot of it and faster than a home pc could so they can offer it as a service.
What you can do locally is slower and with much smaller models.
So they can charge you to do it on your phone…
Sweet, now do the whole company.
I absolutely can’t wait for more of this shit to start collapsing financially.
insert nelsonhaha.jpg

OpenAI is the canary.
Let me get this straight: Disney was supposed to give OpenAI a license for their characters, and on top of that invest billions of dollars in OpenAI? The money literally went the wrong way.
Not really. Disney management has drunk the same Kool-Aid as every other management right now: they believe they can fire large parts of their staff and replace them with “AI”, allowing them to achieve similar or even greater productivity at a fraction of the cost (i.e. whatever fee "open"AI charges). To achieve that, they need to give Sora access to their characters (so it can be trained to produce Disney movies) and invest in the company (as a down payment; money that would be recouped by eliminating workers from the equation).
As someone who named their daughter Sora in 2021, this is the best news I’ve gotten this year.
Congrats! 🥳🥳🥳
OpenAI said it will discontinue Sora, the generative-AI video creation platform it launched in late 2024, without providing a reason for the decision.
That is the strongest indication this is the beginning of the end for the AI bubble. Sora burned a ton of processing power, with no clear value proposition, just to keep the hype cycle going a little longer. Shutting down without explanation leaves the most likely one: they are out of helium to pump into the balloon. And if that balloon isn’t inflating, it’s deflating.
It’s not, and it’s probably the opposite.
When Sora launched it was way ahead, but Seedance 2’s release was notably better than any of the other video-gen models, Sora included.
The market is getting commoditized because there’s no moat and OpenAI hasn’t led on pretty much any release for a while now other than Sora, which they’re probably falling behind on now.
This is the opposite of a burst from a tech standpoint, even if OpenAI as a company starts to pop.
TL;DR: This is likely happening because the tech accelerated across the industry in ways OpenAI can’t catch back up to, not because the tech itself is lagging.
Upvoted for a different perspective, but I suspect it ends in the same place.
OpenAI is kept solvent by investor capital, and capital is kept flowing by the perception of OpenAI being the market leader. Seedance being a better model, enough to cause OpenAI to exit the market, still ruptures the perception of value. In a market with no clear profitability path, that’s ground falling away.
It also can’t be simply commoditized because generations (I’m sure even Seedance) are expensive and still not good enough for production use, even if 50% of their consumer base might boycott if a major studio even did use it in production. Commoditization can’t occur when there’s still no economically self-sustaining, market-acceptable “good enough” product. Without that, even if the leader changes, it’s a race between lemmings (sorry) off the cliff.
Isn’t spending billions of dollars with nothing to show for it in the end the definition of a popping bubble?
No, it’s been the basic business model for tech companies for years now. Sadly.
A bubble popping would be when people start asking for their ROI or sell.
Yep - they briefly led in video gen but quickly were overtaken by other groups. There are even open source local models that perform really well now.
They could conceivably catch back up but how does that help them when their priority is chasing the AGI/ASI dragon?
It was a weak attempt to stay relevant against Gemini and Claude. But it’s completely unnecessary now that OpenAI has contracted with the government. They get all that sweet taxpayer money and get to repurpose a ton of GPUs from making stupid videos to supporting that new gov contract.
Maybe you can only watch so many nonsense videos. I assume I’m sadly wrong though.
this is the beginning of the end for the AI bubble
The end of the AI bubble has been beginning for years. The end of the beginning of the end of the bubble might take a few more years.
escalator up, elevator down
I agree that AI is a bubble of trash, but shutting off a part that wasn’t worth the cost is not an indication of an end. They just reduced costs to extend the financial runway. From my point of view, text and coding were more popular anyway.
It was used almost exclusively for slop and slop-based ads, or for videos that shouldn’t be slop. I was on there yesterday and some account had 2 videos of a woman in front of a plain wall talking for 15 seconds about tax implications for investments. A real human could have filmed it with an iPhone in 3 minutes.
But now that’s Google and Grok’s problems, I guess.
Those 15 seconds used a lot of power, like megawatts.
If the audio and video is AI-generated I’m going to assume that the script is, too.
Oh no, it was very specific and hard to cram all the words into the time. Typical Sora: it’s either screaming or long pauses.
I’ve had to do training at work that I’m fairly certain was mostly AI generated. The pics and audio seemed to be. And even the questions that I had to get right in order to complete the training… Some of them just weren’t covered in the training. Come on
I had a safety thing about a year ago, and at the end it asked a question to the effect of “there’s broken glass on the shop floor, what should you do?”. I picked the option to use a broom and dustpan, but apparently the correct tool for glass cleanup is a pair of tongs…