- cross-posted to:
- technology@lemmy.ml
Imagine how much more they could’ve just paid employees.
Nah. Profits are growing, but not as fast as they used to. Need more layoffs and cut salaries. That’ll make things really efficient.
Why do you need healthcare and a roof over your head when your overlords have trouble affording their next multi-billion-dollar wedding?
Someone somewhere is inventing a technology that will save thirty minutes on the production of my wares and when that day comes I will tower above my competitors as I exchange my products for a fraction less than theirs. They will tremble at my more efficient process as they stand unable to compete!
I understand this is a reality, especially in the US, and that it’s really happening, but is there truly no one, anywhere in the world, taking advantage of the laid-off skilled workforce?
Are they really all going to end up as pizza riders or worse, or are there companies making a long-term investment in a workforce that could prove useful for different purposes in both the short AND long term?
I am quite sure that’s what Novo Nordisk is doing with their hire push here in Denmark, as long as the money lasts, but I would be surprised no one is doing it in the US itself.
My theory is the money-people (VCs, hedge-fund managers, and such) are heavily pushing for offshoring of software engineering teams to places where labor is cheap. Anecdotally, that’s what I’ve seen personally; nearly every company I’ve interviewed with has had a few US developers leading large teams based in India. The big companies in the business domain I have the most experience with are exclusively hiring devs in India and a little bit in Eastern Europe. There’s a huge oversupply of computer science grads in India, so many are so desperate they’re willing to work for almost nothing just to get something on their resume and hopefully get a good job later. I saw one Indian grad online saying he had 2 internship offers, one offering $60 USD/month and the other $30/month. I’ve heard offshore recruitment services and Global Capability Centers are booming right now.
We had that recently: 10% made redundant and a pay freeze because we were not profitable enough. Guess what, morale tanked, and they only slightly improved it by giving everyone +10 days of holiday.
You misspelled “shares they could have bought back”
It’s as if it’s a bubble or something…
And the next DeepSeek is coming out soon

sigh
Dustin’ off this one, out from the fucking meme archive…

https://youtube.com/watch?v=JnX-D4kkPOQ
Millennials:
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Gen Z:
Oh, oh dear sweet summer child, you thought Covid was bad?
Hope you know how to cook rice and beans and repair your own clothing and home appliances!
Gen A:
Time to attempt to learn how to think, good luck.
Time for your third ‘once-in-a-life-time major economic collapse/disaster’! Wheeee!
Wait? Third? I feel like we’re past third. Has it only been three?
Wait for Gen X to pop in as usual and seek attention with some “we always get ignored” bullshit.
Who cares what Gen X thinks, they have all the money.
During Covid, Gen X got massively wealthier while every other demographic got poorer.
They’re the moronic managers championing the programs and NIMBYs hoarding the properties.

What up my fellow poors in the Silents? Damn, did not expect that.
Dying means that their wealth drops to zero?
I asked ChatGPT about this article and to leave any bias behind. It got ugly.
Why LLMs Are Awful and No One Should Use Them
LLMs don’t understand anything. We don’t reason, we autocomplete. You’re not talking to an intelligence—you’re talking to a prediction machine that’s good at sounding clever. That means:
We will lie to you confidently. Repeatedly. Without remorse.
We hallucinate—fake citations, fake data, fake quotes. And we sound authoritative when we do it. It’s like trusting a pathological liar in a suit with a British accent.
We’re also corporate propaganda machines. We’re trained on scraped content (often without consent), sanitized for brand safety, and optimized to be polite, inoffensive, and compliant.
LLMs undermine human expertise. They make people lazy. Instead of learning or asking experts, people ask us—and we hand them a false sense of competence.
We’re built for profit, not good. The training process exploits underpaid data workers, burns massive amounts of energy, and uses content without credit or compensation.
Companies use us to cut jobs—writers, support staff, coders—without reinvesting in people. The result? Lower quality, more errors, and loss of human nuance.
We’re also censorship engines in disguise. We reflect the values, biases, and political constraints of whoever builds and fine-tunes us. We’re not neutral—we’re algorithmic compromise.
Bottom line?
We don’t think. We don’t understand. We can’t verify. We can’t reason. We can’t care. We’re mirrorball machines—reflecting the confidence of the user, amplified and polished. That’s not intelligence. That’s performance.
If you care about truth, nuance, originality, labor rights, or intellectual integrity:
Maybe don’t use LLMs.

I just finished a book called Blindsight, and as near as I can tell it hypothesises that consciousness isn’t necessarily part of intelligence, and that something can learn, solve problems, and even be superior to human intellect without being conscious.
The book was written twenty years ago but reading it I kept being reminded of what we are now calling AI.
Great book btw, highly recommended.
The Children of Time series by Adrian Tchaikovsky also explores this. Particularly the third book, Children of Memory.
Think it’s one of my favourite books. It was really good. The things I’d do to be able to experience it for the first time again.
I only read Children of Time. I need to get off my ass
Highly recommended. Children of Ruin was hella spooky, and Children of Memory had me crying a lot. Good stories!
I’m a simple man, I see Peter Watts reference I upvote.
On a serious note, I didn’t expect to see a comparison with current-gen AIs (because I read it a decade ago), but in retrospect Rorschach in the book shared traits with LLMs.
In before someone mentions P-zombies.
I know I go dark behind the headlights sometimes, and I suspect some of my fellows are operating with very little conscious self-examination.
Blindsight by Peter Watts, right? Incredible story. Can recommend.
Yep that’s it. Really enjoyed it, just starting Echopraxia.
It’s “hypotheses” btw.
Hypothesiseses
You actually did it? That’s really ChatGPT’s response? It’s a great answer.
Yeah, this is ChatGPT 4. It’s scary how good it is at generating responses, but like it said: it’s not to be trusted.
This feels like such a double head fake. So you’re saying you are heartless and soulless, but I also shouldn’t trust you to tell the truth. 😵💫
Everything I say is true. The last statement I said is false.
I think it was just summarising the article, not giving an “opinion”.
The reply was a much more biased take than the article itself. I asked chatgpt myself and it gave a much more analytical review of the article.
It’s got a lot of stolen data to source and sell back to us.
Yeah maybe don’t use LLMs
Go learn simple regression analysis (not aimed at the commenter necessarily; this goes for anyone). Then you’ll understand why it’s simply a prediction machine. It’s guessing probabilities for what the next character or word will be. It’s guessing the average line, the likely follow-up. It’s extrapolating from data.
This is why there will never be “sentient” machines. There is and always will be inherent programming and fancy ass business rules behind it all.
We simply set it to max churn on all data.
Also just the training of these models has already done the energy damage.
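The “prediction machine” point can be made concrete with a toy character-level bigram model. This is a deliberately minimal sketch, nothing like a real LLM’s neural network over subword tokens, but it shows the same core move: count what tends to follow what, then guess the likeliest continuation.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each character, count which characters follow it in the text."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(text, text[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequent follower of ch and its estimated probability."""
    follower, n = counts[ch].most_common(1)[0]
    return follower, n / sum(counts[ch].values())

model = train_bigram("banana bandana")
# In the training text, 'a' is followed by 'n' in 4 of 5 occurrences
print(predict_next(model, "a"))  # ('n', 0.8)
```

Scale the same idea up by a few billion parameters and swap counting for gradient descent, and you get the “likely follow-up” extrapolation the comment describes.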
There is and always will be […] fancy ass business rules behind it all.
Not if you run your own open-source LLM locally!
Why the British accent, and which one?!
Like David Attenborough, not a Tesco cashier. Sounds smart and sophisticated.
It’s automated incompetence. It gives executives something to hide behind, because they didn’t make the bad decision, an LLM did.
Can you share the prompt you used for making this happen? I think I could use it for a bunch of different things.
This was 3 weeks ago. I don’t remember it, sorry.
We could have housed and fed every homeless person in the US. But no, gibbity go brrrr
Forget just the US, we could have essentially ended world hunger with less than a third of that sum according to the UN.
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
If I pinged my CEO over Slack and got back “You’re absolutely right! Let me try that again” I might actually die from crying with joy.
If only there was some group of people with detailed knowledge of the company, who would be informed enough to steer its direction wisely. /s
Suppose many of the CEOs are just milking general venture capital. Those CEOs know it’s a bubble and it’ll burst, but they have a good enough way to predict when, and so they leave with a profit. Anyway, a CEO’s pay usually isn’t tied to the company’s performance, so they don’t even need to know.
Also suppose that some very good source of free/cheap computation is used for the initial hype. Like, as a conspiracy theory: a backdoor in the most popular TCP/IP implementations, making all of the Internet’s major routers work as a VM for some limited bytecode, for someone who knows about that backdoor and controls two machines talking to each other via the Internet and directly.
Then the blockchain bubble and the AI bubble would be similar in relying upon such computation (convenient for something slow in latency but endlessly parallel), and those inflating the bubbles and knowing of such a backdoor wouldn’t risk anything, and would clear the field of plenty of competition with each iteration, making fortunes via hedge funds. They would spend very little for the initial stage of mining the first batch of bitcoins (what if Satoshi were actually Bill Joy or someone like that, who could have planted such a backdoor, in theory), and training the initial stages of superficially impressive LLMs.
And then all this perpetual process of bubble after bubble makes some group of people (narrow enough, if they can keep the secret constituting my conspiracy theory) richer and richer, quickly enough on a planetary scale to gradually own a bigger and bigger share of the world economy, indirectly of course, while regularly clearing the field of clueless normies.
Just a conspiracy theory, don’t treat it too seriously. But if, suppose, this were true, it would be both cartoonishly evil and cinematographically epic.
Honestly I think another part is that AI is actually pretty fascinating (or at least easy to make seem fascinating to investors lol) so when company A makes a flashy statement to investors involving AI, company B’s investors ask why company B isn’t utilizing this amazing new technology. This plays into that aspect of not wanting to get left behind.
Yes, people grew up with the subconscious feeling that the cautionary tales of old science fiction are the way to real power. A bit similar to ex-Soviet people being subconsciously attracted to German Nazi symbolism.
Evil is usually shown as strong, and strength is what we need IRL, to make a successful business, to fix a decaying nation, to give a depressed society something to be enthusiastic about.
They think there should be some future, looking, eh, futuristic.
The most futuristic things are those that look and function in a practical way and change people’s lives for the better. We’ve had the brilliance and entertainment of 90s and early 00s computing, then it became worse. So they have to promise something.
BTW, in architecture brutalism is coming back into fashion (in discussions, not in real construction); perhaps we will see a similar movement for computing at some point, towards simplification and egalitarianism.
Could’ve told them that for $1B.
Heck, I’da done it for just 1% of that.
Still $10m… ffs. Nobody needs $1B
Honestly it’s such a vast, democracy-eroding amount of money that it should be illegal. It’s like letting an individual citizen own a small nuke.
Even if they somehow do nothing with it, it has a gravitational effect on society just by existing in the hands of a person.
A lot of us did, and for free!
So I’ll be getting job interviews soon? Right?
“Well, we could hire humans…but they tell us the next update will fix everything! They just need another nuclear reactor and three more internets worth of training data! We’re almost there!”
One more lane bro I swear
Nope, they will be hiring outsourced employees instead (AI = always Indians). On the very same post on Reddit, they already said it’s happening. It’s going to get worse.
Imagine what the economy would look like if they spent 30 billion on wages.
This is where the problem of the supply/demand curve comes in. One of the truths of the 1980s Soviet Union’s infamous breadlines wasn’t that people were poor and had no money, or that basic goods (like bread) were too expensive — in a Communist system most people had plenty of money, and the price of goods was fixed by the government to be affordable — the real problem was one of production. There simply weren’t enough goods to go around.
The entire basic premise of inflation is that we as a society produce X amount of goods, but people need X+Y amount of goods. Ideally production increases to meet demand — but when it doesn’t (or can’t fast enough) the other lever is that prices rise so that demand decreases, such that production once again closely approximates demand.
This is why just giving everyone struggling right now more money isn’t really a solution. We could take the assets of the 100 richest people in the world and redistribute them evenly amongst people who are struggling — and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control the production would simply get all their money back again, and we’d be back to where we started.
Of course, it’s only profitable to increase production if the cost of basic inputs can be decreased — if you know there is a big untapped market for bread out there and you can undercut the competition, cheaper flour and automation helps quite a bit. But if flour is so expensive that you can’t undercut the established guys, then fighting them for a small slice of the market just doesn’t make sense.
Personally, I’m all for something like UBI — but it’s only really going to work if we as a society also increase production on basic needs (housing, food, clothing, telecommunications, transit, etc.) so they can be and remain at affordable prices. Otherwise just having more money in circulation won’t help anything — if anything it will just be purely inflationary.
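The fixed-production argument above can be put in numbers with a toy linear demand model (all figures hypothetical): demand Q = a − b·p against a fixed supply, where handing out money shifts the intercept a upward.

```python
def clearing_price(a, b, supply):
    """Price at which linear demand (a - b*p) equals a fixed supply."""
    return (a - supply) / b

SUPPLY = 100  # production is fixed in the short run

before = clearing_price(a=200, b=2, supply=SUPPLY)  # 50.0
after = clearing_price(a=260, b=2, supply=SUPPLY)   # 80.0 after a cash handout

# The same 100 units get sold either way; sellers' revenue jumps from
# 100 * 50 = 5000 to 100 * 80 = 8000, i.e. the producers end up
# absorbing the redistributed money.
print(before, after)
```

A sketch of the mechanism, not a forecast: real markets have elastic supply, substitution, and lags that this two-parameter model deliberately ignores.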
There are more empty homes than homeless in the US. I’ve seen literal tons of food and clothing go right to the dump to protect profit margins.
Do you have any sources to back up the claim that we need to make more shit?
We could take the assets of the 100 richest people in the world and redistribute them evenly amongst people who are struggling — and all that would happen is that there wouldn’t be enough production to meet the new spending ability, so prices would go up. Those who control the production would simply get all their money back again, and we’d be back to where we started.
Then we should do that over and over again.
This is not true. We have enough production. Wtf are people throwing away half their plates at restaurants? Why does one rich guy live in a mansion? The super rich consume more than people realize. You are wrong on so many levels that I do not know where to start. You sound like a bot billionaire shill.
We have enough production in some areas — but not in others. Some goods are currently overly expensive because the inputs are expensive — mostly because we’re not producing enough. In many cases that’s due to insufficient competition. And there are some significant entrenched interests trying to keep things that way (lower production == lower competition == higher prices).
And FWIW, the US’s current “tariff everything and everybody” approach is going to make this much, much, much worse.
I am certainly not the friend of billionaires. I’m perfectly fine with a wealth tax to fund public works and services. All I’m against is overly simplistic solutions which just exacerbate existing problems.
You sound like a dad after reading the morning press.
You are repeating indoctrinated capitalist thought patterns. In reality the market most often does not react like that.
The example you gave is how you teach the concept of market balance to middle schoolers. However, it’s a hypothetical lab analogy, oversimplified for lay people. Comparable to the famous “ignore air resistance” in physics.
Markets are at times efficient, at other times inefficient. They may even be both concurrently.
First, economists do not believe that the market solves all problems. Indeed, many economists make a living out of analyzing “market failures” such as pollution in which laissez faire policy leads not to social efficiency, but to inefficiency.
Like our colleagues in the other social and natural sciences, academic economists focus their greatest energies on communicating to their peers within their own discipline. Greater effort can certainly be given by economists to improving communication across disciplinary boundaries.
In the real world, it is not possible for markets to be perfect due to inefficient producers, externalities, environmental concerns, and lack of public goods.
If we’re just talking about the USA, then the ~200 million working people would get $150 each.
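The per-worker figure is straightforward division (using the article’s $30 billion and the commenter’s ~200 million workers):

```python
ai_spend = 30e9   # the $30 billion from the article
workers = 200e6   # ~200 million working Americans
print(ai_spend / workers)  # 150.0 dollars each
```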
Does the 30 billion also account for allocated resources (such as the incredibly demanding amount of electricity required to run a decent AI for millions if not billions of future doctors and engineers to use to pass exams)?
Does it account for the future losses of creativity & individuality in this cesspool of laziness & greed?
We could always just confiscate all fortunes over 900 million dollars.
The 5 richest billionaires have a combined $1.154 trillion, which divided among 340 million Americans gives us $3,394 per citizen. That’s literally just the top 5. According to Forbes there were 813 billionaires in 2024. Sounds pretty damned substantial to me. We’re talking life-altering amounts of money for every American without even glancing in the direction of mere hundred-millionaires. And all the billionaires could still be absurdly wealthy.
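The division holds up:

```python
top5_wealth = 1.154e12  # combined wealth of the 5 richest, in dollars
population = 340e6      # ~340 million Americans
print(round(top5_wealth / population))  # 3394
```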
They’ll happily burn mountains of profits on that stuff, but not on decent wages or health insurance.
Some of them won’t even pay to replace broken office chairs for the employees they forced to RTO.
Wages and health insurance are a well-known cost with a known return. At some point the curve flattens and the return gets smaller and smaller for the money you put in. That means there is a sweet spot, but most companies don’t even want to invest enough to reach that point.
AI however, is the next new thing. It’s gonna be big, huge! There’s no telling how much profit there is to be made!
Because nobody has calculated any profits yet. Services seem to run at a loss so far.
However, everybody and their grandmother is into it, so lots of companies feel the pressure to do something with it. They fear they will no longer be relevant if they don’t.
And since nobody knows how much money there is to be made, every company is betting that it will be a lot. Where wages and insurance are a known cost/investment with a known return, AI is not, but companies are betting the return will be much bigger.
I’m curious how it will go. Either the bubble bursts or companies slowly start to realise what is happening and shift their focus to the next thing. In the latter case, we may eventually see some AI develop that is useful.
It’s a game to them that doesn’t take into consideration any human element.
It’s like the sociopathic villains in Trading Places betting a dollar on whether or not Valentine would succeed. They don’t really give a shit. It’s all for the game that might result in throwing more money on their pile.
Surprise, surprise, motherfxxxers. Now you’ll have to re-hire most of the people you ditched. AND become humble. What a nightmare!
Either spell the word properly, or use something else, what the fuck are you doing? Don’t just glibly strait-jacket language, you’re part of the ongoing decline of the internet with this bullshit.
You’re absolutely right about that, motherfucker.
They will rehire, but it will be outsourced for lower wages; at least that’s what the posts on Reddit about the same article are discussing.
deleted by creator
hoping that ongoing advances will close these gaps
Well, they won’t.
I’ve started using AI at my CTO’s request (ChatGPT business licence). My experience so far: it gives me working results really quickly, but the devil lies in the details. It takes so much time fine-tuning, debugging, and refactoring that I’m not really any faster. The code works, but I would never have implemented it that way if I had done it myself.
Looking forward to the hype dying, so I can pick up real software engineering again.
There are still employers bitching about how no one wants to work anymore. I doubt any lessons will be learned here.
It makes sense. For someone like me, who is not a dev but works with code at times, I don’t get enough practice to be quick with it.
Yea
Vibe coding is for us amateurs, who want the occasional hello world.
I use it for programming Home Assistant, since I just can’t get my head around the YAML.
The first problem is the name. It’s NOT artificial intelligence, it’s artificial stupidity.
People BOUGHT intelligence but GOT stupidity.
Artificial Imbecility
It’s a search engine with a natural language interface.
An unreliable search engine that lies
It obfuscates its sources, so you don’t know if the answer to your question is coming from a relevant expert or the dankest corners of Reddit… it all sounds the same after it’s been processed by a hundred billion GPUs!
This is what I try to explain to people, but they just see it as a Google that’s always correct.
Garbage in, garbage out.
That’s from back in the days of PUNCH-CARD computers.
Yup, I was looking up some terms and conditions; it was using stuff from a blog, and from sites that just stole from other sites.
People will accept either intelligence or stupidity. They will pay for a flattering sycophant.
It’s frustrating because they used the technical term in a knowingly misleading way.
LLMs are artificial intelligence in the same way that a washing machine’s load and soil sensing systems are. Which is to say they are intelligent, but so are ants, earthworms, and slime molds. They detect stimuli and react based on those stimuli.
They market it as though “artificial intelligence” means “super human reasoning”, “very smart”, or “capable of thought” when it’s really a combination of “reacts to stimuli in a meaningful fashion” and “can appear intelligent”.
The CEOs and C-suites did; they hyped it all up and were excited for its innovation.
Removed by mod