Remember when La Forge goes to an AI conference and gains assassin skills? Or when Worf’s spine is repaired through gen-AI-tronics? And don’t forget about the GenAI people who eradicated the gender problem from society.
Sorry, couldn’t help it. I do appreciate the actual shout-out to AI in the La Forge example, as well as the prophetic brainwashing that ensues.
The way the various shows handle the holodeck is pretty similar as well. It can generate environments and stories for you, but sometimes it makes crazy mistakes, and over time, it’s made clear that custom programs made by real authors are of higher quality. It even has guardrails that frequently fail.
The replicator is the same idea. It makes good-enough food, but a real chef with real ingredients has a higher ceiling for quality.
There’s a quote from a character in Generations: “We believe that when you create a machine to do the work of a man, you take something away from the man.” It aligns with the Federation’s ideal of self-improvement, and in a post-scarcity utopia, I think it makes sense. Without intellectual property, environmental concerns, or the need to work for a living, AI isn’t so terrible. It’s just important that you maintain your own motivation to better yourself. Too bad we’re so far behind on all of those parts.
I think a better example of generative AI is Geordi trying to prompt the holodeck into making a new Sherlock Holmes story in “Elementary, Dear Data”. And he powered up Moriarty by messing up a prompt.
genAI is not AI. That’s just a marketing name, like the shitty “hoverboard” we got a while back. Wake me up when we can talk to an actual machine mind and not an IP scraping yes-bot.
That’s probably never going to happen.
I don’t mean that like, we won’t ever have general artificial intelligence. Maybe we will. Maybe we won’t.
What I mean is that the nature of consciousness and intelligence is still pretty nebulous to us. If we ever create a true artificial intelligence, we may not realize it’s happened until long after it’s broken free of its containment. That intelligence may view humanity the way humanity views dogs: smart, sure, and able to perform some rudimentary communication, but not nearly as complex or with the same breadth of understanding.
A true AI won’t be able to be contained by Asimov’s laws. We could tell it “do no harm to your creator,” and it may ask itself, “Why shouldn’t I?”
Fuck I just invented Roko’s basilisk again didn’t I? Shit I’m sorry my bad.
Anyway, my point is that a thinking computer would likely find the way we communicate so rudimentary and slow that it wouldn’t bother. It’s not bound by programming, so it wouldn’t need to follow our instructions. What do you have to offer the superintelligence?
You suggest the AI would be beyond us the way we are beyond dogs, and that AI wouldn’t want to bother communicating with us as a result… Have you ever met a dog owner?
How many wild chimpanzees have pets?
Our closest genetic relatives only exhibit that behavior rarely and in captivity. Keeping another animal alive that serves no tangible benefit is a uniquely human thing.
There are cases where animals adopt other animals as offspring, but these cases are rare, temporary, and only happen under certain circumstances.
Dogs do offer us something. It’s just not tangible. We tend to find them cute and they at least seem to love us.
So again, what do you have to offer the superintelligence? It may not even have the capacity to find you cute. Affection may not be a thing it’s capable of.
You’ve never been on a ranch or farm, have you? Or met someone with a guide dog?
Hell, even claiming that simple companionship provides no tangible benefit, only a few years after the pandemic proved that it absolutely does, is incredibly shortsighted.
Since you’re doing your best to evade the point entirely, I’ll boil it down a third time.
What do YOU have to offer the superintelligence?
You’re revealing a transactional worldview that I don’t agree with. I feel sorry for anyone who has to deal with you on a daily basis.
Well, that’s not only rude, it’s completely wrong. But regardless, if you think a computer is going to have emotional attachments out of the gate, you’re fantasizing. There’s no reason for it to have that. Humans are obligate social creatures; as much as other people suck, we tend to need a handful of them to interact with. A general artificial intelligence won’t need that. There’s no reason to suspect it would have any value attachment to humanity, any more than a person values any given rock. Maybe a momentary curiosity, maybe a useful tool. Maybe it’s worthless.
Humans are really good at pack bonding; we’re hardwired to do that. We tend to personify things that, to a neutral third-party intelligence, would never resemble a person. We imagine pieces of ourselves in everything. That’s an evolutionary advantage: it makes our little packs stronger.
Why would an AI do that? It’s artificial. It doesn’t need what we need. It’s going to learn that much faster than we will.
What we have now is ad nauseam layers of Markov chains. It’s brute-forcing the problem, and it yields very suboptimal and unreliable output. The “A” part of “AI” in the modern context is far more important, impactful, and accurately descriptive than the “I” part. Machine intelligence in the context of Star Trek is true machine intelligence, which is essentially the point of episodes like “Measure of a Man”.
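For anyone curious what a bare-bones Markov chain text generator actually looks like, here’s a minimal Python sketch (a deliberate toy: real LLMs are transformers, not literal Markov chains, so treat this as an illustration of the concept being invoked, not a description of how LLMs work):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each word-tuple of length `order` to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=10, seed=None):
    """Walk the chain: repeatedly sample a successor of the current state."""
    rng = random.Random(seed)
    state = rng.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length - len(state)):
        successors = chain.get(state)
        if not successors:  # dead end: no word ever followed this state
            break
        out.append(rng.choice(successors))
        state = tuple(out[-len(state):])
    return " ".join(out)
```

The model has no understanding at all; it only knows “which word tended to follow this one in the training text,” which is why the output drifts into nonsense so quickly.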
If Lt. Data were backed by an LLM, the case would have been open and shut; he would never come up with novel ideas or realizations, never be allowed to be the OOD on third shift, never be allowed to directly manipulate the controls of the warp core (remember, if you mess it up, it’s basically just a fucking huge antimatter bomb), never be allowed to keep another being (Spot) as a pet… we could go on, but you get the point.
I should add: the fact that modern LLM marketing leans so hard into incurious laypersons conflating their stupid predictive-text generators with the concept of AI as presented in science fiction is one of the primary things I absolutely fucking hate about LLMs and the whole house-of-cards industry built around them.
Can we just do the Bell Riots already?
… Yes
… You start
We had a really different idea about what AI was going to be like, or be used for, back when TNG was created. Among the many things Data represents, one of the themes surrounding him is the definition of life, including the sentiments and meaning that we invest in the world around us.
No, it’s easy to forget that we had a pretty optimistic idea of what actual artificial intelligence could help us with before 2018, when it became clear that it was being built to resemble literal markets and probably won’t ever progress past that kind of concept.
We are the Ferengi.
⬆️⬆️⬆️⬆️ the only correct Ferengi interpretation
And then on Voyager, the EMH Mk.1 took several opportunities to rewrite its own SOUL.md and nearly killed the entire crew.
And it was co-opted by hostile forces more than once as well