I’ve known for a while now that I’m not interested in A.I., but I think I can finally put it into words.
A.I. is supposed to be useful for things I’m not skilled at or knowledgeable about. But since I’m not skilled at or knowledgeable about the thing I’m having A.I. do for me, I have no way of knowing how well A.I. accomplished it for me.
If I want to know that, I have to become skilled at or knowledgeable about the thing myself. And at that point, why would I have A.I. do it, since I can trust myself to do it right?
I don’t have a problem delegating to or hiring people who are more skilled at or more knowledgeable about something than me, because I can hold them accountable if they are faking it.
With A.I., it’s on me if I’m duped. What’s the use in that?
This is the simple checklist for using LLMs:
- You are the expert
- LLM output is exclusively your input
All other use is irresponsible. Unless of course the knowledge within the output isn’t important.
The core problem with “AI” is the same as any technology: capitalism. It’s capitalism that created this imaginary grift. It’s capitalism that develops and hypes useless technology. It’s capitalism that ultimately uses every technology for violent control.
The problem isn’t so much that it’s useless to almost everybody not in on the grift… The bigger problem is that “AI” is very useful to people who want to commit genocide, imprisonment, surveillance, etc. sloppily, arbitrarily, and with the impunity that comes from holding an “AI” responsible instead of a person.
For the tasks that LLMs are actually decent at, like writing letters, the idea is that you save time even if you’re knowledgeable enough to do it yourself, and even if you still need to make some corrections (and you’re right that you shouldn’t use AI for a task you’re not knowledgeable about; those corrections are crucial). One of the big issues with LLMs is that they are being sold as a solution for lots of tasks that they’re horrible at.
Maybe I’m just too particular about things.
I cannot imagine an LLM would write the way I want it to be written.
I would argue that’s actually the last situation where you’d want to use an LLM. With numbers like that, nobody’s going to review each and every letter with the attention that things generated by an untrustworthy agent ought to get. This sounds to me like it calls for a template. Actually, it would be pretty disturbing to hear that letters like that aren’t being generated from a template driven by changes in the system.
If you are skilled at a task or knowledgeable in a field, you are better able to write a nuanced prompt that is more likely to give a reasonable result, and you can also judge that result appropriately. It becomes an AI-assisted task, rather than an AI-accomplished one. Then you trade the brainpower and time you would have spent doing that task for a bit of free time and a scorched planet to live on.
That said, once you realize how often a “good” prompt in a field you are knowledgeable in still yields shit results, it becomes pretty clear that the mediocre prompts you’ll write for tasks you don’t know how to do are probably going to give back slop (so your instinct is spot on). I think AI evangelist users are succumbing to the Gell-Mann Amnesia Effect.