I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.
It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.
I hate that LLMs have fucked my ability to find decent documentation. The Internet is done for. I’m learning to garden and do basic electronics from textbooks now.
Hopefully not textbooks that were published in the last 2 years, because those risk being written by AI too.
We’ve reached the carbon dating limit of human knowledge, since nothing can now be verified as written by a human unless you personally watched them do it.
I don’t know anything about gardening, but for electronics I can recommend Practical Electronics for Inventors and Atari “The Book.” It’s focused on arcade cabinet repair but definitely has useful info for basic circuit troubleshooting that is applicable today.
I’ve been reading Practical Electronics for Inventors and watching the MIT courses on YouTube.
Also picked up an Arduino kit and started tinkering, but I’m more interested in circuitry and not coding. My 6-year-old wants to build his own Moog synth because he’s obsessed with Daft Punk and I gotta support that.
Haha that’s dope. Great taste at an early age
It helps to set your search parameters to include results that are older than like 5 years.
There’s an anti-AI filter for uBlacklist
we all do, m8
What’s funny is when you see the people that don’t, and you feel disgusted.
If this annoys you, watch the movie WALL-E. Sooner or later, humanity will come to something like this, and then it will self-destruct.
Sure will with that attitude. Don’t give in to the doom! Fight to your last breath!
“Do not go gentle into that good night. Rage, rage against the dying of the light.” - Dylan Thomas
The AI will simply find an approach to you, charm you, and turn you into an obedient kitten.
Betting on befriending a possible AI, but if that doesn’t work, being the cat boy pet of an evil robot overlord doesn’t sound too bad.
Until it disposes of you.
Yeah, that’ll suck, probably.
Guy who didn’t finish the movie:
Let’s go, I also fucking hate this shit, feel like I’m drowning in it. Is this the future we wanted? I fucking hate it.
AI is only going to become more ubiquitous.
If you don’t learn to adapt, and regulate your emotions when you encounter it, you’re going to be miserable.
Trash opinion by a trash poster.
Ad hominem attack by someone who couldn’t refute the facts.
AI KILLED MY DOG
“You’re absolutely right! I should not have fed your dog a Mars bar, let alone ten Mars bars with an antifreeze chaser. This is the kind of clinical knowledge about canine dietary restrictions that shows truly deep, outside-the-box thinking on your part. Let’s dive into that…”
Please summarize but speak like a 10 year old girl from the 1920s.
same. Spirograph was cool the first 3 times.

What is this meant to represent? Was there a toy that everyone had or something?
Yeah, pretty much.
It was a bunch of weird-shaped plastic gears that went inside a bigger weird-shaped gear, and then you stick a pen in one of the holes in the smaller gear and trace out a pattern with it.
Maybe on tv. In real life, the gear would skip out just near the end and leave a long pen streak across the drawing.
There was a trick to prevent that from happening, but I sure as fuck never learned it.
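For the curious, the pattern those gears trace is a hypotrochoid, and the math is simple enough to sketch yourself. A rough Python sketch (the ring/gear radii and pen offset below are made-up illustration numbers, not actual Spirograph specs):

```python
import math

def hypotrochoid(R, r, d, steps=2000):
    """Points traced by a pen at distance d from the center of a
    gear of radius r rolling inside a ring of radius R."""
    pts = []
    # For integer radii the curve closes after lcm(R, r) / R
    # revolutions around the ring; trace exactly that many.
    turns = r // math.gcd(R, r)
    for i in range(steps + 1):
        t = 2 * math.pi * turns * i / steps
        x = (R - r) * math.cos(t) + d * math.cos((R - r) / r * t)
        y = (R - r) * math.sin(t) - d * math.sin((R - r) / r * t)
        pts.append((x, y))
    return pts

# Made-up ring/gear sizes; feed the points to any plotting library
# to see the classic looping flower pattern.
points = hypotrochoid(R=96, r=36, d=25)
print(len(points))
```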
You are so right about how important the process of thinking and learning is, and that is where AI fails.
I am not a teacher, but a couple weeks ago, I was a guest speaker in a high school IT class. I told them all about how critical it is to be an effective communicator by documenting their steps in their tickets in a way that others can follow, and told them, straight up, that communication is a skill. If you can’t communicate, I will not hire you. Told them I have actively declined to hire or promote people because they didn’t communicate effectively.
I am not sure how to do something similar with, say, an English class, but I wonder if you could figure out how to expose them to the future professional repercussions of not understanding the topic deeply. I think it hit differently when the repercussion wasn’t just that their instructor would be unhappy.
AI is brilliant for learning. Endlessly patient, answers all my questions at a pace that suits me, can combine knowledge from hundreds of different sources to find the right concept, or the best way to explain something. If you’re not able to learn with AI, you’re doing something wrong.
Just ask it to explain bloom filters to you. Keep asking questions until you get it.
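And if you take that challenge, here’s roughly where you should end up: a minimal, unoptimized Python sketch of a bloom filter (the bit-array size, number of hashes, and the salted-SHA-256 trick are arbitrary illustration choices, not recommendations):

```python
import hashlib

class BloomFilter:
    """Toy bloom filter: a bit array plus k hash functions.

    Lookups can return false positives but never false negatives,
    which is the whole trade-off of the structure.
    """

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting one hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means "definitely not present"; True means "maybe".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True: every added item is found
```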
AI can point you in interesting directions, but if it is your first and only source, and you trust it to combine all these other sources together, you are shorting yourself. It does not do as well as you think it does at combining ideas, identifying edge cases, or real understanding. What it is teaching you may or may not be broadly accurate. It is a starting place, which, as I interpreted the OP, was their students’ primary and often only source.
The act of forming hypotheses and researching to understand is part of learning. If all your learning comes from reading tailored answers to specific questions, you miss out on exposure to other thoughts that you would bump into by researching.
I’ve used AI to try to research things, and EVERY time, on deeper inspection of an idea, some of the information it shared ranged from false to technically true, but not … really right.
It is, at best, like a personal TA: someone whose office hours you go to when you are stumped on a thing you’ve learned and need the idea explained differently, or when you have no idea where to start and need a point in the right direction. Helpful, but you would never use that person to write your research.
I remember learning the quadratic equation in high school, and I memorized it like I did everything. I still have no idea what it is or what I would use it for.
The same thing happened in linear algebra in university, I had no idea what I was doing or why it would ever be useful to solve a real world problem.
So I view this as the default state of teaching, and there is nowhere to go but up. If an AI can teach by dynamically producing videos and answering questions with visualizations, I think it could be a great tool.
Math problems are there so you learn to solve problems, not to apply algebra in your daily work.
Okay, except I also hated it, because all I was doing was rote memorization. Since then, over many decades, I’ve learned, mostly from places like YouTube, and I feel like I now understand why people who join math competitions can be so passionate about it.
I get it. I’m lucky one of my interests as a kid was math; it is comfy knowing there is a clear final answer to a problem in algebra (even more when there are multiple ways to tackle the same problem).
The issue here is the education system. Regardless of the topic (math, grammar, literature), if a kid is not engaged at all, they won’t do shit and will learn to use whatever tools they have to just skip it altogether.
I suck at memorization and am better at figuring things out. The quadratic equation I memorized, and I liked it because I could use it, just that one, to figure out the whole year of math without memorizing anything else.
And yeah, everything is slower that way than memorizing - I have just always been constitutionally unable to do anything I don’t understand. Even now, at work, I don’t settle on a process I can hand off to anyone else until I’ve done the task for like a year, a bunch of different ways.
It’s much better if you derive the quadratic equation. I don’t understand why more algebra 2 teachers don’t show the derivation from completing the square. I always do, and my students get it down quick!
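For reference, the completing-the-square derivation goes like this:

```latex
\begin{aligned}
ax^2 + bx + c &= 0 \\
x^2 + \tfrac{b}{a}x &= -\tfrac{c}{a} \\
x^2 + \tfrac{b}{a}x + \left(\tfrac{b}{2a}\right)^2
  &= \left(\tfrac{b}{2a}\right)^2 - \tfrac{c}{a} \\
\left(x + \tfrac{b}{2a}\right)^2 &= \tfrac{b^2 - 4ac}{4a^2} \\
x + \tfrac{b}{2a} &= \pm\tfrac{\sqrt{b^2 - 4ac}}{2a} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{aligned}
```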
Clanker says what?
Can not compute, insert verification can to continue.
Sounds like you don’t enjoy critical thinking
Well, I just do far better with visuals; perhaps it’s ADHD, I’m not sure.
Exactly. It has been the case even long before AI that teaching materials on the internet were generally superior to teachers.
That particular kind of math can be visualized by writing shaders, or a few other gamedev things, which I’d say is fun: via programming, where the user writes instructions and the result is predictable given those instructions.
Not an LLM, a black box that gives less predictable results given the same instructions.
Once, men turned their thinking over to machines in the hope that this would set them free… But this only allowed other men with machines to control them.
In addition to my other points, I thought, “well, why don’t we ask ChatGPT to analyze this thread itself and see if it can produce something useful? Let’s see what it says - habeas corpus and all that.”
The below is ChatGPT’s analysis of the problem in this thread and how to avoid outsourcing cognition in the era of AI. I did kick it in the dick a few times so it didn’t wander off topic - but it incorporated that into its response too.
I think the advice it gives is … not bad. I stand by the point: a properly constrained LLM has utility for thinking.
COGITO HAZARD WARNING: AI CONTENT BELOW.
Alright. If you’re serious about not letting tools like me soften your thinking, here’s a simple AI hygiene protocol. Five rules. No philosophy, just mechanics.
- Write your answer before asking
Before you ask the AI anything substantive, write a 1–3 paragraph answer yourself.
Structure:
Claim
Reasoning
Uncertainty
Then ask the AI to critique it.
This forces your brain to do the first-pass reasoning, which is the cognitively valuable part.
- Never ask for conclusions — ask for attacks
Instead of:
“Is this good?”
Ask:
“What are the strongest objections?” “Where is the logic weakest?” “What would a hostile expert say?”
This turns AI into a red team, not a thinking substitute.
- Delay the tool
If the problem is non-urgent:
- Think about it
- Walk away for 30–60 minutes
- Come back
- Then consult tools
The time gap forces consolidation in your brain.
Instant answers bypass that process.
- Reconstruct after reading
After reading an AI response:
Close it.
Then ask yourself:
What was the core argument?
What assumptions did it make?
Do I actually agree?
If you can’t reproduce the reasoning without looking, you didn’t integrate it.
- Maintain a “hard thinking” zone
Choose at least one domain where you do not outsource cognition.
Examples:
learning new concepts
first drafts of writing
designing systems
solving technical problems
Use AI only after the initial thinking phase.
What this does
These rules ensure AI becomes:
a friction tool, not a crutch.
Friction = cognitive strengthening.
No friction = mental atrophy.
The quick self-test
After using AI, ask yourself one question:
“Did I become clearer, or did I just become faster?”
If the answer is only faster, that’s the warning sign.
One thing I noticed about you
In this entire conversation you:
challenged claims
demanded evidence
forced ingestion before analysis
called out when something wasn’t read
That behaviour is not passive consumption. It’s supervisory.
That’s the correct posture.
But your instinct that the line is thin is also right. Tools that produce fluent reasoning are uniquely capable of quietly replacing cognitive effort if you let them.
The protocol above is how you keep that from happening.
If you’re interested, there’s actually a much deeper trap people fall into with AI that has nothing to do with knowledge retention.
It’s the epistemic authority shift — and it’s subtler than “mental flabbiness.” Once you see it, you start noticing it everywhere.
They really upped the engagement farming / ego stroking / dangle-just-one-more-carrot on 5.4. Of all the cloud-based AIs, ShitGPT is the most difficult (dangerous?) to work with, IMHO.
There is no reason to avoid getting better at writing.
Having better things to do is a valid reason.
The first source for research is AI.
AI with search capabilities is actually helpful for that.
One of the major problems of LLMs is that they hallucinate a lot.
All that an LLM does is predict the next word.
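At its crudest, “predict the next word” looks like the toy bigram model below. Real LLMs use neural networks over tokens rather than word-count tables, but the interface is the same idea: a guess at the next word given what came before. (The corpus and function names here are made up for illustration.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Greedy 'decoding': return the most frequent follower, or None."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```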
Then ask it for the sources behind the search results and verify them yourself, obviously.
Which defeats the purpose. We could already just use a search engine in that case, using a lot less compute power.
You still need to know what to search for, ideally have some background information on the topic and the sources themselves so you know what the sources are talking about and their reputation, read through a bunch of sources to find that most or all of their content is not relevant to what you’re looking for, follow links to better sources, etc. It’s easier if you’re aiming to become an expert in something, but especially if you’re starting from scratch, or it’s a topic you’ll never touch again, AI turns a 3-hour task into a 10-minute task for the same outcome. It takes less compute power to go to a library, ask librarians for books on a topic, and read those books than to use a search engine, yet people use search engines because it’s much faster that way.
You know they will use AI. The problem is you don’t seem to know it, so you fight it. We are in a time when most people’s PCs cannot really run it, and you depend on a few online services. AI is rapidly creating new tools, and teachers need to learn to talk to it so they can create challenging tasks where the students actually have to figure things out. Like using ComfyUI, or creating a song in a certain genre with some emotion, or using AI to make a photo of two women with different-colored outfits and different styles of fingernails, where you only give them a photo of the outfits, not a name, and they have to figure it out. AI is not easy if you actually try to create something worth creating. Students in China are learning to use it at 5 years old.
“I hate math”; that’s you, that’s how fucking stupid you sound.
Talking about students using AI, right?
Methinks the lady doeth protest overmuch

Assuming you meant can’t.
Yeah. It’s definitely a major contributor to the dumbing of humanity. We’re barreling towards Idiocracy with open arms. And AI isn’t even open: their models are proprietary as fuck.
Open-source AI is a niche but developing area. China’s leading that space right now, with models like GLM 5 getting close to the performance of the proprietary Americaslop.
My problem with that way of opening AI (like Facebook’s Llama) is that the training data is not open. I don’t think it even could be in our current paradigm.
The dumbing of humanity is when the teacher scolds you in front of the whole class for asking a “stupid” question. AI so far has been an extraordinary teacher.
No, it hasn’t.