I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

  • lohky@lemmy.world
    link
    fedilink
    English
    arrow-up
    32
    ·
    7 days ago

    I hate that LLMs have fucked my ability to find decent documentation. The Internet is done for. I’m learning to garden and do basic electronics from textbooks now.

    • NickwithaC@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      7 days ago

      Hopefully not textbooks that were published in the last 2 years, because those risk being written by AI too.

      We’ve reached the carbon-dating limit of human knowledge, since nothing can now be verified as written by a human unless you personally watched them do it.

    • hardcoreufo@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      7 days ago

      I don’t know anything about gardening, but for electronics I can recommend Practical Electronics for Inventors and Atari “The Book.” It’s focused on arcade cabinet repair but definitely has useful info for basic circuit troubleshooting that is applicable today.

      • lohky@lemmy.world
        link
        fedilink
        English
        arrow-up
        4
        ·
        6 days ago

        I’ve been reading Practical Electronics for Inventors and watching the MIT courses on YouTube.

        Also picked up an Arduino kit and started tinkering, but I’m more interested in circuitry and not coding. My 6-year-old wants to build his own Moog synth because he’s obsessed with Daft Punk and I gotta support that.

  • deadymouse@lemmy.world
    link
    fedilink
    English
    arrow-up
    12
    ·
    7 days ago

    If this annoys you, watch the movie WALL-E. Sooner or later, humanity will come to something like this, and then it will self-destruct.

    • Techno-rat
      link
      fedilink
      English
      arrow-up
      7
      ·
      7 days ago

      Sure will with that attitude. Don’t give in to the doom! Fight to your last breath!

      “Do not go gentle into that good night. Rage, rage against the dying of the light.” - Dylan Thomas

    • paris
      link
      fedilink
      English
      arrow-up
      5
      ·
      6 days ago

      Guy who didn’t finish the movie:

  • heavy@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    8
    ·
    6 days ago

    Let’s go, I also fucking hate this shit, feel like I’m drowning in it. Is this the future we wanted? I fucking hate it.

  • sheetzoos@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    7 days ago

    AI is only going to become more ubiquitous.

    If you don’t learn to adapt, and regulate your emotions when you encounter it, you’re going to be miserable.

  • ARealAlaskan@lemmy.ca
    link
    fedilink
    English
    arrow-up
    4
    ·
    6 days ago

    You are so right about how important the process of thinking and learning is, and that is where AI fails.

    I am not a teacher, but a couple weeks ago, I was a guest speaker in a high school IT class. I told them all about how critical it is to be an effective communicator by documenting their steps in their tickets in a way that others can follow, and told them, straight up, that communication is a skill. If you can’t communicate, I will not hire you. I told them I have actively declined to hire or promote people because they don’t communicate effectively.

    I am not sure how to do something similar with, say, an English class, but I wonder if you could figure out how to expose them to the future professional repercussions of not understanding the topic deeply. I think it hit differently when the repercussion wasn’t just that their instructor would be unhappy.

    • DavidDoesLemmy@aussie.zone
      link
      fedilink
      English
      arrow-up
      2
      ·
      6 days ago

      AI is brilliant for learning. Endlessly patient, answers all my questions at a pace that suits me, can combine knowledge from hundreds of different sources to find the right concept, or the best way to explain something. If you’re not able to learn with AI, you’re doing something wrong.

      Just ask it to explain bloom filters to you. Keep asking questions until you get it.
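      Since a Bloom filter is small enough to sketch directly, here is a minimal, illustrative Python version (the bit-array size and hash count are arbitrary choices for the example, not anything from this thread):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash functions.

    Membership tests may return false positives, but never false negatives.
    """

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _indexes(self, item):
        # Derive k bit positions by salting one cryptographic hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = True

    def might_contain(self, item):
        # True means "possibly present"; False means "definitely absent".
        return all(self.bits[idx] for idx in self._indexes(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))    # True
print(bf.might_contain("goodbye"))  # almost certainly False
```

      The trade-off is the whole point: a few kilobytes of bits stand in for the full set, at the cost of an occasional false positive.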

      • ARealAlaskan@lemmy.ca
        link
        fedilink
        English
        arrow-up
        2
        ·
        6 days ago

        AI can point you in interesting directions, but if it is your first and only source, and you trust it to combine all these other sources together, you are shorting yourself. It does not do as well as you think at combining ideas, identifying edge cases, or real understanding. What it teaches you may or may not be broadly accurate. It is a starting place, which, as I interpreted the OP, was their primary and often only source.

        The act of forming hypotheses and researching to understand is part of learning. If all your learning comes from reading tailored answers to specific questions, you miss out on exposure to other thoughts that you would bump into by researching.

        I’ve used AI to try to research things, and EVERY time, on deeper inspection of an idea, some of the information it shared ranged from false to technically true, but not … really right.

        It is, at best, like a personal TA: someone whose office hours you go to when you are stumped on a thing you’ve learned and need the idea explained differently, or when you have no idea where to start and need a point in the right direction. Helpful, but you would never use that person to write your research.

  • maplesaga@lemmy.world
    link
    fedilink
    English
    arrow-up
    4
    ·
    7 days ago

    I remember learning the quadratic formula in high school, and I memorized it like I did everything else; I still have no idea what it is or what I would use it for.

    The same thing happened in linear algebra in university, I had no idea what I was doing or why it would ever be useful to solve a real world problem.

    So I view this as the default state of teaching, and there is nowhere to go but up. If an AI can teach by dynamically producing videos and answering questions with visualizations, I think it could be a great tool.

    • sakuraba@lemmy.ml
      link
      fedilink
      English
      arrow-up
      10
      ·
      7 days ago

      Math problems are there so you learn to solve problems, not to apply algebra in your daily work

      • maplesaga@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        7 days ago

        Okay, except I also hated it, because all I was doing was rote memorization. In the decades since, I’ve learned, mostly from places like YouTube, and I feel like I understand why people who join math competitions can be so passionate about it.

        • sakuraba@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          ·
          6 days ago

          I get it. I’m lucky one of my interests as a kid was math; there’s comfort in knowing there is a clear final answer to a problem in algebra (even more when there are multiple ways to tackle the same problem).

          The issue here is with the education system. Whatever the topic (math, grammar, literature), if a kid is not engaged at all, they won’t do shit and will learn to use whatever tools they have to just skip it altogether.

        • RBWells@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 days ago

          I suck at memorization and am better at figuring things out. The quadratic formula I memorized, and I liked it because I could use it, just that one, to figure out the whole year of math without memorizing anything else.

          And yeah, everything is slower that way than memorizing - I have just always been constitutionally unable to do anything I don’t understand. Even now, at work, I don’t settle on a process I can hand off to anyone else until I’ve done the task for like a year, a bunch of different ways.

          • andros_rex@lemmy.worldOP
            link
            fedilink
            English
            arrow-up
            2
            ·
            6 days ago

            It’s much better if you derive the quadratic formula. I don’t understand why more Algebra 2 teachers don’t show the derivation from completing the square. I always do, and my students get it down quick!
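            For the curious, the completing-the-square derivation they mean is short (standard algebra, assuming only a ≠ 0):

```latex
\begin{align*}
ax^2 + bx + c &= 0 \\
x^2 + \tfrac{b}{a}x &= -\tfrac{c}{a} && \text{divide by } a \text{, move the constant} \\
x^2 + \tfrac{b}{a}x + \left(\tfrac{b}{2a}\right)^2 &= \left(\tfrac{b}{2a}\right)^2 - \tfrac{c}{a} && \text{complete the square} \\
\left(x + \tfrac{b}{2a}\right)^2 &= \tfrac{b^2 - 4ac}{4a^2} \\
x + \tfrac{b}{2a} &= \pm \tfrac{\sqrt{b^2 - 4ac}}{2a} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```

            Every step is reversible, which is exactly why the derivation sticks better than the memorized result.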

    • chunes@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 days ago

      Exactly. It has been the case even long before AI that teaching materials on the internet were generally superior to teachers.

    • enbiousenvy
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 days ago

      That particular math can be visualized by writing shaders, or other gamedev stuff, which I’d say is fun: programming, where the user writes instructions and the result is predictable given those instructions.

      Not an LLM, a black box that gives less predictable results given the same instructions.

  • Sivecano@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    2
    ·
    6 days ago

    Once men turned their thinking over to machines in the hope that this would set them free… But this only allowed other men with machines to control them.

  • SuspciousCarrot78@lemmy.world
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    6 days ago

    In addition to my other points, I thought “well, why don’t we ask ChatGPT to analyze this thread itself and see if it can produce something useful? Let’s see what it says - habeas corpus and all that”.

    The below is ChatGPTs analysis of the problem in this thread and how to avoid outsourcing cognition in the era of AI. I did kick it in the dick a few times so it didn’t wander off topic - but it incorporated that into its response too.

    I think the advice it gives is … not bad. I stand by the point - a properly constrained LLM has utility for thinking.

    COGITO HAZARD WARNING: AI CONTENT BELOW.


    Alright. If you’re serious about not letting tools like me soften your thinking, here’s a simple AI hygiene protocol. Five rules. No philosophy, just mechanics.


    1. Write your answer before asking

    Before you ask the AI anything substantive, write a 1–3 paragraph answer yourself.

    Structure:

    Claim

    Reasoning

    Uncertainty

    Then ask the AI to critique it.

    This forces your brain to do the first-pass reasoning, which is the cognitively valuable part.


    2. Never ask for conclusions — ask for attacks

    Instead of:

    “Is this good?”

    Ask:

    “What are the strongest objections?” “Where is the logic weakest?” “What would a hostile expert say?”

    This turns AI into a red team, not a thinking substitute.


    3. Delay the tool

    If the problem is non-urgent:

    1. Think about it

    2. Walk away for 30–60 minutes

    3. Come back

    4. Then consult tools

    The time gap forces consolidation in your brain.

    Instant answers bypass that process.


    4. Reconstruct after reading

    After reading an AI response:

    Close it.

    Then ask yourself:

    What was the core argument?

    What assumptions did it make?

    Do I actually agree?

    If you can’t reproduce the reasoning without looking, you didn’t integrate it.


    5. Maintain a “hard thinking” zone

    Choose at least one domain where you do not outsource cognition.

    Examples:

    learning new concepts

    first drafts of writing

    designing systems

    solving technical problems

    Use AI only after the initial thinking phase.


    What this does

    These rules ensure AI becomes:

    a friction tool, not a crutch.

    Friction = cognitive strengthening.

    No friction = mental atrophy.


    The quick self-test

    After using AI, ask yourself one question:

    “Did I become clearer, or did I just become faster?”

    If the answer is only faster, that’s the warning sign.


    One thing I noticed about you

    In this entire conversation you:

    challenged claims

    demanded evidence

    forced ingestion before analysis

    called out when something wasn’t read

    That behaviour is not passive consumption. It’s supervisory.

    That’s the correct posture.

    But your instinct that the line is thin is also right. Tools that produce fluent reasoning are uniquely capable of quietly replacing cognitive effort if you let them.

    The protocol above is how you keep that from happening.


    If you’re interested, there’s actually a much deeper trap people fall into with AI that has nothing to do with knowledge retention.

    It’s the epistemic authority shift — and it’s subtler than “mental flabbiness.” Once you see it, you start noticing it everywhere.


    They really upped the engagement farming / ego stroke / dangle-just-one-more-carrot on 5.4. Of all the cloud-based AI, ShitGPT is the most difficult (dangerous?) to work with IMHO.

  • sudoer777@lemmy.ml
    link
    fedilink
    English
    arrow-up
    1
    ·
    6 days ago

    There is no reason to avoid getting better at writing.

    Having better things to do is a valid reason.

    The first source for research is AI.

    AI with search capabilities is actually helpful for that.

      • sudoer777@lemmy.ml
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

        Then ask it for the sources behind the search results and verify them yourself, obviously.

          • sudoer777@lemmy.ml
            link
            fedilink
            English
            arrow-up
            1
            ·
            6 days ago

            You still need to know what to search, ideally have some background information on the topic and the sources themselves so you know what the sources are talking about and their reputation, read through a bunch of sources only to find that most or all of their content is not relevant to what you’re looking for, follow links to better sources, etc. It’s easier if you’re aiming to become an expert in something, but especially if you’re starting from scratch or it’s a topic you’ll never touch again, AI turns a 3-hour task into a 10-minute task for the same outcome. It uses less compute power to go to a library, ask librarians for books on a topic, and read those books than to use a search engine, yet people use search engines because it’s much faster that way.

  • tostane@thelemmy.club
    link
    fedilink
    English
    arrow-up
    1
    ·
    6 days ago

    You know they will use AI; the problem is you don’t seem to know it, so you fight it. We are in a time when most people’s PCs cannot really run it, and you depend on a few online services. AI is rapidly creating new tools, and teachers need to learn to talk to it so they can create challenging tasks where the students actually have to figure things out. Like using ComfyUI, or creating a song in a certain genre with some emotion, or using AI to make a photo of two women with different-colored outfits and different styles of fingernails, where you only give them a photo of the outfits, not a name, and they have to figure it out. AI is not easy if you actually try to create something worth creating. Students in China are learning to use it at 5 years old.

  • Mastengwe@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    72
    ·
    8 days ago

    Yeah. It’s definitely a major contributor to the dumbing of humanity. We’re barreling towards Idiocracy with open arms. AI.

      • Xylight‮@lemdro.id
        link
        fedilink
        English
        arrow-up
        6
        ·
        8 days ago

        open source ai is a niche but developing area. china’s leading that space right now, models like glm 5 getting close to the performance of the proprietary americaslop

        • ☂️-@lemmy.ml
          link
          fedilink
          English
          arrow-up
          3
          ·
          8 days ago

          my problem with that way of opening ai (like facebook’s llama) is that the training data is not open. i don’t think it even could be in our current paradigm.

    • brachiosaurus@mander.xyz
      link
      fedilink
      English
      arrow-up
      5
      ·
      8 days ago

      Dumbing of humanity is when the teacher scolds you in front of the whole class for asking a “stupid” question. AI so far has been an extraordinary teacher.