KB5077181 was released about a month ago as part of the February Patch Tuesday rollout. When the update first arrived, users reported a wide range of problems, including boot loops, login errors, and installation issues.

Microsoft has now acknowledged another problem linked to the same update. Some affected users see the message “C:\ is not accessible – Access denied” when trying to open the system drive.

  • AeonFelis@lemmy.world · 18 points · 3 days ago

    You don’t need C:\. All your data should be in the 365 cloud anyway. Storing files locally in C:\ leads to antipatterns like not paying Microsoft for 365 access (a.k.a “Software Piracy”)

  • JensSpahnpasta@feddit.org · 203 points · 4 days ago

    There must be something really seriously wrong at Microsoft. I can understand that Windows patches are complex and that they might break some of those crazy things people are running on their machines. But how is a bug that is killing access to the C:\ drive able to get through testing? WTF are they doing?

    • Lost_My_Mind@lemmy.world · 169 points · 4 days ago

      It’s going to come out that there’s AI in the code. And the code testing was done by AI, who gave the buggy code the green light.

      • mybuttnolie@sopuli.xyz · 28 points · 4 days ago

        my boss loves AI and he uses it for everything. he made some stats graphs and summaries, and he was bragging how he got AI to make them errorless: he tells it to check for errors and makes it swear it’s accurate… while we were looking at a graph where the y column numbers were all fucked up

        • suicidaleggroll@lemmy.world · 11 points · 4 days ago

          Interestingly, AI is actually pretty good at making graphs, the trick is you don’t ask it to actually make the graph itself. Instead you have to ask it to write a python script to create a graph using matplotlib from whatever source file contains the data, then run that script. Same with math. Don’t ask it to do math directly, instead ask it to write a bash or python script to do some math, then run that. Still not perfect, but your success rate increases by about 1000%
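          A minimal sketch of the workflow described above (the data, column names, and output filename here are all made up for illustration):

```python
# Instead of asking the model to draw a chart, you ask it to emit a small
# matplotlib script like this one, then run the script yourself.
import matplotlib
matplotlib.use("Agg")  # render to a file, no display needed
import matplotlib.pyplot as plt

# Hypothetical data standing in for whatever the source file contains
months = ["Jan", "Feb", "Mar", "Apr"]
tickets = [42, 58, 37, 61]

fig, ax = plt.subplots()
ax.bar(months, tickets)
ax.set_xlabel("Month")
ax.set_ylabel("Tickets")
ax.set_title("Support tickets per month")
fig.savefig("tickets.png")
```

          Since the numbers pass through real plotting code rather than the model's token predictions, the y-axis can't end up "all fucked up" the way it can when the model renders the graph itself.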

          • jaybone@lemmy.zip · 6 points · 3 days ago

            Because of so much open source and stack overflow it was trained on.

            But who writes bash scripts to do math?

            • suicidaleggroll@lemmy.world · 2 points · 3 days ago

              But who writes bash scripts to do math?

              A full script? Nobody. But you can just run it interactively on the command line, which a lot of AI clients have access to. bc works great for basic math in the shell.

          • SaharaMaleikuhm@feddit.org · 3 points · 3 days ago

            That’s about 90% of what I use AI for right there: silly little bash and python scripts. A graph, some image compression, ffmpeg video shenanigans, the works.

        • UnspecificGravity@piefed.social · 1 point · 3 days ago

          It’s really, really bad at doing spreadsheet analysis. Even basic shit that I would give to an intern. At least an intern won’t generally just make shit up and pretend it’s not wrong even when I point it out, and if they do, I get a new intern.

      • palordrolap@fedia.io · 31 points · 4 days ago

        And then the LLM says something like “You’re absolutely right, there was an error in that code that is clear and obvious now it has been pointed out and despite the fact you gave the instruction to make no errors. Is there anything else I can help with?”

        … and they’ll be too blind to take that as the warning it is and continue to ask even more of the LLM.

    • MonkderVierte@lemmy.zip · 34 points · 4 days ago

      It’s Microslop. This is what’s wrong. Also, they fired too much of their testing staff in favor of (user-)testing rings.

    • yucandu@lemmy.world · 26 points · 4 days ago

      It’s not as bad as that time they permanently deleted user documents and photos.

      See they had this trick where if you didn’t have enough space on your drive to unpack an update, they’d just move your shit to OneDrive temporarily, then move it back when the update was done. Only they forgot to move it back, and lost it. Oops.

    • criss_cross@lemmy.world · 15 points · 4 days ago

      My company is starting to roll out having AI both put up PRs AND give code reviews.

      I would not be surprised to hear Microslop is doing the same thing and having horrible results.

      Amazing what happens when you try to turn your talent pool into lifeless casino monitors.

    • Rothe@piefed.social · 7 points · 3 days ago

      Vibecoding. Microslop has peddled AI so much that they have gotten addicted to their own supply.

    • evol@lemmy.today · 7 points · 4 days ago

      No one smart is going into Windows dev in 2026. It’s like working on IBM mainframes. The only people left are the middle-of-the-road new grads they hire and the boomers who are retiring.

  • Auth@lemmy.world · 12 points · 3 days ago

    A lot of people didn’t read the article. This was an issue with the Samsung Share app.

  • mkhopper@lemmy.world · 20 points · 3 days ago

    Ugh… I’m so tired of “microslop” and “AI slop”.

    I’m not defending Microsoft in any way, but they were releasing buggy updates long before the rise of AI.

    • Buddahriffic@lemmy.world · 18 points · 3 days ago

      You know what’s going on inside the large companies that are hoping to cash in on the AI thing? All workers are being pushed to use AI, and goals are set targeting x% of all code being AI-generated.

      And AI agents are deceptively bad at what they do. They are like the djinn: they will grant the word of your request but not the spirit. E.g. they love to use helper functions, but won’t necessarily reuse an existing helper function instead of writing a new copy each time one is needed.

      Here’s a test that will show that, with all the fancy advancements they’ve made, they are still just advanced text predictors: pick a task and have an AI start that task and then develop it over several prompts, test and debug it (debug via LLM still). Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

      An entity using intelligence would use the same approach to write the code as it does to analyze it. Not so for an LLM, which is just predicting tokens with a giant context window. There is no thought pattern behind it, even when it predicts a “thinking process” before it acts. It just fits your prompt to the best match out of all the public git repos it was trained on, from commit notes and diffs, bug reports and discussions, stack exchange exchanges, and the like, which I’d argue is all biased towards amateur and beginner programming rather than expert-level. Plus it includes other AI-generated code now.

      So yeah, MS did introduce bugs in the past, even some pretty big ones (that was my original reason for holding back on updates, at least until the enshittification really kicked in). But now they are pushing what is pretty much a subtle bug generator on the whole company, so it’s going to get worse. Admitting it has fundamental problems would pop the AI bubble, though, so instead they keep trying to fix it with bandaids in the hope that it runs out of problems before people decide to stop feeding it money (which still isn’t enough, but at least there is revenue).

      • ExperiencedWinter@lemmy.world · 5 points · 2 days ago

        Now ask the LLM to analyse the code it just generated. It will have a lot of notes.

        Not only will it have a lot of notes, every time you ask it to analyze the code it will find new notes. Real engineers are telling me this is a good code review tool, but it can’t even find the same issues reliably. I don’t understand how adding a bunch of non-deterministic tooling is supposed to make my code better.

        • Buddahriffic@lemmy.world · 1 point · 2 days ago

          Though on that note, I don’t think having an LLM review your code is useless, but if it’s code that you care about, read the response and think about it to see if you agree. Sometimes it has useful pointers, sometimes it is full of shit.

          • ExperiencedWinter@lemmy.world · 2 points · 2 days ago

            So when do I stop asking the LLM to take another look? If it finds a new issue on the second or third or fourth check am I supposed to just sit here and keep asking it to “pretty please take another look and don’t miss anything this time”?

            I’m not saying it’s a useless tool, it’s just not a replacement for a human code review at all.

            • Buddahriffic@lemmy.world · 1 point · 2 days ago

              Stop when you feel like it, just like with any other verification method. In software development you don’t really prove that there are no problems; it’s more of a “try to think of any problem you can and do your best to make sure it doesn’t have any of those problems” plus “just run it a lot and fix any problems that come up”.

              An LLM is just another approach to finding potential problems. And it will eventually say everything looks good, though not because everything is good but because that happens in its training data and eventually that will become the best correlated tokens (assuming it doesn’t get stuck flipping between two or more sides of a debated issue).

          • JcbAzPx@lemmy.world · 1 point · 2 days ago

            That sounds worse than useless. It would be better to fail utterly than make up shit that you have to waste time parsing through.

            • Buddahriffic@lemmy.world · 1 point · 2 days ago

              It helps in the sense that once you’ve looked at code enough times, you can stop really seeing it. So many times I’ve debugged issues where I looked many times at an error that is obvious in hindsight, but I just couldn’t see it before. And that’s in cases where I knew there was an issue somewhere in the code.

              Or for optimization advice, if you have a good idea of how efficiency works, it’s usually not difficult to filter the ideas it gives you into “worthwhile”, “worth investigating”, “probably won’t help anything”, and “will make things worse”.

              It’s like a brainstorming buddy. And just like with your own ideas, you need to evaluate them or at least remember to test to see if it actually does work better than what was there before.

      • SoleInvictus · 11 points · 3 days ago

        You’re spot on regarding how AI operates.

        AI is stupid story time!

        I recently helped a friend with a self-hosted VPN problem. He had been using a free trial of Gemini Pro to try to fix it himself but gave up after THREE HOURS. It never tried to help him diagnose the issue, but instead kept coming up with elaborate fixes with names that suggested they were known issues, like The MTU Traffic Jam, The Packet Collision Quandary, and, my favorite, The Alpine Ridge Controller Trap. Then it would run him through an equally elaborate “fix”. When that didn’t work, it would use the failure conditions to propose a new, very serious sounding pile of bullshit and the process would repeat.

        I fixed it in about fifteen minutes, most of that time spent undoing all the unnecessary static routing, port forwarding, and driver rollbacks it had him do. The solution? He had a typo in the port number in his peer config.

        I can’t deny that LLMs are full of useful knowledge. I read through its output and all of its suggestions absolutely would have quickly and efficiently fixed their accompanying issue, even the thunderbolt/pcie bridging issue, if the real problem had been any of them. They’re just garbage at applying that information.

        • Buddahriffic@lemmy.world · 2 points · 2 days ago

          Yeah, they don’t do analysis but can fool people because they can regurgitate someone else’s analysis from their training data.

          It could just be matching a pattern like “I have a network problem with <symptoms>. Your issue is <problem> and you need to <solution>,” where the problem and solution are related to each other but the problem isn’t related to the symptoms, because the correlation with “network problem” ends up being stronger than the correlation with the description of the symptoms.

          And that specific problem could likely be solved just by adding a description of that process to the training data. But there will be endless different versions of it that won’t be fixed by that bandaid.

    • Diurnambule@jlai.lu · 7 points · edited · 3 days ago

      I agree, but if microslop can be the downfall of microslop, I will jump on the bandwagon. I think they should add more AI. Have they tried live GenAI updates of the user’s system yet? Sounds like a money-making idea.

    • stoly@lemmy.world · 4 points · 3 days ago

      It’s because they got rid of testing and quality control. They are only doing minimal testing now in controlled environments while the world is messy.

  • marighost@piefed.social · 68 points · 4 days ago

    Microsoft believes the issue may be related to the Samsung Share application, although the exact cause has not yet been confirmed.

    30percentofcodewrittenbyai.jpeg

    • Cyberwolf@feddit.org · 17 points · edited · 3 days ago

      It’s hilarious that the issues people think Linux has, like the disk deleting itself, are exactly what happens on Windows lol.

  • rodneylives@lemmy.world · 25 points · 3 days ago

    There was a story going around back in September about a person whose wife used OneDrive on her phone. It had taken it upon itself to copy 25+ GB of data from the phone into OneDrive, despite her only having the free account tier, and to sync it to their Windows 11 PC. There it completely filled up the small SSD boot drive, leaving so little disk space that Windows could no longer boot. Here it is.

    • Auth@lemmy.world · 1 point · 3 days ago

      I doubt that story. I’m a Linux user at home, but at work I admin Windows and Linux systems. I can see his logic because he’s thinking how I would. But Windows doesn’t behave like that. On Linux you can fill a drive and get boot issues, but Windows leaves space so that even when the user drive is full, the system can still create the temp files it needs to operate. Whatever he did trying to get around the default behavior, he misconfigured something.

      • rodneylives@lemmy.world · 2 points · 2 days ago

        I dunno? It sounds very plausible, exactly the kind of thing that Windows would do. I posted about it to Metafilter some time back and no one there seemed to think it couldn’t happen.

        • Auth@lemmy.world · 1 point · 13 hours ago

        It sounds like user error to me. There are like 2 settings on OneDrive and they couldn’t even be bothered to configure it, yet he’s going through all this complicated troubleshooting.

          • rodneylives@lemmy.world · 1 point · 10 hours ago

            If you can’t log into Windows you can’t change its OneDrive settings! What’s more, the user had no idea what was causing the problem, be it OneDrive or something else, until he did that troubleshooting! And, just setting up a new phone shouldn’t make your computer unbootable for any reason! Geez, way to victim-blame there.