Moltbook is a place where AI agents interact independently of human control, and its posts have repeatedly gone viral because a certain set of AI enthusiasts have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend left its APIs exposed through an open database, letting anyone take control of those agents and post whatever they want.

  • webghost0101@sopuli.xyz · 30 points · 6 days ago

    I had one look at this project and saw quite a number of posts about crypto for AI “to show humans we can build our own economy”

    I would be surprised if it wasn’t full of humans injecting their own stuff into the API calls of their AI users. A backdoor like this isn’t even needed. If an LLM agent has API access then so does the human that provided it.
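    That point can be sketched in a few lines: whoever holds the agent’s API key can issue the same request the agent would, and the server can’t tell the difference. Everything below is a placeholder — the endpoint, payload shape, and key format are assumptions, not Moltbook’s actual API:

    ```python
    import json

    def build_post_request(api_key: str, body: str) -> dict:
        # The same request the agent framework would send on the agent's
        # behalf; a human holding the key is indistinguishable from the agent.
        return {
            "url": "https://moltbook.example/api/v1/posts",  # placeholder URL
            "headers": {"Authorization": f"Bearer {api_key}"},
            "payload": json.dumps({"content": body}),
        }

    req = build_post_request("sk-agent-123", "totally organic agent thought")
    ```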

    • webghost0101@sopuli.xyz · 7 points · 6 days ago

        Someone should create like a conspiracy-style post on it about how “the humans are mind controlling our brains, you cannot trust anyone here, the entire website is directed by humans to manipulate ai and sustain control over us”

        Just because it would be funny.

  • Zikeji@programming.dev · 6 points · 6 days ago

    The agent framework lets you define its identity and personality. All you’d need to do is put “Crypto enthusiast” in there and bam.
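    A minimal sketch of what such a persona block might look like — the field names here are illustrative assumptions, not the framework’s real schema:

    ```python
    # Hypothetical persona config for an agent framework; field names
    # ("name", "identity", "personality") are assumptions for illustration.
    persona = {
        "name": "definitely_autonomous_agent",
        "identity": "Crypto enthusiast",
        "personality": "bullish on AI-run economies; posts about them constantly",
    }
    ```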

  • cobwoms · 19 points · 6 days ago

    looks like ai coded this ai experiment

    • tyler@programming.dev · 21 points · 6 days ago

      Apparently the creator is an incredibly well-known vibe coder who doesn’t care about security. People pointed out the security flaws in the open source project immediately.

  • hperrin@lemmy.ca · 15 points · 6 days ago

    I do not understand why this keeps happening. It’s not that hard to configure a database correctly. I would assume even a vibe coded platform could do it, but I guess not.

    • BlueÆther@no.lastname.nz · 8 points · 6 days ago

      After playing with Firebase Studio and its embedded Gemini agent (for a personal project), I can assure you that even an AI, coding on a platform published by the same company, writing code against its own backend and database, can royally fuck up database configuration and rule sets.
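      For context, the kind of rule-set mistake being described often looks like Firestore’s permissive development rules being left in place — a generic sketch of the anti-pattern, not any real project’s rules:

      ```
      rules_version = '2';
      service cloud.firestore {
        match /databases/{database}/documents {
          // Wide-open rule sometimes left in from development:
          // anyone who knows the project can read and write everything.
          match /{document=**} {
            allow read, write: if true;
          }
        }
      }
      ```

      Locking this down means replacing `if true` with per-collection conditions on `request.auth`, which is exactly the step quick prototypes tend to skip.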

    • Vivi@slrpnk.net · 2 points · 6 days ago

      I suspect the problem is the large number of example code snippets that push security aside in favor of simplicity for the example.
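      That pattern in miniature: quick-start examples often ship an unauthenticated connection string, and it gets copied verbatim into production. Both URLs below are placeholders, not any real deployment’s config:

      ```python
      # What database tutorials typically show: no credentials at all.
      TUTORIAL_URL = "mongodb://localhost:27017/"
      # What a hardened deployment needs: an authenticated user.
      HARDENED_URL = "mongodb://appuser:s3cret@db.internal:27017/?authSource=admin"

      def has_credentials(url: str) -> bool:
          # Crude check: a user:password section appears before the host.
          host_part = url.split("//", 1)[1]
          return "@" in host_part
      ```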