You must log in or # to comment.
That’s a pattern I see everywhere LLMs are being used: they spread.
- Scanning the input of the LLM for suspicious stuff? Use another LLM
- scanning the output of the LLM for compliance or nsfl content? Use another LLM
- if you’ve multiple specialized LLMs and need to decide which one to use? Another LLM makes the decision
- make sure tge activities of your agent aren’t malicious? Guess what: it’s another LLM
People who are much deeper into this tell me that the LLM checking for prompt injections isn’t itself vulnerable to prompt injections, but I remain unconvinced.
It’s
turtlesLLMs all the way down



