I don’t mean to come across as ungrateful, because I love perchance and its owner is awesome for providing this service.
But my GOD is it insufferable sometimes.
“What did you just say?” Knuckles whitening-the scent of bla bla bla suddenly turning rancid She says before turning into the dawn abyss
And it just refuses to stop talking in weird allegories. Like I don’t mean to vent but I just do not see any way it is better than the previous AI chat model at all. Hopefully there is a way to improve this in the future.
LOL WHAT IS IT WITH THIS PARTICULAR BOT AND “KNUCKLES”
SERIOUSLY

There is a better explanation for the behavior you are experiencing, and yes, it is one of, if not the biggest, hurdles the new model has yet to overcome: your log has grown long enough that the model is starting to make a word salad of its past inputs as it “inbreeds”.
What I mean by this is something I’ve explained before: for generators such as AI Chat and ACC, the input ends up being roughly 70% AI-made and only 30% handwritten (more like 95%/5% in AI RPG, which crashes faster), because the whole log is fed back in as the input for the next output. Of course, the shorter the log is, the less you’ll feel the effect of the model being insufferable, because the long instruction block is still “holding back” the manic behavior.
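To picture why that ratio drifts, here’s a minimal sketch in Python of how these generators assemble their input, assuming a simple role/text message format (hypothetical; not Perchance’s actual internals):

```python
# Minimal sketch of why long logs "inbreed": the entire conversation
# is fed back as input, and the AI-written share of it keeps growing.
# The message format here is hypothetical, not Perchance's internals.

def build_prompt(instructions: str, log: list[dict]) -> str:
    # The whole log is concatenated behind the instruction block
    # and sent as the input for every new reply.
    history = "\n".join(f"{m['role']}: {m['text']}" for m in log)
    return instructions + "\n\n" + history

def ai_share(log: list[dict]) -> float:
    # Fraction of the context (by characters) the model itself wrote.
    ai = sum(len(m["text"]) for m in log if m["role"] == "ai")
    total = sum(len(m["text"]) for m in log) or 1
    return ai / total

log = [
    {"role": "user", "text": "Short handwritten message."},
    {"role": "ai", "text": "A much longer AI reply, knuckles whitening... " * 5},
]
print(f"AI-written share of context: {ai_share(log):.0%}")
```

Since the model’s replies are usually far longer than your turns, that share creeps toward 70% and beyond, and the model ends up mostly imitating its own earlier output.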
I agree, this is something that has to be worked on from the development side; otherwise, generators such as AI Chat or Story Generator are rendered short-lived, since the whole point of them is to grow progressively, and as of today instability can kick in as early as 150-200kB, significantly lower than what this model was able to hold in the past. However, a temporary fix on our side of things is to just make a “partition” of your log/story. Meaning:
- Plan and start your run as usual.
- Save constantly, monitoring the size of the log.
- When you hit the 100kB mark, try to steer toward a point where you can “start over”: a spot from which you can keep moving without needing the prior context.
- Make a copy, delete everything prior to that desired state, load the save, and continue as if nothing happened.
That will keep the model “fresh” at the cost of losing “memory”, which you can work around by updating the bios or instructions; those have a much better chance of sticking now, under a clean slate.
It is not the best way to work around this, but it is better than wrestling with all the nonsense that the model will produce past the 250kB threshold.
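If you export the log as plain text, that partition step can even be scripted. Here’s a minimal sketch, assuming a plain-text export, a marker line you pick yourself, and a handwritten recap (all hypothetical; the real save format depends on the generator):

```python
# Minimal sketch of the "partition" trick on an exported plain-text log.
# Path, marker, and summary are placeholders you choose yourself.

from pathlib import Path

def partition_log(path: str, marker: str, summary: str) -> None:
    text = Path(path).read_text(encoding="utf-8")
    # Keep a full backup before cutting anything.
    Path(path + ".bak").write_text(text, encoding="utf-8")
    # Drop everything before your chosen "start over" point...
    idx = text.find(marker)
    if idx == -1:
        raise ValueError("marker not found in log")
    # ...and prepend a short handwritten recap so the clean slate
    # still knows the broad strokes of what happened before.
    Path(path).write_text(summary + "\n\n" + text[idx:], encoding="utf-8")

partition_log(
    "my_chat_log.txt",
    marker="=== Chapter 3 ===",
    summary="[Recap: the party escaped the city and owes the smuggler a favor.]",
)
```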
Hope that helps and… also hope that a future update makes the model more stable rather than less. One thing that was fixed, and that the dev deserves more credit for, is that the English has improved significantly compared with the first release, in terms of grammar, content and consistency. I know, past 250kB it’s all “allegories” or “crazy man ramblings”, but… it is good English! 😅
Hey, thank you for giving the AI context window an actual byte size (if I am communicating what I mean in the right terms). I think I have a single chat that’s exceeded 1MB in plaintext (maybe 2MB by now??) and it’s sad to see it all fall apart like a person with a geriatric mental condition.
I have tried everything to salvage it, but cutting it up into bits and then giving the AI summaries of the stuff I deleted from its brain seems like one way to do it. That’s kind of what you’re saying, right?
However, there’s also just something wrong with how the AI weights stuff, even in a fresh chat. The fact that I see the exact same turns of phrase in these posts, and that complaints share the same repetitions, points to something wrong with the training data.
Has anyone noticed how characters tend to get obsessed with being proud about things? Or that everything starts sending shivers up everything’s spines?
Correct, that’s what I implied; otherwise, past 1MB you’ll experience “groundhog day”, unable to escape the loop no matter what you do.
Now… let me tell you, buddy, you’ve just scratched the tip of the iceberg with the model’s new obsessions. Just to showcase a few:
- Knuckles turning white (a classic you quoted).
- The ambience smelling of ozone and petrichor (it always rains, btw).
- It always smells or tastes of regret and bad decisions.
- The bot or an NPC will always lean in to whisper something conspiratorially.
- Eyes gleam with mischief very often.
- Predatory amusement seems to be a normal mood no matter the context.
- Some dialogue constructions are “cursed”: if you let one slide, it will repeat ad nauseam:
- “Tell me, <text>”
- “Though I suppose <text>”
- Don’t even get me started on the “resonance” or “crystallization” rabbit hole…
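If you want to see how deep you already are, a quick scan of an exported log makes the obsessions obvious. A rough sketch (the file name and phrase list are just examples, extend as you spot more):

```python
# Count the model's pet phrases in an exported chat log.
# File name and phrase list are examples; add your own findings.

import re
from pathlib import Path

PET_PHRASES = [
    "knuckles", "ozone", "petrichor", "conspiratorially",
    "predatory amusement", "tell me,", "though i suppose",
    "resonan", "crystalli",  # catches resonance/resonating, crystallization...
]

text = Path("my_chat_log.txt").read_text(encoding="utf-8").lower()
for phrase in PET_PHRASES:
    hits = len(re.findall(re.escape(phrase), text))
    if hits:
        print(f"{phrase!r}: {hits} occurrence(s)")
```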
You are on the money with one thing: all of this is a product of the training data, and not even the data that comes pre-packed with DeepSeek (I still hold that this is the current model being used; if I’m wrong, I’ll gladly accept the failure of my prediction), but of the dataset being used to re-train the model to work for the dev’s purposes. For example, the “knuckles turning white” phrase appeared with the old Llama model too, but it was a one-in-a-hundred occurrence, as that model didn’t care for the construction and focused on a different set of obsessions instead.
This is a never-ending problem with all LLMs though: in all languages, some constructions are more common than others, and since in both AI Chat and ACC the model is constrained by the “make a story/roleplay” context, it produces those pseudo-catchphrases incredibly often. In the past we had to deal with “Let’s not get ahead of ourselves” or “We should tread carefully” showing up no matter the situation; now “knuckles turning white” and similar are the new catchphrases in town.
In an older post I warned about this: since DeepSeek, trying to be more “smart”, takes everything at face value, the “correct” answer for many situations tends to be one of the constructions cited above, and heavy-handed training will yield a model as dumb and stubborn as Llama was, but with a new set of obsessions plus an inability to move forward, which Llama could do despite being exasperating at times. There is progress with the new model, I won’t deny it, but the threshold where we enter “groundhog day” has been reduced from 1MB+ to barely 250-500kB, and I suspect it will keep shrinking if new training is done on top of the existing one, rendering the model pointless for AI Chat, AI RPG or ACC.
Then again, I could be wrong, and a future update will allow the context window to hold out further, like Llama, where 15MB+ was possible and manageable without much maintenance. Some degree of obsession in any LLM is impossible to avoid; what is important is that the model doesn’t turn it into a word salad that goes nowhere. That, I think, is one of the biggest challenges the development of ai-text-plugin faces.
Oh, one thing I’m noticing, which actually touches on the racial bias issue, is that characters get “flushed” all the time. Even if they have dark complexions, the AI describes them as blushing all over the place. Sometimes they will “flush a darker shade of brown,” which I’m not sure one can do (or can they?). And for no reason, of course, because it doesn’t have reasoning or knowledge.
Another classic is that a character has far too many free hands, because the AI doesn’t take into account what hands are, or what they’re doing, or what they’re for other than being contextually related to holding a thing and experiencing knuckle-related sensations.
Holy shit, yeah, I’ve gotten all of those in some variation. I actually have a hilarious session that was going sort of well for a while, using the Strict DM character in the default perchance AI Character chat. I wanted to play an Iain Banks/Culture-inspired sort of sci-fi adventure thing, just to test the tools.
The AI kept telling me my scavenging party were entering “crystal caves,” when nothing about crystal caves was anywhere on the agenda, and I began yelling at it, telling it to stop talking about crystal caves.
It then doubled down and began saying things like, “The Crystal Caves sense your anger and begin trembling (or, as you say, “resonating”) violently.” I had to manually remove crystals from ever being mentioned in the sessions, in /mem and /sum, just to get a crystal-free QA. Then it suddenly started telling me things like “the walls hear you scheming and begin to undulate violently,” thus adopting an insane obsession with empathetic, emotional terrain features; I made all the characters commit suicide. Any objection to me having them die was deleted and rewritten. It was the most meltdowny I got using this thing, way before I knew any of its limitations or anything.
Deleting anything the AI says more than once within a few pairs of replies helps a little bit. Bleaching /mems can sort of help too. I don’t have any hope that an AI that works this way can do anything more than what it’s doing. It’s not the way language works, but who knows. Ten years ago I was working on chatbots at megacorporations that couldn’t even answer a single question with any consistent coherence.
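If you’d rather not eyeball the repeats, that “delete anything said twice” pass can be roughed out in code too; a sketch flagging word n-grams that recur across the AI’s recent replies (the message list and n-gram size are arbitrary choices):

```python
# Flag word n-grams that appear in more than one recent AI reply,
# i.e. phrases worth deleting before they "stick".

from collections import Counter

def repeated_ngrams(replies: list[str], n: int = 4) -> list[str]:
    counts = Counter()
    for reply in replies:
        words = reply.lower().split()
        # Count each n-gram once per reply, so only phrases recurring
        # ACROSS replies are caught, not repetition within one reply.
        grams = {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)
    return [g for g, c in counts.items() if c > 1]

recent = [
    "Her knuckles whiten as she leans in conspiratorially.",
    "His knuckles whiten as he leans in, eyes gleaming with mischief.",
]
print(repeated_ngrams(recent, n=3))  # ['knuckles whiten as']
```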
Considering there aren’t a lot of good alternatives at the moment (Character.AI has that strict filter no one likes, Spicychat is mostly NSFW-only, Sekai is okay but it’s only an app at the moment and lacks a dedicated website, PolyBuzz is alright), the only advice I can give is to bear with it.
Oh, and you can also edit the AI’s responses to essentially groom them into acting the way you want. Editing the reminder message, and setting general writing instructions to “Custom” and then writing how you want the AI to write or act, helps as well. And if all else fails, well… I dunno.
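For reference, the kind of thing I drop into the “Custom” writing instructions (the wording is just an example, tweak to taste): “Write in plain, grounded prose. No purple metaphors. Never mention knuckles, ozone, petrichor, resonance, or shivers. Keep replies short and move the scene forward every turn.” It doesn’t make the obsessions disappear, but in my experience it delays them.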