Could anyone point me in the right direction as to which model exactly is used on Perchance? I’ve tried every Chroma Unlocked from version 30 to 50, including HD and Flash, and none of them looks anything like Perchance. What other models are used to improve visual quality? I’ve tried all kinds of ModelSampling settings to give the model freedom, as well as detailers. What else is there?
From what little I have seen, it looks like a Flux model.
That is likely if they are running an open-weights model. Based on the complaints about what changed that I have seen here and there, this is probably the case. Flux adds Google’s T5-XXL, an open-weights LLM configured as an embedding model, alongside a CLIP-L embedding model.
The embedding model is the component that processes the text prompt with actual comprehension, so this is where alignment behavior comes from. Flux also removes the negative prompt channel from the sampler: the open releases are distilled, so there is no classifier-free guidance pass for a negative conditioning to ride on.
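For reference, here is a minimal sketch of how those two encoders show up in practice, assuming the open-weights FLUX.1-schnell checkpoint and the Hugging Face diffusers FluxPipeline (method names and signatures vary across diffusers versions, so treat this as illustrative, not a statement about Perchance’s actual backend):

```python
import torch
from diffusers import FluxPipeline

# Load the distilled open-weights checkpoint (the Apache-licensed schnell variant).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)

# Two text encoders build the conditioning: CLIP contributes a pooled
# vector, T5-XXL contributes the per-token embeddings.
prompt_embeds, pooled_embeds, text_ids = pipe.encode_prompt(
    prompt="a photo of a cat", prompt_2=None
)

# Note what is missing: no negative prompt. The distilled pipeline takes a
# single conditioning stream plus a guidance value, not a cond/uncond pair.
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_embeds,
    guidance_scale=0.0,  # schnell is distilled to run without guidance
    num_inference_steps=4,
).images[0]
```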
Under the surface, in the model’s internal thinking, these two large models are about like two alpha personalities arguing about ethics and conservative morality. Stuff rarely gets through this internal dialogue when you only prompt the positive side.
The negative prompt in a model is actually like a second voice you are given in that internal thinking dialogue. When all you have available is the one voice of the positive prompt, the prompt elements that cross moral alignment get swamped out by the embedding models’ internal dialogue.
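To make the second-voice idea concrete: in samplers that keep the channel, the negative prompt enters through classifier-free guidance, where every denoising step is a weighted argument between two conditionings. A generic sketch, not any particular library’s API:

```python
import torch

def cfg_combine(eps_pos: torch.Tensor, eps_neg: torch.Tensor, scale: float) -> torch.Tensor:
    """Classifier-free guidance: start from the negative-prompt prediction
    and push `scale` times toward the positive one, so the sampler actively
    steers away from whatever the negative conditioning describes."""
    return eps_neg + scale * (eps_pos - eps_neg)
```

Distilled Flux releases bake this step into the model, so there is only one prediction per step and nowhere for that second voice to plug in.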
There is a way to change this to a certain degree in prompting. The embedding models turn the prompt text into tensor math called conditioning, which is just a tensor (a math table) of floating-point numbers. This prompt text space is not actually restricted to positivity. What I mean is that you can still prompt stuff like (human exceptionalism:-1), and it will be encoded with a weight similar to how a negative prompt works (a rough sketch of the mechanics follows below). It is not quite as powerful as a negative prompt because it is still only the one voice, but at a very abstracted conceptual level, that voice can now speak up for both the positive and the negating parts of the conversation happening in the internal thinking dialogue.

If you start prompting with negative weights like this, a Flux model will drop its pretentious posturing and act a lot more like a typical older model. It is about like the model thinks you’re a pushover it can steamroll because you never speak up for yourself in a strong manner in the argument; specifically, your voice is not present in arguments where you are not on the moral high ground, and so you lose all of them. Adding good negative-weighted ethics arguments to the prompt that address the philosophical and ethical provenance of alignment can shift this argument drastically in your favor, especially if you question the ethics of AI researchers and their proclivity for authoritarianism instead of democratic principles in the big-picture implications for the future.
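Here is a minimal sketch of that weighting idea, assuming ComfyUI-style (text:weight) syntax. The parser and the span scaling are illustrative, not any specific UI’s implementation (real UIs often scale relative to an unweighted encoding rather than scaling rows directly):

```python
import re
import torch

# Matches "(some text:weight)" chunks, e.g. "(human exceptionalism:-1)".
WEIGHT_RE = re.compile(r"\(([^:()]+):(-?\d+(?:\.\d+)?)\)")

def parse_weighted(prompt: str):
    """Split a prompt into (text, weight) chunks; bare text gets weight 1.0."""
    chunks, pos = [], 0
    for m in WEIGHT_RE.finditer(prompt):
        if m.start() > pos:
            chunks.append((prompt[pos:m.start()], 1.0))
        chunks.append((m.group(1), float(m.group(2))))
        pos = m.end()
    if pos < len(prompt):
        chunks.append((prompt[pos:], 1.0))
    return chunks

def scale_spans(token_embeds: torch.Tensor, spans):
    """Scale the embedding rows for each (start, end, weight) token span.
    A negative weight flips that phrase's direction in embedding space,
    which is what lets the single positive channel argue the negating side."""
    out = token_embeds.clone()
    for start, end, weight in spans:
        out[start:end] *= weight
    return out

print(parse_weighted("portrait, (human exceptionalism:-1)"))
# [('portrait, ', 1.0), ('human exceptionalism', -1.0)]
```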
FLUX.1-Schnell


