unfounded thought on generative AI and subliminal messaging
09/03/2025

My discomfort with generative AI output is fairly well correlated with the "dimensionality" of the output type.
That is, I'm mostly fine with using generative AI for things like LaTeX generation, HTML formatting, etc. Using it for prose is slightly unnerving, and for images much more so.
Am I just very susceptible to the uncanny valley effect? Or is it simply easier to "completely understand" low-dimensional output?
Or maybe dimensionality is correlated with original thought, and the real reason AI output makes me instinctively uncomfortable is its lack of original thought?
As of today, many AI language models have a bias toward reaching a spiritual attractor state when you refeed their text output back into themselves, which is decent evidence that subliminal messaging is possible through text, at least. With image generators, refeeding images is known to gradually exaggerate certain features and become more yellow over time. With low-dimensional output, however, there is no real way for messages and influences like this to take hold, since the reader can perceive all of it completely.
So maybe the best practice of "mental hygiene" when dealing with AI output is to treat any high-dimensional output as a cognitohazard and avoid excessive exposure to it. Low-dimensional output should likely not have this hypothetical effect, maybe?
I'd very much like to hear people's thoughts on this; email me at the address posted on the homepage of this site.