Building on Michael Sedlmair’s point

For the other side of Michael’s coin: where are we ‘just’ creating visual versions of biases already known from the cognitive literature, and where might we find a truly novel interaction between existing biases and vision? Is there something about the architecture of the visual system, such as its capacity to process massive amounts of displayed information at once, that qualitatively eliminates, or introduces, biases?