If you’ve used AI to create an image, one thing is immediately noticeable: AI programs do not produce art in the same way that humans create it.
An artist drawing a flower might begin at the base, draw the stem, and then draw the petals and centers of the flower. Or, drawing a human figure, the artist might start at the head and work down to the feet.
In contrast, AI produces the entire image at once, beginning with what looks like a blur of random noise, which it refines and sharpens, step by step, until a clear picture emerges.
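For readers curious what that step-by-step refinement looks like, here is a toy sketch in Python. It is not a real image model, only an illustration of the principle: the whole "picture" (here, just a short list of numbers) sharpens everywhere at once, with each pass removing a fraction of the remaining noise. The function name and the numbers are made up for the example.

```python
import random

def refine(noise, target, steps=10):
    """Nudge a noisy 'image' toward a target, one small step at a time."""
    image = noise[:]
    for step in range(steps):
        # Each pass removes a share of the remaining distance to the target,
        # so the whole picture sharpens at once rather than line by line.
        image = [px + (t - px) / (steps - step) for px, t in zip(image, target)]
    return image

random.seed(0)
target = [0.2, 0.8, 0.5, 0.9]               # stand-in for a finished picture
noise = [random.random() for _ in target]    # start from pure noise
print(refine(noise, target))
```

After the final pass the "image" matches the target; a real diffusion model works with millions of pixels and learned guidance rather than a fixed target, but the all-at-once character is the same.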
Dreamlike Processes
If it’s not akin to an artist drawing or painting, it is akin to a person dreaming.
In dreams, things appear all at once.
Despite what some people claim or think, AI is not conscious. But it mimics human consciousness, just as it mimics human art (through, for example, prompts).
Tell an AI program to produce an image of a painting with brushstrokes, and it will do it, though it will not have produced any of the strokes with a brush, or with any physical gesture (the actual "stroke") that could produce such a mark. In a sense, the program dreams the brushstrokes into being. Or, rather, it mimics our dreaming of a painting with such strokes.
Dreams merge things together, as does AI.
If you saw AI’s earliest attempts at art (or image-making), you will have noticed that the end products were nightmarish and surreal. Things were merged together, often in ways that would be disturbing to the average person (though, apparently, not to AI).
Reflecting the modern bias that holds the rational superior to other states of consciousness (such as inspiration and imagination), we typically think of AI as super-rational. But, currently, AI often introduces more errors than humans do. And its improvements are, to a large degree, due to human beings refining the output themselves and, thus, “training” the models.
A Mirror of the Ancient Mind
AI gets things wrong and makes up citations. Social, Ethical and Legal Aspects of Generative AI, published by the academic publisher Springer, came under fire after it was found to contain numerous fake citations, many of them created by AI (so much for the ethical use of AI).
AI also merges things together that don’t go together.
Ask an AI model the same question twice, and you may get a different answer, because an element of “probability” is built into how it generates its output.
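A toy sketch can make this concrete. Language models assign each possible next word a probability and then draw one at random, so the same prompt can yield different continuations. The word list and probabilities below are invented for illustration; no particular model works with exactly these numbers.

```python
import random

def sample_word(weights):
    """Pick the next word at random, weighted by each word's probability."""
    words = list(weights)
    # random.choices draws one item, with likelier words chosen more often,
    # so repeated calls can (and do) return different answers.
    return random.choices(words, weights=list(weights.values()), k=1)[0]

# Hypothetical probabilities for the word following "The artist drew a ..."
probs = {"rose": 0.6, "tulip": 0.3, "orchid": 0.1}
print(sample_word(probs))
print(sample_word(probs))
```

Run it twice and the two printed words may differ; over many draws, "rose" appears most often, which is the "probability" the essay refers to.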
From a certain perspective, AI is more akin to ancient human consciousness: Its output is often more like a dream, and the “probability” aspect of AI models makes them more akin to ancient divinatory tools than to calculators or computers.
I have explored such ways of thinking and relating to the world through my books. But this is not a type of consciousness that most people understand.