An explanation of the extremely primitive state of AI art

Antsstyle
7 min read · Oct 10, 2022

As AI art has been causing no end of trouble for artists lately, I felt the need to write an article that explains just how primitive the technology actually is.

Introduction

First of all, we need to make a very important distinction between two terms: AI and Machine Learning (ML), which are frequently used interchangeably or conflated.

For people who aren’t technical experts, the term “AI” sounds fancy and complex and intelligent. However, there isn’t any real “AI” yet in any field — there is only machine learning thus far in practical terms. What is the difference?

An AI, if created, would be capable of thinking for itself, without external input; it would learn without having to be explicitly trained by a human. Machine learning, on the other hand, is a far more primitive concept: it is a form of adaptive algorithm, and in many ways closely resembles ordinary software.

Understanding machine learning

To explain what machine learning is, let’s think about a real-world example of something you could learn to do: throwing a basketball into a net. Each time you throw the basketball, you observe whether you missed the net, whether you threw the ball with too much force or too little, and adjust your aim and force to try and do better next time.

A machine learning method of doing this would be to have a code function like this:

Example pseudocode.
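(The pseudocode appeared as an image in the original article. A runnable Python sketch of the loop it describes might look like the following; the toy physics model, the function names and the adjustment rule are all assumptions of mine, not the original code.)

```python
import math

def throw_ball(force, angle_degrees):
    # Toy projectile model (an assumption for illustration): horizontal
    # distance the ball travels before landing, ignoring air resistance.
    g = 9.8
    return force ** 2 * math.sin(2 * math.radians(angle_degrees)) / g

def practice_throws(target_distance, force, angle, max_throws=1000):
    # Throw, record how far from the net the ball landed, and let a
    # completely ordinary, human-written rule tweak the force next time.
    for _ in range(max_throws):
        landed_at = throw_ball(force, angle)
        error = landed_at - target_distance  # positive = overshot the net
        if abs(error) < 0.01:
            break
        force -= 0.05 * error  # human-written rule: less force if overshot
    return force
```

Starting from a weak throw at a 45-degree angle with the net five metres away, the loop settles on a force whose throws land on target, without the code "knowing" anything about basketball.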

The premise of this algorithm is simple: it tries to throw the ball in the net, given an initial value by a human to determine how much force it should use, and what angle it should throw the ball at. It then records where the ball went, and how far away from the net it landed. A completely normal algorithm then determines how to adjust the force and angle for the next throw, based on where the ball landed.

This is the premise of basic machine learning: an algorithm that adjusts its own parameters by seeing the results. Even here, however, a human is required to write the adjustForceAndDistance function. The code has no idea what a net is, or a basketball; it only knows what it has been told. The human, by coding the adjustForceAndDistance function, tells the algorithm what its goal is: to try and get closer to landing the ball in the net. That code might look like this:

Example pseudocode.
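(Again, the original was an image. A minimal sketch of what an adjustForceAndDistance function like the one described might look like is below; the sign convention for the error, and the decision to leave the angle alone, are assumptions for illustration.)

```python
def adjustForceAndDistance(force, angle, error):
    # error > 0: the ball overshot the net; error < 0: it fell short.
    # A plain, human-written rule -- no understanding of nets or physics,
    # just "throw softer if it went too far, harder if it fell short".
    step = 0.05
    return force - step * error, angle  # angle left unchanged in this sketch
```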

The algorithm has no understanding of why it needs to do this, or why throwing the ball lower if it went too high will help it reach the net, or why it should want to reach the net at all. It only knows that the human has told it so by instructing it to act in this manner.

Machine learning, in a nutshell, is this same approach done at very large scale. A computer can run this algorithm billions of times, quickly discovering the optimal force and angle required to land the basketball in the net.

In a real-life scenario this wouldn't work, as many factors have been left unaccounted for (friction of the ball against the hands, air resistance, wind, and so on). Indeed, machine learning fails in most real-life scenarios you might think of, because there are simply too many factors to account for, making it implausible for the computer to be accurate.

This doesn’t mean machine learning is useless: it is very helpful in fields where variables are limited or extremely predictable. Medicine and other fields have benefitted greatly from it.

Art, however, is a perfect example of a field where machine learning is completely useless: there is no way for an algorithm to understand it well enough.

What do current Art ‘AIs’ do?

The premise of current art “AI” programs is very similar to the examples above. They are given a huge number of images, which they analyse for prominent features or attributes. For example, an advanced machine learning algorithm could discern that drawings of horses are often accompanied by humans riding them.
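As a concrete (and deliberately simplified) illustration of what "discerning features" can mean, here is a sketch that counts how often tagged features appear together; the images and tags are invented for the example.

```python
from collections import Counter
from itertools import combinations

# Hypothetical tagged training images -- the tags are assumptions
# for illustration, standing in for features extracted from pixels.
images = [
    {"horse", "rider", "field"},
    {"horse", "rider", "sunset"},
    {"horse", "field"},
    {"cat", "sofa"},
]

# Count how often each pair of features appears in the same image.
cooccurrence = Counter()
for tags in images:
    cooccurrence.update(combinations(sorted(tags), 2))

# The algorithm now "knows" that horses and riders go together --
# but only as a raw statistic, with no idea what riding is.
```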

Image recognition, in and of itself, is an extremely complex and difficult field. I have experimented with it myself in the past, attempting to use SIFT algorithms and other methods to detect similarities in pictures posted by artists in order to improve my tweeting tools, but it failed because the error rate was enormous. The reason is simple: images are very complex.

Facial recognition, for example, has stumped ‘AI’ for a long time. It has repeatedly been found to be highly flawed, and biased against ethnic minorities due to disparities in its training data. All this in an image recognition task that ought to be simple, since the systems only had to recognise faces, in images containing nothing but a single face. Imagine the complexity, then, of trying to recognise faces in the middle of scenery, with different weather, different lighting and myriad other variables.

When it comes to ‘AI art’, the machine learning algorithms used are fairly advanced: they are capable of recognising features within images, linking them to other features within those images, and discerning similar traits between different images. However, such an algorithm does not know how to discern correct traits from incorrect ones: it still has to be trained continually by a human. Otherwise, if it encounters a drawing of a person riding a horse where the rider's head is not visible and blurs into an object in the background, the AI might get the idea that horses are generally ridden by headless humans. The human must give the machine learning algorithm feedback so that it knows which traits to discard and which to take note of.
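In a much simplified sketch, the human feedback loop described above amounts to a person vetoing spurious traits; the trait names and counts below are entirely invented for illustration.

```python
# Traits a learner might have extracted, with how often each was seen.
# The second entry stands in for the blurry headless-rider image; all
# names and numbers here are illustrative, not from any real system.
learned_traits = {
    "horses ridden by humans": 412,
    "horses ridden by headless humans": 3,
}

# Human-in-the-loop feedback: flag traits that are artefacts of bad
# source images, and discard them before the model relies on them.
human_vetoes = {"horses ridden by headless humans"}
kept_traits = {t: n for t, n in learned_traits.items()
               if t not in human_vetoes}
```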

There is one giant, fatal flaw with this approach: it cannot create.

Take facial recognition as an example. An AI designed to recognise faces doesn’t need to make them — only recognise them. It does not need to understand how a face works, or why, or in what manner — only to recognise one. Here’s a simple way to understand how much easier a task this is: conjure up, in your mind, an image of a friend or someone you know. You know what they look like — if you see them, you will recognise them easily. Does that mean you would be able to draw them? Absolutely not, unless you happen to be a highly skilled artist.

Art “AIs” only copy attributes they have seen in source images. They do not understand why those attributes are there or how they function. This is an extremely important point.

The downfall of art AIs: source images

Suppose you want to draw a human standing in the middle of an empty plain. This seems like a simple concept, except it is actually wildly complex.

In order to do this accurately and draw the human at any angle or in any standing pose, you must understand human anatomy, how muscles react when stretched, how ligaments are connected, how clothing folds and stretches, how skin interacts with movement, the maximum rotation of specific joints… the list goes on.

In addition, you must understand how rain or sunshine will affect the lighting, the physics of water making surfaces shiny by altering the path of light, the way the surface of whatever the plain is made of becomes wet; you could list the necessary factors for hours.

An AI analysing source images does not understand any of this. It can only learn what other artists have done in limited settings — and if the artist did it wrong, the AI will do it wrong too. If an artist drew a woman with large spherical breasts because they didn’t understand gravity or anatomy, the AI is going to do the same, because it doesn’t understand gravity or anatomy either.

This raises the question: why do art AIs use source images, if it is a fatally flawed approach?

It’s not possible to actually teach an AI to draw

The answer is simple, and I have been alluding to it above: in order to teach an AI to draw, or to make it genuinely intelligent in any capacity, it would have to understand the fundamentals of whatever it is doing.

Drawing humans requires understanding anatomy, physics, and many other topics. Drawing mechanical objects requires understanding the physics of mechanical locomotion, understanding the internal design of mechanical objects, and much more. Drawing landscapes requires knowledge of physics, weather, environmental factors like moisture in soil, specifics of plants and trees, and a whole bunch of other things.

This data is not something you can give to an AI. Attempting to teach an AI the laws of physics would be an extremely daunting task, to say the least — and even then, you would have to teach it human anatomy, the mechanics of which frequently confound even master artists and medical professionals.

You would also have to teach an AI context, which is a massively problematic and complex area even for humans, never mind AIs. I wrote more on that subject here.

Conclusion

If art AIs want to be known as anything more than automatic plagiarism tools, they must be taught to actually understand how to create art, not merely to copy attributes that exist in the works of human artists.

In other fields, this doesn’t matter. Nobody cares if a machine learning algorithm, with the help of existing human work, finds a new drug; in these situations, it’s just helping the human to work faster. In art, the AI does nothing but rip off the work of others — it contributes nothing and creates nothing of value or original merit.
