Discussing AI Artwork Copyright Problems and the Cultural Value of AI Artwork

By Helium Lee

What happened?

In late 2022, a great number of AI models were released to the public. And, surprisingly, most of them had something to do with ARTWORK, which used to be considered a "cold spot" of AI development, because people usually believed that AI was weak at CREATING things. The invention of Stable Diffusion completely ended that belief.

As better models were developed and used, problems also came up. The first one, and the most serious one, is COPYRIGHT.

Near the end of 2022, everything seemed fine, and everyone was preparing for the coming 2023. Just at that point, however, a protest quietly took place on one of the world's largest artwork-sharing platforms: ArtStation. Hundreds, even thousands, of famous artists showed their anger directly through their artworks, by posting images protesting the AI with the message "NO AI". But unlike what people assumed when Stable Diffusion and similar technologies were released, those artists were not protesting the loss of their jobs; they were not afraid of AI-created works at all. They were protesting for their copyright. They did not see AI as a replacement for themselves, but they were truly angry with it, because it used their artworks without any legal licensing program.

So here comes the question. AI, so far, really cannot create anything directly from the void. It needs a set of data (a dataset) to "train" on before it can create "new" works. And the whole argument comes down to how NEW those AI-created works really are. Are they just a combination of hundreds of artists' artworks? Maybe. Even if the technology's owners deny it, people can still see watermarks in generated images when certain unpopular prompts are used.

If an AI's work is not completely "NEW", then it should be considered a copy of hundreds of those "original" works and can be judged an illegal copy, because its creators paid nothing to the original creators. However, not only AIs but also human beings rely on learning from others to develop their own work. "People can't create anything out of the void; AI can't either," some AI supporters may say. So people are stuck arguing over whether Machine Learning (ML) is a kind of learning or not.

Governments around the world, which will ultimately decide how AI copyright conflicts are resolved, have not explained anything to the public. In fact, in most countries there have been no copyright cases between machines and human beings so far. So neither governments nor the courts have to, or want to, give any further explanation of the copyright problems.

As we said, with no cases and no explanations from official agencies, the world of copyright will keep its silence for some time in 2023. But behind the silence lies a coming danger. We know what happened in 2022, in the days before 2023. But in the days after, even just this year, is something waiting to surprise us? Will the silence be the end of the AI copyright conflicts, or will the danger behind the silence be?

Just wait and see.


The largest problem in the discussion of AI copyright conflicts is: can we consider AI's training process a kind of learning?

To figure that out, we need to learn how Machine Learning, the most widely used AI technology in our daily life, actually works. Imagine a group of handwritten numbers: what happens in your brain when you try to recognize each of them? Through millions of experiments and tests, scientists tend to believe that your brain uses a "Neural Network" to process the input, decompose the image, and finally map the handwritten number to one result (the output). And the neural network your brain uses to decide which number it is was formed by being trained with data. You cannot even notice that training process, simply because you have already forgotten how you learned each number's meaning; that is work done in everyone's early years.

To put it simply, you can imagine that your brain forms many messy neural networks before your birth. As each of them is trained to handle specific tasks, the "weight" of each "neuron" changes; what's more, some neural connections are disabled and then destroyed. Maybe you have forgotten what your parents did while you were learning numbers: they showed you many pictures and said, "Look, this is one, and that is two." What they were doing was sending pairs of information and labels to your knowledge-hungry brain. After training and pruning the networks, the network that recognizes numbers, to take one example, is finally formed.

Nowadays, scientists think (note: this is only a hypothesis) that what your brain can do actually depends on which types of specialized neural networks you have, and how strong they are. Our brains and their networks are powerful; together, they have created most of the miracles around the world. Moreover, the network-based approach is suitable not only for biological computing (brains) but also for electronic computing (computers). By designing programs in the style of neural networks, that is, by connecting different computing operations into a fixed network and tagging every neuron with a specific weight, computer scientists have succeeded in creating many kinds of computer neural networks. Later, programs with artificial intelligence were released that can act more wisely, thanks to the help of those dear networks.
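The idea of "neurons tagged with weights, connected into a fixed network" can be sketched in a few lines of Python. This is a toy illustration only: the weights below are made-up numbers, not values from any real trained model, and a real digit recognizer would have thousands of neurons rather than three.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed
    through a sigmoid 'activation' into a value between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    """A fixed two-layer network: two hidden neurons feed one output
    neuron. All weights here are invented for illustration."""
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], -0.2)
    return neuron([h1, h2], [1.2, -0.7], 0.0)

print(tiny_network([1.0, 0.0]))  # a value between 0 and 1
```

Training such a network means nudging those weight numbers until the output matches the labels; the structure of the network itself stays fixed.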

The content above is only a brief introduction to ML itself. In the real process of building AI and neural networks, there is more to learn, such as how to train a network and adjust its weights, and how to adjust the network structure to reduce the computing resources needed.
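The "adjusting weights" step mentioned above can be sketched with the simplest possible case: a single weight fitted by gradient descent. The data and learning rate below are invented for illustration; the true relationship in the toy data is y = 2x, so the weight should settle near 2.

```python
# Toy dataset of (input, label) pairs following y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0            # start with an untrained weight
learning_rate = 0.05

for step in range(200):          # each pass nudges w toward less error
    for x, y in data:
        prediction = w * x
        error = prediction - y
        # Gradient of the squared error (error**2) with respect to w:
        gradient = 2 * error * x
        w -= learning_rate * gradient   # move w against the gradient

print(round(w, 3))  # close to 2.0
```

Real training works the same way in spirit, but repeats this for millions of weights over millions of pictures, which is exactly why it consumes so much data and electricity.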

But isn't that enough? The real question behind the argument about learning is the strength of the neural network itself, the so-called algorithm. Both machines and humans can recognize handwritten numbers now. But if you take a close look at the actual training process, you will find the difference. For humans, learning to recognize a number means seeing "some" pictures and their related "data labels" (parents telling their children which number it is). For machines, it means learning from a "huge amount of" pictures to increase the success rate in recognizing numbers. And obviously, training a model to recognize pictures takes a lot of power from the outside world, mostly through electricity, while humans only need three meals a day to satisfy all their daily energy requirements. Seen this way, people will easily find that the human network has such a powerful algorithm that it requires only a little data and still reaches an unbelievably high success rate.

Computers require more energy and more data for training than humans do. And this gap in algorithmic efficiency between humans and machines is so large that it raises another question: do human brains actually work in the same way computers do? If they do, what makes them so powerful? A super-optimized algorithm provided by nature as a present to humanity?

That question leads to doubt about the neural network theory itself. NN theory claims that human brains rely on networks to work, but that claim is not verified at all, because nobody yet knows how brains actually work. Brains probably do not rely on neural networks alone.

If brains do not rely on neural networks alone, then all the questions are solved. That would mean the brain is a gift of biology, not computer science, and should not be considered an object that can be replaced by modern or even future computers. Brains and computers are not equal; they are different things that work in similar but different ways, so Human Learning and Machine Learning are different things as well.

If ML's products are not produced in the same way as Human Learning's, then they should not be allowed to be shared, let alone used for commercial purposes, because they are actually a kind of combination of hundreds of artworks that already exist.


AI artworks are combinations of human artworks? Sounds great! So should we take action right away to stop all AI artworks from spreading through the internet, and revoke every AI company's license? The answer is no, not at all, and that is also impossible to do. AI artworks "can" also be of great value.

When we talk about the value of an artwork, what are we talking about? The price? Maybe. The material? Possibly. But when we get serious, there is only one acceptable answer: what makes an artwork art is its thoughts, the creator's thoughts. In the Mona Lisa, the most valuable thing is the freedom, the liberty of the woman, and her happiness, not the paint and the tools Da Vinci used. The same goes for AI.

Stable Diffusion has a derivative called "Novel AI" that creates Japanese anime-styled pictures. To be honest, thanks to the distinctive style of Japanese anime, this derivative performs far better than the original Stable Diffusion tool. Looking at a picture it generates, what do you think? Can you feel anything of the creator's thoughts? Maybe you will say NO, and you are right, because it is generated, not created, and nobody, nobody, filled the work with any kind of thought or specific value. However, you could also say YES to the question. Because the thoughts in a work are not always under the control of its original creators. People may form their own interpretations.

Yes, AI artworks can be valuable. But the value is given by people, not by machines at all. AI artworks can show their value by helping early-stage writers bring their stories to life: the human writes the script and chooses the viewpoint to express in the work, while the machine handles the heavy, repetitive work like drawing and coloring.


Now it comes to the money. Artists are fighting for their copyright licensing fees, and their fight is reasonable. Yes, AI companies need to do something to help creators look after their copyrights. Novel AI scraped thousands of pictures from Pixiv; it did not even send a notice to the creators whose artworks were chosen for training, and it spent almost nothing on licensing the dataset.

There is no doubt that the companies' actions are evil. But this is capitalism, and it is evil through and through. Artists should keep their focus on those companies, not on AI itself. AI is a tool, just like brushes and paper, not an "evil copyright killer".

But some groups of people who support AI are not evil at all; they are of great kindness. They are the hackers, the hackers who leaked Novel AI's models and pushed the copyright problem into the public's sight. Without them, people would know nothing about the growth of AI. They ultimately made AI better, and they gave everyone a chance to try it.

So AI artworks should be licensed differently for different purposes. For personal use, or to support early-stage writers and similar creators, they should be free to use. But when big companies use them to make money, or when they are sold to others, the users should pay a license fee, and that fee should go to both the original creators and the company that trained the model.

Maybe in the future there will be jobs drawing pictures just to train AI. But for now, let's sit back, relax, and see what happens.

Helium Lee, January 12th, 2023

©️ 2017-2023 Helium Lee, running on HeliNet™ 4Charges(PPNN) "LocalHost" Server.