The recent boom in the use of generative AI in the writing and visual fields presents a host of opportunities and an equally significant set of challenges. ChatGPT, Bard, Dall-E, Sora, and a long list of others have burst onto the scene at an alarming rate. They are powerful tools, but deeply flawed, and pose significant risks to users and artistic communities. They can be used to generate large amounts of text, images, video, and audio that, on the surface, appear akin to skilled human creations.

And they are only getting better. Their capacity for good must be weighed against their capacity for harm, as these models facilitate the spread of deepfakes and misinformation, amongst other ills. But if these tools are to be accepted by both the artistic community and society more broadly, at least three things must happen.
AI must be used to augment rather than replace human labour
A fundamental concern with the introduction of generative AI tools is the mass loss of jobs that might result. Technological change is, seemingly, inevitable, and throughout human history tasks that required human (and animal) labour have been replaced by increasingly sophisticated machinery. This has had the effect of eliminating or deskilling some jobs while increasing productivity by augmenting skilled labour.
To ease any transition into new ways of working, change must be slow. Augmentation must be prioritised over replacement, and, if replacement is inevitable (there are reports that large amounts of paralegal work could be outsourced to AI, for example), this must happen slowly, and appropriate safety nets and meaningful retraining must be in place. If productivity gains are used to reduce the inputs to production, non-human inputs must be limited before human labour is limited.
Importantly—and this is a principle that should apply to any form of automation—the gains must go to the workers and labourers, rather than the capitalists. For far too long, productivity gains have made the wealthiest in our societies even wealthier, at the expense of the poorest and middle classes. For people to feel like they have a meaningful stake in society, and for them to accept the expected level of change, they have to stand to benefit.
Working with AI must be regarded as a specialism
As well as economic considerations, a cultural shift must take place. Where artists use AI in their artwork, this must still be seen as an exercise of their talent. It takes genuine skill to use AI tools to proper effect: prompts must be engineered with precision, and outputs must be refined through a painstaking, iterative process. Often, outputs require manual editing or remastering. People will have to come to accept this as a skill in its own right, rather than see it as a form of cheating.
At the same time, knowing what goes into generating an image or a piece of writing in this way, we must also adapt our expectations. We should be increasingly critical of the outputs, holding artists and writers to higher standards. The spread of word processing software with in-built error detection has made us less accepting of spelling and grammatical errors—the same must be true of these new forms of AI. We must also be determined in our questioning: where is the artistry? What effort has gone into this? A fundamental essence of art is the sweat of the labour, the passion; we need to see how the artist has used their knowledge, skill, and experience to modify the output and ensure that it is fit for purpose—that it is fit for the story that they intend to tell.
People whose work or data is used in training models should be compensated
This is one of the most important barriers to acceptance. Given that many of the models available have been trained on artwork and creative works without the artists’ consent, it is regarded as immoral to then use said models to create artworks that replace the efforts of those very artists. The foundation of the image-based generative AI models is the artwork of innumerable creatives that is accessible on the internet; of the text-based models, thousands of books that were scraped from eBook hosting sites. These works were offered for free, but, in keeping with a fundamental principle of open access works, they were offered gratis, not libre. They could be accessed without cost, but that did not mean they could be used for any purpose. The people who created these works have been wronged.
There is no easy remedy to this. But as a start, big tech firms need to licence people’s artwork and creative outputs, and uses must be made explicit in data collection. The artists should be free to name their price, or to have their work retracted from the model. Though the outcomes of the numerous court cases are far away, every thinking person who has looked upon this topic with a sympathetic eye should see that some form of theft has occurred. If it is too late to licence each and every work or piece of data that has gone into training the models—and if keeping the models rather than eradicating them is deemed the lesser of two evils—then the public deserves a share of the profits or some form of public ownership. Such profits should be earmarked for creative pursuits and education.
These firms have taken from the creative space without consent to create tools that directly harm the people they stole from. If these tools are indeed here to stay, they must be prepared to give back to the community that they have wronged.