AI art has become a hot topic for many reasons, not all of them good. Essentially algorithmic art generators, these tools are 'trained' on existing artwork in order to produce new images in response to a prompt. However, the use of other artists' work without permission, and worries that these tools will replace artists in the creative industry, have led many artists to decry them.
In the game development space, however, AI has been in use for much longer. AI upscaling has been used to sharpen low-resolution textures, or to turn small samples into large-scale assets for use throughout a game. It's an invaluable tool, and in many ways it sidesteps the usual problems with AI. But as the tools develop further, it's no longer just textures or upscaling: entire assets are now being proposed for creation with AI.
In a recent article, Game World Observer covered the example of developer Emmanuel De Maistre, who demonstrated how Stable Diffusion, an AI art tool, could be used to create 'Red Alert-style assets'. Red Alert, of course, is the famous Command & Conquer spin-off created by Westwood Studios. After training the AI on buildings from the original game, he showed how to create a variety of isometric buildings in that style, suitable for use as game assets.
Big problems, big solutions
There are, of course, concerns. AI takes what is a very human task and automates it almost completely, removing the 'personal touch' from what is, in fact, an artistic process. Game creation blurs the line between business and art, and that line is often difficult to parse, especially for mobile games, where margins are slim and developers need to think carefully about monetisation. The potential to streamline part of the development process is therefore especially tempting.
Aside from these ethical and philosophical concerns, AI also has the potential to run into legal trouble in future. Using assets without permission is dangerous, and although AI only 'interprets' the assets it is 'trained' on, it still presents a novel legal problem, one that will surely be tested by someone with an interest in settling it. AI art also risks producing 'lookalike' assets that are incredibly similar to others, reducing a game's distinct visual identity compared to competitors who may use the same AI, or train theirs on the same assets.
You might compare it to automating coding. If you simply copy and paste from other examples, or use very similar systems, it reduces the authenticity and, more importantly, the uniqueness of your game. AI art is already recognisable as such, so would AI assets be too? While it may make for quicker asset creation, there's also the risk that players would notice and either dislike it or simply see it as a marker of poor quality, no matter the potential of the rest of the game.
What De Maistre produced using the AI is impressive, yes, but as shown it requires multiple passes to create something consistent. Fine-tuning also takes time, perhaps better spent coordinating with the other elements of production. And despite the limited training, there is a definite thread of the Red Alert style throughout: recognisable, but perhaps too much so.
AI is, of course, not simply a tool for evil. As mentioned before, it has been used in the games industry to cover work that is usually tedious and time-consuming, freeing up artists and developers for more important tasks. But using it as a principal tool risks inconsistent assets, and work that is 'close' to what you want yet not quite a fit either.
Ultimately, it's entirely feasible to use AI in asset creation. The question is now whether developers should use it, not whether they can. Ubisoft and Riot Games are two companies already looking to use AI, talking about training a model to preemptively combat online harms.