Smart AI use: A guide for game studios

This article was written by Futura Digital co-founder Alexandra Kurdyumova and associate Valeria Gziryan.
Using AI in game development can make work faster, boost creativity, and create unique experiences for players. But it also brings challenges, like legal issues, ethical questions, and quality control.
By following the outlined measures – such as ensuring ownership of AI-generated content, documenting the creation process, maintaining transparency with both teams and players, etc. – studios can use AI in a responsible way.
It’s important to remember that AI should help people, not replace them. Studios can start by testing AI in small projects, training their teams, and keeping up with new laws. By taking a careful and balanced approach, game developers can use AI to innovate while keeping quality high and staying ethical.
-
Pre-production phase
Check if you own the rights to AI-generated content under the AI tool’s Terms of Use.
Most popular generative AI tools state in their Terms of Use that the output of a generation belongs to the user.
However, there are always exceptions or special conditions. For example, Midjourney's rules state that all IP rights belong to users, but “if you upscale the images of others, these images remain owned by the original creators”. The line between an original and a derivative work is very thin, isn't it?
Verify the data sources used to train the AI (if possible) to ensure they are legal and ethical.
One key case that highlights the importance of checking rights to AI-generated content is the lawsuit against Microsoft, GitHub, and OpenAI over their use of code in Copilot.
In this case, developers filed a class-action lawsuit, claiming that Copilot generates code that may violate copyright because it was trained on public GitHub repositories without the explicit consent of the rights holders.
While the tech giants are more likely to win this case than lose it, the case shows that even if content is created using AI, the owners of the original data (such as code, images, or text) can challenge its use if their rights were not respected.
Test AI-generated content for stereotypes or harmful content to avoid issues.
For instance, Stable Diffusion is an AI tool that creates images based on text prompts. While the images it generates might look realistic at first glance, they often distort reality. A study by Bloomberg analysed over 5,000 images made with Stable Diffusion and found that the tool exaggerates racial and gender biases, making them even worse than what exists in the real world.

Train your team on how to use AI tools effectively and responsibly. The workshops can focus on the following aspects:
- Technical skills – how to use AI tools like Unity Muse, Unreal Engine MetaHuman Animator, Midjourney for art generation, and ChatGPT for dialogue writing.
- Ethical principles – preventing bias, ensuring responsibility for AI-generated content, and, of course, discussing copyright issues.
Always remember the three key requirements for copyright protection:
- Independent creation (the work must not be copied, and "even though it closely resembles other works so long as the similarity is fortuitous, not the result of copying").
- Creativity (a small creative effort is enough, i.e., novelty is not required).
- Human involvement (the work must be created by a person, not fully AI-generated).
- Process optimisation – how to integrate AI into workflows to speed up development without compromising quality.
As you can see, it is crucial that programmers, designers, and lawyers work on this together.
In addition, implement a clear and comprehensive internal policy that outlines dos and don'ts for employees regarding the use of AI tools. This policy should include:
- Prohibition on uploading unique code or sensitive company materials to AI platforms to prevent data leaks or IP risks.
- Guidelines on using AI-generated code or content, emphasising the need for review and modification before use.
- Restrictions on enabling AI training modes with company data to avoid unintentional sharing of proprietary information.
-
Production phase
Write detailed prompts when using AI to get the best possible results.
You can guide AI in the right direction, avoiding unwanted or random results. For example, this approach helps you prevent issues like parallel creativity (where AI generates content too similar to existing works).
Keep track of the creation process – save drafts, note dates, and document who worked on what using a digital system.
This is relevant both when creating IP with AI and without it. In the event of a dispute, this approach makes it possible to prove that the IP was created by specific team members independently of anyone else and that their work is original in character.
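A lightweight way to keep such records is to hash and timestamp every draft as it is saved. The sketch below is a minimal, hypothetical example in Python; the `record_draft` helper and the `creation_log.json` file name are assumptions for illustration, not part of any real studio pipeline:

```python
import datetime
import hashlib
import json
from pathlib import Path

# Hypothetical location for the provenance log (one JSON array of entries).
LOG_FILE = Path("creation_log.json")

def record_draft(path, author, note=""):
    """Append a timestamped, hash-stamped entry for one draft file."""
    data = Path(path).read_bytes()
    entry = {
        "file": str(path),
        # A content hash proves exactly which version existed at this moment.
        "sha256": hashlib.sha256(data).hexdigest(),
        "author": author,
        "note": note,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log = json.loads(LOG_FILE.read_text()) if LOG_FILE.exists() else []
    log.append(entry)
    LOG_FILE.write_text(json.dumps(log, indent=2))
    return entry
```

Because each entry pairs a content hash with an author and a UTC timestamp, the log can later support the argument that specific people produced specific versions at specific times.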
Moreover, the following case is significant. In 2025, an image created using AI obtained copyright protection. The US Copyright Office initially rejected the claim but later granted it after the company argued that human creativity was involved in guiding the AI process.

Additional evidence was provided, such as a timelapse video showing how the image was made and explaining how a person was involved in the process.
The US Copyright Office then approved the copyright, stating that the image had enough human creativity in choosing, arranging, and organising the AI-generated content to qualify for copyright protection. This case also highlights the importance of carefully recording all stages of work on content.
Avoid sharing confidential information with AI tools to prevent it from being used for training.
Both ChatGPT and DeepSeek have faced incidents involving leaks of sensitive information, including user personal data and commercially critical details. It’s essential to learn from their challenges and ensure that prompts exclude any information whose disclosure could harm the gaming company or its audience.
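One practical safeguard is to run every prompt through a simple redaction step before it leaves the studio. The sketch below is a hypothetical example: the patterns (a generic email matcher, a token-like string, and an invented internal codename, "Project Nightfall") are illustrative assumptions, and a real policy would use the studio's own list of identifiers:

```python
import re

# Illustrative patterns only; extend with your own codenames and identifiers.
SENSITIVE_PATTERNS = [
    (re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"), "[EMAIL]"),
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b"), "[API_KEY]"),
    # Hypothetical internal codename, used here purely as an example.
    (re.compile(r"\bProject\s+Nightfall\b", re.IGNORECASE), "[CODENAME]"),
]

def sanitise_prompt(prompt: str) -> str:
    """Replace sensitive substrings before the prompt is sent to an AI tool."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

A filter like this will never catch everything, so it complements, rather than replaces, the internal policy and employee training described above.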
-
Post-Production Phase
Always tweak the AI’s output, even if it’s just a little. The more you customise, the better.
Players noticed that in the new game High on Life, created by Justin Roiland (the maker of Rick and Morty), some wall decorations, like fake movie posters, were made using AI (Midjourney). If the developers had made even small changes to these images, it wouldn’t have been so obvious that AI was used. This would also reduce the risk of similar posters appearing in other games later.
Check the AI-generated content for copyright issues. Try to find similar existing works to avoid legal problems.
For example, it is possible to use tools to search for similar images (e.g., TinEye, Google Reverse Image Search) or texts (e.g., Copyscape).
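For generated text, even a rough similarity score can help decide what to escalate for manual legal review. The sketch below uses only Python's standard-library `difflib`; the 0.8 threshold is an arbitrary assumption, and such a score is no substitute for dedicated tools like Copyscape or a lawyer's judgment:

```python
import difflib

def similarity(text_a: str, text_b: str) -> float:
    """Rough similarity ratio between two texts (1.0 means identical)."""
    return difflib.SequenceMatcher(None, text_a, text_b).ratio()

def flag_for_review(generated: str, existing: str, threshold: float = 0.8) -> bool:
    """Flag generated text for manual legal review if it closely tracks an existing work."""
    return similarity(generated, existing) >= threshold
```

In practice a studio would run each generated passage against a corpus of known works and route anything above the threshold to a human reviewer.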
Credit the use of AI in your game to be transparent.
In 2024, Valve introduced new rules for publishing AI-generated game content on Steam, officially allowing games created with AI under certain conditions.
Developers must now provide details in a new AI section of the content survey, explaining how AI is used in both development and gameplay. Platforms such as Steam clearly expect transparency from developers about how AI is used.
Don’t replace human work with AI without permission. Always get consent from artists, writers, or other contributors.
At the end of 2024, most voice actors for the Zombies crew in Black Ops 6 withdrew from the project due to insufficient AI protections in their contracts. They feared their voices could be recreated using AI without royalties being paid.
-
Long-Term Planning
Monitor and collect feedback from your team and your players regarding AI usage.
A good example of this is when Ubisoft introduced their tool, Ghostwriter. Officially, it was created to help with background dialogue, NPC barks (like generic lines triggered by players), and UI descriptions.

At first glance, it seemed like a helpful idea. However, some people thought it was a sign of laziness and predicted that the quality of dialogue would get worse. Writers also saw it as a threat to their jobs.
Test AI in smaller projects first, not in big triple-A games, to see how it works.
But you need to act carefully. Two years ago, Microsoft faced criticism from gamers for using AI-generated artwork to promote indie games on its ID@Xbox Twitter account. The tweet, later deleted, featured winter-themed art with clear AI flaws like distorted faces. Gamers argued this showed a lack of support for indie developers and artists.
Protect player data when using AI to analyse behaviour or personalise experiences.
AI has transformed gaming, enhancing how players interact with games. For example, EA Sports uses AI in its FIFA series to customise gameplay. The AI adapts to player behaviour, making each match more engaging and challenging.
This is impressive, but it is essential to remember that behaviour tracking can qualify as processing of personal data. Therefore, it is important to ensure that such processing and protection comply with the requirements of applicable privacy legislation.
Stay updated on laws about AI and be ready to adapt to new regulations.
In just a couple of years, laws regulating AI have been passed in the EU, South Korea, China, California, and Illinois. The global regulatory landscape for AI is evolving rapidly, and the games industry must adapt to this pace.
It’s crucial to have an experienced team that can quickly navigate these changes, identify what truly matters, and optimize processes. The Futura team is always ready to provide comprehensive support.