Invoke AI rolls out refined control features for image generation


Invoke AI released two new features for its AI-based image creation platform. According to the company, these new features — the Model Trainer and Control Layers — offer some of the most refined controls in image generation, giving users granular control over how the AI creates and modifies their images. Invoke also announced that it has achieved SOC 2 compliance, meaning the company has passed a series of audits attesting to a high level of data security.

Invoke CEO Kent Keirsey spoke with GamesBeat about the platform’s new features and how they offer more control and refinement over an image. The bespoke Model Trainer allows a company to train custom image generation models with as few as twelve pieces of its own content. According to Keirsey, this produces more consistent images that are in line with a developer’s IP, meaning the AI can more reliably create art with the same style and design features.

“We’re helping the models understand what we mean when we use certain language,” said Keirsey. “When we get specific and say we want this specific interpretation, what that means is we need anywhere from 10-20 images of this idea, this style we want to train… We’re saying, ‘Here’s our studio’s style with different subjects.’ You might do that for a general art style. You might do it for a certain intellectual property.”

According to Invoke, one of its goals is to offer increased security, hence the SOC 2 compliance. Greater security reduces the risk that a developer’s images could be used to help create another studio’s intellectual property.


How to train your AI

The second feature that Keirsey demonstrated is Control Layers, which allows users to partition specific areas of an image and assign prompts to those areas. For example, a user can paint the upper corner of an image with the layer tool and give the AI a prompt to put a celestial body in that corner of the image specifically. It allows creators to adjust the composition of their image and control individual parts without changing the overall image.

Invoke AI’s Control Layers feature allows users refined control over their generated images.

The prompts attached to each layer can be refined and regenerated just like any other AI image; however, the effects are localized to the specified part of the image. Control Layers also lets users upload images to specific layers, and the creator can choose what, specifically, they wish the AI to keep from the image — style, composition, color and so on.

On the subject of how Invoke’s new tools can be integrated into game development workflows, Keirsey said that most developers are conservative about the use of AI, in part for copyright reasons. “The human concept has to be there — a human sketch, a human initial idea. That will go to the point where you draw the line saying, ‘None of this is gonna go in the game yet. Until we can prove that we can get copyright, we’re not willing to risk it.’ The moment that you can get copyright, you’ll start to see that make its way into games… That’s why Invoke is trying to answer that for organizations, demonstrating human expression, giving them more ways to exhibit that, so that we can demonstrate copyright and accelerate that process.”
