Amazon can generate images of hypothetical products based on what you search

March 2nd
Two approaches to text-to-image generation. The top is the traditional method, which is unable to maintain the color of the product across queries. The bottom is Amazon's new ReStGAN model. (Amazon)

Amazon researchers have published an AI model that can generate hypothetical product images for shoppers as they search.

"The idea is that a shopper could use a visual guide," researchers Arijit Biswas and Shiv Surya write, "... until it reliably retrieved the product for which she or he was looking."

More from their blog post:

So, for instance, a shopper could search on “women’s black pants”, then add the word “petite”, then the word “capri”, and with each new word, the images on-screen would adjust accordingly. The ability to retain old visual features while adding new ones is one of the novelties of our system. The other is a color model that yields images whose colors better match the textual inputs.

The researchers call their model ReStGAN, short for Recurrent Stack Generative Adversarial Network. Biswas and Surya claim that ReStGAN outperforms similar existing models because it's built on a recurrent network architecture, which allows a neural network to carry sequential information across processing steps, a feature that's roughly analogous to memory.
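The recurrent idea can be sketched in a few lines. The code below is a toy illustration of how a hidden state carries information from earlier query words so each refinement builds on the last; the names, weights, and update rule are all hypothetical and are not drawn from Amazon's actual ReStGAN implementation, which additionally conditions an image generator on each state.

```python
import numpy as np

# Toy sketch of the recurrent conditioning idea (hypothetical, not
# Amazon's code): a hidden state accumulates each new query word, so
# "women's black pants" + "petite" + "capri" all shape the final state.

rng = np.random.default_rng(0)
QUERY = ["women's", "black", "pants", "petite", "capri"]
EMBED = {w: rng.standard_normal(8) for w in QUERY}  # toy word embeddings

W_h = rng.standard_normal((8, 8)) * 0.1  # state-to-state weights
W_x = rng.standard_normal((8, 8)) * 0.1  # input-to-state weights

def step(state, word):
    """One recurrent update: fold a new query word into the running state."""
    return np.tanh(state @ W_h + EMBED[word] @ W_x)

state = np.zeros(8)
states = []
for word in QUERY:
    state = step(state, word)
    states.append(state.copy())
    # In the real system, each intermediate state would condition the
    # image generator, so the on-screen images adjust with every word.

print(states[-1].shape)  # (8,)
```

Because each state is a function of all previous states, the representation after "capri" still reflects earlier constraints like "black", which is the property the researchers describe as retaining old visual features while adding new ones.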

While the model is unreleased and appears limited to clothing searches, it could be a significant improvement over other generative image tools, whose outputs often look more like surrealist art than convincing product photos.