OpenAI’s new “multimodal” DALL·E and CLIP models combine text and images, and also mark the first time that the lab has presented two separate big pieces of work in conjunction.
In a short blog post, which I’ll quote almost in full throughout this story because it also neatly introduces both networks, OpenAI’s chief scientist
Ilya Sutskever explains why:
A long-term objective of artificial intelligence is to build “multimodal” neural networks—AI systems that learn about concepts in several modalities, primarily the textual and visual domains, in order to better understand the world. In our latest research announcements, we present two neural networks that bring us closer to this goal.
These two neural networks are DALL·E and CLIP. We’ll take a look at them one by one, starting with DALL·E.
The name DALL·E is a nod to Salvador Dalí, the surrealist artist known for
that painting of melting clocks, and to WALL·E, the Pixar science-fiction romance about a waste-cleaning robot. It’s a bit silly to name an energy-hungry image generation AI after a movie in which lazy humans have fled a polluted Earth to float around in space and do nothing but consume content and food, but given how well the portmanteau works and how cute the WALL·E robots are, I probably would’ve done the same. Anyway, beyond what’s in a name, here’s Sutskever’s introduction of what DALL·E actually does:
The first neural network, DALL·E, can successfully turn text into an appropriate image for a wide range of concepts expressible in natural language. DALL·E uses the same approach used for GPT-3, in this case applied to text–image pairs represented as sequences of “tokens” from a certain alphabet.
DALL·E builds on two previous OpenAI models, combining
GPT-3’s capability to perform different language tasks without finetuning and
Image GPT’s capability to generate coherent image completions and samples. As input it takes a single stream — first text tokens for the prompt sentence, then image tokens for the image — of up to 1280 tokens, and learns to predict the next token given the previous ones. Text tokens take the form of
byte-pair encodings of the prompt text, and image tokens are patches from a 32 x 32 grid in the form of latent codes found using a variational autoencoder
similar to VQ-VAE (there’s a rough code sketch of this single-stream setup right after the capability list below). This relatively simple architecture, combined with a large,
carefully designed dataset, gives DALL·E the following laundry list of capabilities, each of which has interactive examples in
OpenAI’s blog post:
- Controlling attributes
- Drawing multiple objects
- Visualizing perspective and three-dimensionality
- Visualizing internal and external structure (like asking for a macro or x-ray view!)
- Inferring contextual details
- Combining unrelated concepts
- Zero-shot visual reasoning
- Geographic and temporal knowledge
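To make the single-stream idea a bit more concrete, here’s a rough PyTorch sketch of how a caption and an image can be flattened into one token sequence and modeled autoregressively. It’s a toy illustration, not OpenAI’s code: the 1280-token budget and the 32 x 32 latent grid come from the blog post, while the vocabulary sizes, the 256-token text budget, and the tiny transformer are made-up stand-ins.

```python
# Toy sketch of DALL·E's single-stream setup -- not OpenAI's implementation.
# Text tokens (BPE ids) and image tokens (discrete-VAE codebook ids) share one
# flat vocabulary, get concatenated into a single sequence of up to 1280
# tokens, and a decoder-only transformer learns to predict each next token.
import torch
import torch.nn as nn
import torch.nn.functional as F

TEXT_VOCAB = 16384       # assumed BPE vocabulary size
IMAGE_VOCAB = 8192       # assumed discrete-VAE codebook size
MAX_TEXT_TOKENS = 256    # assumed text budget, so that 256 + 1024 = 1280
IMAGE_TOKENS = 32 * 32   # one latent code per cell of the 32 x 32 grid
MAX_SEQ_LEN = MAX_TEXT_TOKENS + IMAGE_TOKENS  # 1280 tokens, as in the post


class TinyDallELike(nn.Module):
    """A miniature decoder-only transformer over the joint text+image alphabet."""

    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        vocab = TEXT_VOCAB + IMAGE_VOCAB          # one alphabet for both modalities
        self.tok_emb = nn.Embedding(vocab, d_model)
        self.pos_emb = nn.Embedding(MAX_SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        seq_len = tokens.size(1)
        pos = torch.arange(seq_len, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: each position may only attend to earlier positions.
        mask = torch.triu(
            torch.full((seq_len, seq_len), float("-inf"), device=tokens.device),
            diagonal=1,
        )
        return self.head(self.blocks(x, mask=mask))


# Random ids standing in for one (caption, image) training pair.
text_ids = torch.randint(0, TEXT_VOCAB, (1, MAX_TEXT_TOKENS))
image_ids = torch.randint(0, IMAGE_VOCAB, (1, IMAGE_TOKENS)) + TEXT_VOCAB
stream = torch.cat([text_ids, image_ids], dim=1)  # the single 1280-token stream

model = TinyDallELike()
logits = model(stream[:, :-1])                    # predict token t+1 from tokens <= t
loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), stream[:, 1:].reshape(-1))
print(f"next-token loss: {loss.item():.3f}")
```

At generation time, the same kind of model would be fed only the text tokens and sampled one image token at a time, after which the variational autoencoder’s decoder turns the 32 x 32 grid of latent codes back into pixels.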
A lot of people from the community have written about DALL·E or played around with its interactive examples. Some of my favorites include:
I think DALL·E is the more interesting of the two models, but let’s also take a quick look at CLIP.