I recently started playing with DALL-E 2, which tries to generate an image to
match whatever text prompt you give it. Like its predecessor DALL-E, it relies
on CLIP, a model OpenAI trained on a huge collection of internet images and the
text found near them. I've experimented with a few
