Recently I've been experimenting with DALL-E 2, one of the models that uses CLIP
to generate images from my text descriptions. It was trained on internet text
and images, so there's a lot it can do, and a lot of ways it can remix the stuff it's seen.

I’m previewing OpenAI’s new API [https://beta.openai.com/], and like GPT-2, it
looked at a lot of internet text during training. In my last post
[https://aiweirdness.com/post/620645957819875328/this-is-the-openai-api-it-makes-spookily-good]
I showed how it can adapt to different prompts, in part because of how much it’s seen.