I was testing DALL-E 2 to see if it would be subject to some common incorrect
assumptions about the sizes of things. For example, if you ask people what size
a kiwi bird is, they tend to assume it's a smallish bird, maybe around the size
of a
Recently I've been experimenting with DALL-E 2, one of the models that uses CLIP
to generate images from my text descriptions. It was trained on internet text
and images, so there's a lot it can do, and a lot of ways it can remix the stuff
DALL-E (and other text-to-image generators) will often add text to their images
even when you don't ask for any. Ask for a picture of a Halifax Pier
[https://www.aiweirdness.com/the-terror-of-the-sea/] and it could end up covered
in messy writing, variously legible versions of "Halifax"
How would AI decorate an easter egg?
I've tried this before by training an image-generating model exclusively on
pictures of easter eggs I decorated (they came out plain, if a bit wobbly
[https://www.aiweirdness.com/nonexistent-easter-eggs-20-04-09/]).
I decided to see what I would get using a model
I remember when my hometown got one of these giant wooden playgrounds. It must
have been in the early 90s and a kid could get lost for hours in there.
Or injured or full of splinters or chromated copper arsenate I guess, which is
why you don't see
When I was a kid I looked forward to opening advent calendar doors in December,
although the pictures behind the doors were pretty forgettable. A bell. A
snowflake. If you were lucky, a squirrel.
So I thought I'd see if I could generate something a bit more interesting,
Here's "Ice Cream Planet Swirl", as generated by Pixray.
Full prompt: Ice Cream Planet Swirl #8bit #pixelart. Colors are chocolate, minty
green, and cream.

Pixray uses CLIP, which OpenAI trained on a bunch of internet
photos and associated text. CLIP acts as a judge, telling Pixray
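The judge-and-generator loop can be sketched in miniature. This is a toy stand-in, not Pixray's actual code: real CLIP returns a cosine similarity between image and text embeddings, and Pixray adjusts a VQGAN's latents with gradients rather than random tweaks. Here `clip_score` is a placeholder that just rewards pixels near a target value, so the accept-or-revert loop is the only part that mirrors the real thing.

```python
import random

def clip_score(image, prompt):
    # Toy placeholder for CLIP. The real model embeds the image and
    # the prompt and returns their cosine similarity; here we just
    # reward pixel values close to an arbitrary target.
    target = 0.8
    return -sum(abs(p - target) for p in image) / len(image)

def generate(prompt, steps=200, size=16, seed=0):
    # The CLIP+generator loop in miniature: propose a small change,
    # ask the judge to score it, keep it only if the score improves.
    rng = random.Random(seed)
    image = [rng.random() for _ in range(size)]
    best = clip_score(image, prompt)
    for _ in range(steps):
        i = rng.randrange(size)
        old = image[i]
        image[i] = min(1.0, max(0.0, old + rng.uniform(-0.1, 0.1)))
        new = clip_score(image, prompt)
        if new > best:
            best = new          # the judge approves; keep the tweak
        else:
            image[i] = old      # the judge says no; revert
    return image, best
```

The key design point survives the simplification: the generator never sees the prompt directly. It only gets a score back from the judge, which is why the images drift toward whatever CLIP associates with the words, not toward what the words literally mean.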
What do you get if you instruct an AI to turn a house into the most haunted
house in the world? What if you ask it for the LEAST haunted house? How does an
AI know what "haunted" looks like, anyways?
I did some experiments with CLIP+VQGAN
There's this baking competition I really like, and one of the elements in every
show is what they call the Technical Challenge.
In the Technical Challenge, Great British Bakeoff contestants have to bake
something they may never have seen before, based solely on a brief description
and a
Since OpenAI released CLIP, trained on internet pictures and their nearby text,
people have been using it to generate images. In all these methods - CLIP+Dall-E
[https://www.aiweirdness.com/the-drawings-of-dall-e/], CLIP+BigGAN
[https://www.aiweirdness.com/searching-for-bernie/], CLIP+FFT
[https://www.aiweirdness.com/lucid-deep-dreaming], CLIP+VQGAN
[https://www.