AI Weirdness

Curse your Valentine with these candy hearts

When I’ve generated candy heart messages before, they’ve been text only - definitely some assembly required. But methods of generating custom images from descriptions are getting better all the time. Could I use modern machine learning to generate new candy hearts?

The answer is no.

A dark background with a vaguely circular candy. It looks very lumpy with embossed lettering. If you knew it was supposed to say

For the image above, I gave the prompt “A candy heart that says Be Mine” to a program I’ve used to make AI-generated images before. Ryan Murdock’s Big Sleep program uses OpenAI’s CLIP algorithm to judge how well one of BigGAN’s generated images matches my caption, and to try to direct the generated BigGAN images toward a closer match. This candy heart, only roughly heart-shaped and weirdly fleshy, is the best match it found.

Here’s its attempt at “a candy heart that says Kiss Me”.

A red lumpy candy partially wrapped in cellophane. There's black lettering on it, but it's illegible. More than anything it looks like it says

It doesn’t seem to do any better when the text I ask for is more likely to appear on a candy heart.

Here’s “A candy heart that says HOLE.”

It's heart-shaped in the sense that it has a butt-like cleft on top. A weird yellow stain exudes from the candy. It says "HOLE".

Disturbingly legible.

I had wondered if I could get CLIP + BigGAN to generate brand-new candy heart messages, but that was even less effective. Here’s “A candy heart with a message”, whose message is apparently desperately crammed onto every millimeter of the candy, so deeply that the sugar begins to split.

A pink candy on a leafy green background. There are several lines of closely-spaced illegible white text on the candy.

The British version, “a conversation heart,” fared no better, but is on a bird for some reason so that’s nice.

The bird might be some sort of shorebird? The candy is an embossed circle on its breast, embossed with three lines of illegible text.

I wondered if specifying a shorter message might help it produce something more legible.

Here’s “A candy heart with a two word message.” I don’t know what this rune summons, but it probably has lots of legs.

The candy looks sugar-frosted and almost heart-shaped. Printed on the candy is a single black letter that looks like an f with a fishhook top.

“A message spelled in three candy hearts” receives only partial credit, depending on whether you think these adequately convey the message “RUN”.

Three pink lumpy shapes outlined in red on a sugary pink background. They might be faces or tiny lumps of gore - hard to tell.

I wondered about some recurring textures I was seeing on CLIP + BigGAN’s candy hearts. Was it a coincidence that many of the “candy hearts” looked weirdly lumpy and fatty and blood-colored? I asked it to find an optimum picture for “a heart”, and this is what it produced.

A red feathered organic background with black sooty lumps surrounding two red shapes that maybe resemble human organs? Definitely looks very organic.

And here’s what it generated when I asked for “a candy”.

Some kind of pink and green striped hard candy nestled in cellophane

These look so much like its “a candy heart that says Kiss Me” that I wonder if, in an attempt to get extra credit, the AI is searching for images that closely match not only “candy heart” but also “candy” and “heart” in as many simultaneous senses of these words as it can. It generated not only a candy heart, but a cellophane-wrapped heart.

This seems even clearer in its response to “A candy heart that says Stank Love.”

It looks like nothing more than a tightly cellophane wrapped human heart with the word Stannk standing out on it in veins.

Accuracy is only one route toward having CLIP agree that a particular image matches its caption. Sometimes it’s enough to combine the caption keywords. This seems to maybe work as a strategy on paint.wtf, a game where CLIP ranks your sketches. As of the time of writing, my drawing of “a goat that’s been turned into an oven” is ranked #1, ahead of a bunch of entries that are actual oven-goats. (My entry for “someone who knows how to fly”, a stick figure with big wings in the sky with butterflies and houseflies and a book, didn’t do nearly so well, so who knows.)

Draw a goat that's been turned into an oven. Global ranking: 1 out of 19 submissions. Sketch is of three goats standing on two ovens, a cookie sheet and a spatula nearby.

So what happens if I simply ask CLIP + BigGAN for “a candy heart”? Does it write a legible message? Or at least combine organs and cellophane wrappers in an interesting way? I regret to inform you that it does this:

It is a small brown candy on a furry pink background, looking very much like the butthole of a fluffy pink animal

I had fun combining candy hearts with a few other prompts (a candy heart from a robot, a candy heart for a giraffe, a candy heart that says LASERS, etc.). To read a bonus post with some of these results, become an AI Weirdness supporter! Or become a free subscriber to get new AI Weirdness posts in your inbox.

    • #neural networks
    • #candy hearts
    • #conversation hearts
    • #clip
    • #biggan
    • #cursed candy hearts
    • #cursed things
    • #cellophane
    • #things that look like buttholes but are not

Okay GPT-3: Candy hearts!

I’m continuing my Valentine’s Day tradition of getting huge neural nets to generate candy heart messages! The latest contender is GPT-3, which is ridiculously overpowered for the task of generating insipid two-word love messages.

The thing people tend to call “GPT-3” is actually an imprecise catchall term for a bunch of different trained models within OpenAI’s API umbrella. Usually we mean “DaVinci”, the most powerful of the models. And there’s even a variant of DaVinci that can follow instructions.

When I gave DaVinci the following prompt:

Generate 12 candy heart messages, a maximum of 10 characters long, in all caps. Do not include existing candy heart messages.
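
If you’re curious what that call looks like in code, here’s a minimal sketch using OpenAI’s completion API. The engine name “davinci-instruct-beta” is my assumption for the instruction-following DaVinci variant of the time, and the temperature value is illustrative; the post doesn’t specify settings.

import openai

openai.api_key = "sk-..."  # your API key goes here

response = openai.Completion.create(
    engine="davinci-instruct-beta",  # assumed name of the instruction-following variant
    prompt=(
        "Generate 12 candy heart messages, a maximum of 10 characters "
        "long, in all caps. Do not include existing candy heart messages."
    ),
    max_tokens=60,    # enough room for a dozen short messages
    temperature=0.8,  # illustrative value, not from the post
)
print(response["choices"][0]["text"])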

These are some of the messages it generated:

FOLLOW
HELP
I NEED
WHY
GGOD
I STILL
EVIL
RIBBON
UNBELIEF
SPACE
YEET
SAMOA

The DaVinci candy hearts I rejected included everything from the very predictable to the very much too long:

I LOVE YOU
BE HAPPY
IS YOU!
YOUR 1968 WIN
WE’VE DATED FOR 6 MYRIADS OF YEARS
I Wish You A Merry Christmas
TO CANOE THE NOSE
YOU BEAR THE PETALS
SHRUBBERY’S PRECIOUS
EVERYTHING WILL BE ALL BETTHER IN COCONUT

(Plus the occasional awful message, because even with candy hearts as the topic, it doesn’t always stay away from the terrible stuff it picked up in its general online training.)

[image]

But what about the other variants of GPT-3?

The tiniest is Ada. (Granted, it’s only tiny compared to its siblings - compared to previous neural nets I’ve worked with it’s still a monster). When I prompted it to complete “12 new candy heart messages of 2021:”, here’s (unedited) what it wrote.

12 new candy heart messages of 2021:
1. sweets:
2. dumplings
3. candy bars
4. apple pie
5. apple pie
6. apple pie
7. apple pie
8. apple pie
9. apple pie
10. apple pie
11. apple pie
12. apple pie

Next largest is Babbage. Here’s what it generated:

12 new candy heart messages of 2021:
1. I’m so glad you did this for me.
2. I’m glad that you’ll figure out a few things.
3. Can I come over tomorrow?
4. Can I stay?
5. Can I stay till April 3rd?
6. I found out the truth!
7. I’m sorry.
8. I’m afraid.
9. I’m happy you came to my rescue.
10. I’m happy you’re here.
11. I’m glad you don’t hate me anymore.
12. You know, it would be nice if you would say such things.

Just one notch below DaVinci itself is Curie:

12 new candy heart messages of 2021:
1. I am the best.
2. I am the best.
3. I am the best.
4. I am the best.
5. I am the best.
6. I am the best.
7. I am the best.
8. I am the best.
9. I am the best.
10. I am the best.
11. I am the best.
12. I am the best.

There’s a personality quiz in this somewhere.

A series of candy hearts that all say I am the best

I experimented with a few other prompts to get DaVinci to do weirder candy hearts, and the one that seemed to work the best was a short story in which far-future AIs got into candy hearts as a form of communication.

23. The sky is fading to purple.
24. Please move your bulk away from my hatch.
25. Our tentacles are more alike than you might think.

The resulting candy heart messages were far too long to fit on a human-sized candy heart, but I guess when the candy hearts can be transmitted intergalactically, they can also hold a little more text.

If you’re an AI Weirdness supporter you can read more of these as bonus content! Or become a free subscriber to get AI Weirdness in your inbox.

    • #neural networks
    • #gpt-3
    • #gpt-3 variants
    • #candy hearts
    • #conversation hearts
    • #apple pie
    • #i am the best
    • #not you are the best
    • #only i

Searching for Bernie

[image]

[“Leaning Tower of Pizza”, BigGAN steered by CLIP using Big Sleep]

I wrote earlier about DALL-E, an image generating algorithm recently developed by OpenAI. One part of DALL-E’s success is another algorithm called CLIP, which is essentially an art critic. Show CLIP a picture and a phrase, and it’ll return a score telling you how well it thinks the picture matches the phrase. You can see how that might be useful if you wanted to tell the difference between, say, a pizza and a calzone - you’d show it a picture of something and compare the scores for “this is a pizza” and “this is a calzone”.
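
To make the pizza-versus-calzone idea concrete, here’s a minimal sketch of scoring one image against two captions with OpenAI’s open-sourced CLIP package. The image filename is hypothetical:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# "mystery_food.jpg" is a stand-in for whatever image you want judged
image = preprocess(Image.open("mystery_food.jpg")).unsqueeze(0).to(device)
captions = clip.tokenize(["this is a pizza", "this is a calzone"]).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, captions)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # e.g. [[0.92, 0.08]] would be a vote for pizza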

How you come up with the pictures and captions is up to you - CLIP is merely the judge. But if you have a way to generate images, you can use CLIP to tell whether you’re getting closer to or farther from whatever you’re trying to generate. One example of this is Ryan Murdock’s Big Sleep program, which uses CLIP to steer BigGAN’s image generation. If I give Big Sleep the phrase “a cat singing a sea shanty”, it’ll start with some random guesses and then tweak the image based on CLIP’s feedback, searching for an image that better fits the prompt.
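
The core of that steering loop is simple enough to sketch: treat BigGAN’s latent inputs as parameters and climb toward a higher CLIP score. This is a stripped-down illustration of the idea, not Big Sleep’s actual code, which adds refinements like latent clamping, random crops, and proper image normalization:

import torch
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
critic, _ = clip.load("ViT-B/32", device=device)
critic = critic.float()  # keep everything in fp32 for the optimization

text = clip.tokenize(["a cat singing a sea shanty"]).to(device)
with torch.no_grad():
    text_features = critic.encode_text(text)

# Start from random guesses for BigGAN's noise and class inputs,
# then let CLIP's feedback nudge them toward a better match.
noise = torch.randn(1, 128, device=device, requires_grad=True)
classes = torch.randn(1, 1000, device=device, requires_grad=True)
optimizer = torch.optim.Adam([noise, classes], lr=0.05)

for step in range(500):
    image = gan(noise, torch.softmax(classes, dim=-1), truncation=1.0)
    # Rescale from [-1, 1] to [0, 1] and downsample to CLIP's 224x224
    # input (CLIP's usual channel normalization is skipped for brevity)
    small = torch.nn.functional.interpolate((image + 1) / 2, size=224, mode="bilinear")
    similarity = torch.cosine_similarity(critic.encode_image(small), text_features)
    loss = -similarity.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()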

So its first try at “a cat singing a sea shanty” might be this:

[image]

After maybe 20 minutes of crunching on a big processor, it finds a better image, looking something like this:

[image]

It isn’t nearly as good as the DALL-E results I’ve seen (although to be fair, they posted the best of tens to thousands of tries, and I haven’t seen DALL-E try “a cat singing a sea shanty”). But it’s an interesting way of exploring BigGAN’s latent space. Here’s “a giraffe in the Great British Bakeoff”.

[image]

I’ve mentioned before that BigGAN was trained on about 1000 categories of images from ImageNet, all hand-labeled by hired humans. “Giraffe” and “Great British Bakeoff” are not among the ImageNet categories, but CLIP can still give feedback about what looks like a giraffe, and what looks like an outdoor baking show, because CLIP was trained on a huge set of images and nearby text scraped from the internet. The upside of this is that CLIP knows how to nudge BigGAN into finding something vaguely like “My Little Pony Friendship Is Magic in outer space”, even if nothing like this was in the BigGAN training images.

[image]

The downside of this is that CLIP has seen a skewed picture of the world through its internet training data. In their paper, the makers of CLIP discuss a bunch of biases that CLIP learned, including classifying people as criminals based on their appearance or making assumptions about what a doctor looks like. If I had given it terrible prompts it would have helpfully done its best to fulfill them.

Many times, the CLIP-BigGAN combination steered out of control rather than arriving anywhere. “Spiderman delivering pizza” looked at first like it was refining a drawing of a technicolor emu:

[image]

then dissolved into sudden chaos:

[image]

heroically refined that chaos into something resembling spiderman and a single slice:

[image]

and then dissolved suddenly into white nothingness from which it never recovered. This chaotic disaster happens when the search wanders off into extreme values and is usually accompanied by intense strobing colors and huge areas of blank, or spots, or stripes. And it happened all the time. When I gave it the AI-generated drawing prompt “Coots of magic”, things were looking promising for a few minutes:

[image]

But in the very next step it collapsed into this:

[image]

A “professional high quality illustration of a cat singing a sea shanty” got this far:

[image]

before turning into this neon laserfest:

[image]

In other cases, progress would just stop somewhere weird.

“Tyrannosaurus Rex with lasers” became ever more finely textured but just looked like a dog.

[image]

This is why AI programmers tend to laugh at the idea that an AI left running long enough would eventually become superintelligent. It’s hard to get an AI to keep progressing.

I had particular difficulty trying to get it to find a photo of Bernie Sanders. In three different trials I would get neon blobs from the get-go. Finally CLIP-BigGAN went with a strategy that has occasionally paid off for it: if you can’t make it well, make it very tiny. I give you “a photo of Bernie Sanders sitting on a chair and wearing mittens”.

[image]

For more examples, including when I tried to get it NOT to generate a picture of a pink elephant, become an AI Weirdness supporter! Or become a free subscriber to get AI Weirdness in your inbox.

“A golden retriever in the Great British Bakeoff”

[image]
    • #clip
    • #BigGAN
    • #big sleep
    • #neural networks
    • #GAN
    • #GBBO
    • #sea shanties
    • #bernie sanders meme
    • #if you can't make it well make it tiny
    • #lasers
    • #chaos laserfest

AI Weirdness in 2020

Now that it looks like 2020 is finally truly over, I thought I’d follow up on a few of my favorite posts, especially the ones that have fun updates. Starting with:

Headlines of 2020

[image]

Not only did none of the generated headlines manage to come true in the last weeks of 2020 (I did keep an eye out for the 20-foot giant penguin), but there’s now a quiz to see if you can tell the difference between real and fake headlines.

Sing a song of Rudolph

[image]

Goodness, how downy is Rudolph’s hide
He has, oh his forehead, so plump and smooth and divine
He has a little black snout and tiny little hooves
Benevolence makes foolish young children go straight to sleep

I discovered that although I can use AI to generate new carols about Rudolph, they’re all pretty unsettling. After my blog post I tried an experimental model that’s supposed to respond to commands, not examples, and although I’m impressed it could write a completely new carol AND a pronunciation guide in the International Phonetic Alphabet, both its carol and its pronunciation are extremely cursed.

[image]

One of my favorite things about running this blog is when talented humans engage earnestly with the weird outputs of neural nets. People made some amazing covers of the neural net carols:

An entire thread of amazing choral arrangements - I recommend starting with this one.

“Rudolph versus Mars”

“O Come Rudolph, Come” (my cat was VERY confused when they got to the “joyful noise” in the chorus)

Rudolph the Red-Nosed Reindeer with its red belly - despite the title and thumbnail, this video is a really good explainer of how to use OpenAI’s Jukebox to compose new music.

”The Wretched Flesh of Man” - an Extremely Cursed arrangement of this same Rudolph carol.

And The Forever Now’s arrangement is so deceptively, chillingly calm.

Botober

I generated drawing prompts for October for the second time this year, and was once again blown away by the creativity of all the artists (and some microfic and poetry writers). Check out the #botober tag on instagram, tumblr, and twitter! (This is a screenshot of just a few of the drawings on instagram, featuring “coots of magic”, “mole delivering pizza to a tiny pterodactyl”, “queen squid waving as she rules a planet known for its fine linen”, and “an oddly specific book about spiders”):

[image]

Mary, don’t eat that

I also went back to my blog’s roots and generated some truly horrible kitchen disasters.

Perhaps you’d like to see what happens when you train a neural net on vintage jello-centric recipes?

Or perhaps you’d like to see how very NOT comforting the Great British Bake-off can get when a GAN tries to generate screenshots?

Or both, encapsulated in Week 11 of this neural net-completed list of GBBO themes:

[image]

That’s enough delight, perhaps, for one year. I’m sure 2021 will be far less unsettling.

Become an AI Weirdness supporter and get bonus content in which I revisited the GBBO themes, this time using GPT-3. Are you ready for Doily Week?

    • #neural networks
    • #2020 review
    • #christmas carols
    • #christmas songs
    • #botober
    • #you are not ready for doily week
    • #nobody is ready for doily week
    • #what is the plural of doily we are going to need to know this
    • #cursed phonetic alphabet

The drawings of DALL-E

What type of giraffe would you like today?

Text prompt is "A professional high quality illustration of a giraffe imitating a robot, a giraffe made of robot." There are a bunch of surprisingly competent robot giraffes - segmented, metallic, giraffe-shaped. Other options for "robot" include isopod, eel, pikachu, octopus, etc.

Last week OpenAI published a blog post previewing DALL-E, a new neural network that they trained to generate pictures from text descriptions. I’ve written about past algorithms that have tried to make drawings to order, but struggled to produce recognizable humans and giraffes, or struggled to limit themselves to just one clock:

[image]

DALL-E not only can do just one clock, but can apparently customize the color and shape, or even make it look like other things. (I am secretly pleased to see that its clocks still look very haunted in general.)

Prompt: a clock in the style of a strawberry. a clock imitating a strawberry. Images: All strawberry red with bright green tops, most of which have clock faces. The strawberries tend to have a cheerful plastic sheen. The clock faces tend to have extra hands and melting letters.

There aren’t a lot of details yet on how DALL-E works, or what exactly it was trained on, but the blog post shows how it responded to hundreds of example prompts (it shows the best 32 of 512 pictures, as ranked by CLIP, another algorithm they just released). Someone designed some really fun prompts.

Prompt: "a snail made of russian doll. a snail with the texture of a russian doll." result: sometimes the snail has a russian doll for a shell, sometimes the russian doll is a little bit snail shaped, and there's one russian doll that just has a snail shaped shadow
Prompt: "an illustration of a baby capybara in a wizard hat playing a grand piano." Result: They are indeed round little capybaras in wizard hats, some of which even have little comets or stars on them. And they are all happily playing grand pianos, usually with their little paws in exactly the right place.

If my investigations are restricted to a bunch of pre-generated prompts, I highly appreciate that many of the prompts are very weird.

And the team also shows some examples of the neural network failing.

Here it’s trying to do “the exact same teapot intact on the top broken on the bottom,” where it was given a picture of a red teapot and asked to fill in the bottom half of the image as a broken version. There seems to be some confusion between “broken” and “cursed”.

Some of the "broken" teapots are in pieces and others have holes. But still others are reshaped with multiple handles or spouts, or turned white in places, or melted

It also has a problem with “in tiny size”.

A light blue teakettle, when transformed to "tiny", sometimes turns into multiple smaller teapots, or grows an extra handle and spout, or turns into a crock pot or a curling puck for some reason

Its animal chimeras are also sometimes interesting failures. I’m not sure how I would have drawn a giraffe spider chimera, but it wouldn’t have been any of those, yet who is to say it is wrong, exactly?

Prompt: "A professional high quality illustration of a spider giraffe chimera. A spider imitating a giraffe. A spider made of giraffe. Result: giraffe spiders are mostly head and body of a giraffe with a tangle of angular legs. sometimes the legs also emerge from the giraffe's head

We don’t know much yet about DALL-E’s training data. We do know it was collected not from pictures that people deliberately labeled (like the training data for algorithms like BigGAN) but from the context in which people used images online. In that way it’s kind of like GPT-2 and GPT-3, which draw connections between the way people use different bits of text online. In fact, the way it builds images is surprisingly similar to the way GPT-3 builds text.

What these experiments show is that DALL-E is creating new images, not copying images it saw in its online training data. If you search for “capybara in a wizard hat playing a grand piano”, you don’t (at least prior to this blog post) find anything like this that it could have copied.

DALL-E does seem to find it easier to make images that are like what it saw online. Ask it to draw “a loft bedroom with a bed next to a nightstand. there is a cat standing beside the bed” and the cat will often end up ON the bed instead because cats.

The loft window is similar in all cases and there is usually a bed of some sort, but very often the cat is not standing beside the bed but sitting or lying on the bed. Or even standing on the bed looking curiously at the camera.

I’m also rather intrigued by how the experimenters reported they got better results if they repeated themselves in the captions, or if they specified that the images should be professional quality - apparently DALL-E knows how to produce terrible drawings too, and doesn’t know you want the good ones unless you tell it.

Prompt: "A professional high quality emoji of a scary cup of boba". Result: Boba often has angry eyebrows or jagged teeth. One is crumpled and rotting on the bottom.

One thing these experiments DON’T show is the less-fun side of what DALL-E can probably do. There’s a good reason why all of the pre-filled prompts are of animals and food and household items. Depending on how they filtered its training data (if at all), there’s an excellent chance DALL-E is able to draw very violent pictures, or helpfully supply illustrations to go with racist image descriptions. Ask it to generate “doctor” or “manager” and there’s a good chance its drawings will show racial and/or gender bias. Train a neural net on huge unfiltered chunks of the internet, and some pretty awful stuff will come along for the ride. The DALL-E blog post doesn’t discuss ethical issues and bias other than to say they’ll look into it in the future - there is undoubtedly a lot to unpack.

So if DALL-E is eventually released, it might only be to certain users under strict terms of service. (For the record, I would dearly love to try some experiments with DALL-E.) In the meantime, there’s a lot of fun to be had with the examples in the blog post.

Become an AI Weirdness supporter to read a few more of my favorite examples as bonus material! 

    • #neural networks
    • #dall-e
    • #openai
    • #image generation
    • #giraffe-spider
    • #giraffe
    • #evil boba
    • #cursed things
    • #cursed tiny things

The Epic History of the Earth

“When a Billion Years Disappeared”

“That Time It Rained for Two Million Years”

“When Giant Scorpions Swarmed the Seas”

One thing I like about the PBS Eons show is its epic headlines. Not least because they aren’t actually overdramatic. It was technically 1.2 billion years of Earth’s history that disappeared in the Great Unconformity. It rained so hard for so long that a bunch of stuff went extinct. The giant sea scorpions could grow to 2.5 m long.

So, what might the next billion years of Earth’s history hold?

I decided to use the completely scientific technique of giving a list of PBS Eons episode titles to GPT-3, a neural network whose task is to predict new text that goes with whatever came before. Here are some of the neural net’s predicted episodes in Earth’s history/future:

When the World Was Made of Jelly
How the Moon Took Back the Sea
When Whales Ruled the Skies
How a Single Species of Moth Almost Destroyed the Entire Human Race
That Time the Earth Almost Turned Into a Giant Pufferfish
How Spiders Conquered the Seas
How a Soda Can Could Have Killed the Dinosaurs
When the Earth Was a Puddle
When We Almost Always Had Butterflies
How a Cat Almost Destroyed the World’s Largest Continent
How the Biggest Snake Ever Almost Ate the World’s Biggest Croc
How the World Almost Had a Binary Sun
The Island of Swords

I sampled these episodes at a temperature setting of 0.8, which governs how often the neural network is allowed to use lower-probability text (a minimal sketch of how temperature sampling works appears after the next list). The higher the temperature setting, the higher the chaos. But it turns out chaos doesn’t necessarily make the output weirder, just different from the prediction. In early neural nets that meant different from the English they had been trained on, drifting toward more erratic spelling and punctuation. In GPT-3, chaos seems to mean drifting toward other text that it saw on the internet - and when your starting point is very weird, moving in any other direction might make the output LESS weird. That’s mostly what I found at the highest temperature setting of 1.0, although there were a few notable exceptions:

When Salmon Grew Swords and Tried to Take Over the World
How Darth Vader Helped with the Founding of America
That Time a Mutant Super Robin Nearly Wiped Out England
The Vegetarian Dinosaurs Who Refused to Die
When the Biggest Terrible Prehistoric Thing Glared Down at Earth
Hilda Doolittle and the Potteries That Await Us
It’s Not Easy With Floofs on the Moon
Triceratops and Little-known Fact About Giraffes
Megahippus Rode Saber-Toothed Cat Elevator
That Time Giant Hungry Sharks Exploded Through Florida
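
For the curious, temperature is simple to sketch: divide the model’s raw scores by the temperature before turning them into probabilities, so low temperatures sharpen the distribution and high temperatures flatten it. This is a generic Python illustration, not GPT-3’s actual sampling code:

import numpy as np

def sample_with_temperature(logits, temperature=0.8):
    # temperature < 1 favors the likeliest tokens; 1.0 samples the
    # model's raw distribution; higher values flatten it toward chaos
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)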

I was also rather fond of the headlines that SOUND epic, yet actually aren’t all that unusual once you think about them. One major source of this effect is the fact that birds are technically a kind of dinosaur.

That Time Humans Chased Down and Ate a Dinosaur
When Crabs Got as Big as Dinosaurs
That Time A Spider Hunted Dinosaurs
How the World’s Drunkest Animal Changed the Weather
The Unkillable Rock

Supporters get bonus content: Some output from temperature setting 0.8, which produced such gems as “How a Blue Whale’s Farts Changed the World”.

    • #gpt-3
    • #neural networks
    • #pbs eons
    • #paleontology
    • #prehistory
    • #mutant super robin
    • #as you do
    • #salmon with swords
    • #because why not
    • #PuddleEarth

Alternate Rudolphs

[image]

I’ve been experimenting with generating Christmas carols using machine learning algorithms of various sizes. The smallest AIs, trained from scratch on a set of carols, tended to get confused about what exactly the carols are celebrating. GPT-2, larger and with some internet pretraining… still got confused. Its carol about Rudolph “the All-Gracious King of All the Earth” was unsettling enough before Joe Zieja’s dramatic reading and The Forever Now’s eerily calm arrangement.

Now in 2020 we have GPT-3, trained on so much of the internet that my challenge is now to stop it from simply repeating carols that it has memorized. Give it just three example carols and it will respond with another from the internet, which is technically exactly what it was rewarded for during training. To get it to generate new carols, I discovered I could give it three existing carols (I chose Hark! The Herald Angels Sing (“bruise in us the serpent’s head”), All My Heart This Night Rejoices (“sin away and for aye full atonement maketh”), and Run Run Rudolph (“and away went Rudolph a whizzing like a merry-go-round”)), plus just the title of a nonexistent carol. It would generate, for example:

Mild is Rudolph

Mild is Rudolph’s image in the snow
He has a shiny nose and knows how to cozy-up-nose
He has a little red nose and tiny little hooves
He persuaded a mischievous infant to go straight to sleep

Goodness, how downy is Rudolph’s hide
He has, oh his forehead, so plump and smooth and divine
He has a little black snout and tiny little hooves
Benevolence makes foolish young children go straight to sleep

Shy is Rudolph’s figure when he’s near
He has a twinkle, oh in his eye, and he’s kind of lean
He has a bright shiny nose, and tiny little hooves
He can make any child go straight to sleep

The AI has seen the existing Rudolph lore many times, however, so it would tend to drift back into singing Rudolph the Red-Nosed Reindeer. Sometimes it would only drift back partway.

The Carol of the Rudolphs

Rudolf the red-nosed reindeer
had a very shiny nose.
He kept it polished nice and clean with a white piece of soap.
All of the other reindeer
used to laugh and joke,
and call him names like ugly, Zonker, Dunderhead.
But their laughter stopped
when they saw him get
him speed on down the runway.

For he could change the course
of the wind
with his nose, and people called him
Very Bright.

Rudolf, the darting reindeer
knew the trail through the woods,
even in the fog he could spot his mark.
Eeee-merry, eeee-merry Christmas,
Yes, that is his song.

He could steer a craftsman’s plane,
Could lead a camel home out of a trance.
His tender little bat-like ears
could hear a whisper, from ten miles away.

He could roar like a lion.
He could fly, gliding with ease.
His little rusty wheel, he would use
to help create the most wonderful wheel of cheese.

Yes, Rudolf, the cherry nosed reindeer,
could do all of these things and more,
But the only thing his nose could
not do was smell. Delectable Waffles
Who knows what’s in Rudolph’s nose?
Only Rudolph knows –
And Rudolph won’t tell!

[image]

The “sing the praises of Rudolph” directive seems to come through very well, especially if I give it an appropriate title.

All Hail to Rudolph

All hail to Rudolph
The legend of Christmas
The most famous reindeer of all

All hail to Rudolph
Happy all the year
From everyone we see

Heroes come and heroes go
But Rudolph stays forevermore
Oh boys and girls of every age
Can’t you hear the sleigh bells ringing?

Oh look! There’s Santa and Parson Brown
Defying the laws of time and space
Now prepare yourself and your family my boy
Prepare to get your Christmas fix

Now Rudolph goes smashing through the roof
Just like a power lift with a candy-cane moustache
And the deer’s nose is dead-on, deadly accurate
Pow! Now it’s Rudolph versus Mars

All hail this joyful night
All hail the night folks sing
All hail to Rudolph
The legend of Christmas
The most famous reindeer of all

O Come Rudolph, Come

O Come Rudolph, Come Ye Faithful
Oh come, come holly, clad in white, Oh come ye faithful.
Oh come aloud! Oh come and join our choir,
Oh make a joyful noise till earth and heaven ring.

Caw caw caw oh, caw caw caw oh, Caw holly, Caw plum this Christmas.
Oh, caw caw caw, oh caw oh, caw oh oh oh, oh.

Even if I give it a title like “Rudolph is the Worst” it knows that the story of Rudolph ends with redemption. (That is, unless it goes into truly awful territory, there being a lot of ways of being the Worst in its internet training data.)

Rudolph is The Worst Reindeer

Rudolph the red-nosed reindeer
Was a stubborn old reindeer.
Never would you find him
Participating in Christmas.
Never entered his head,
Of decorating a sleigh.

Then along came Bumbleboo
One foggy Christmas Eve,
And he said, “Rudolph with your nose so bright,
Won’t you guide my sleigh tonight?”

Then all the other reindeer,
Gathered round to admiere him,
And they nodded their heads,
As they shouted out in cheer,
“Rudolph the red-nosed reindeer
You’ll go down in history.”

Other times it seemed to be going for counting songs and, like GPT-2, it gets weird when it counts things. Here my title prompt was “Rudolph of”.

Rudolph of Orange (Medley)

There are 15 different colors,
That you see within the shirt.
There is a booklet,
Which identifies each one.

You’re looking at a number one,
While the kids are knocking on your door.
Then we started laughing,
When we realized that Rudolph was near!

Courtesy of the index theory,
There are only twenty eight!
July is the eighth,
And that’s when Christmas is!

REFRAIN:

And 1 wrapped a nose
(Time now for teasing and for fun)
2 pictures, horns a cap,
3 hooves and legs and feet,
4 antlers sweeming!
5 eyes,
6 nostrils,
7 buttons, zipper,
8 bolts and bumpers,
9 flashing lights on his cap,
10 red nose,
11 furry chin,
12 rubsy ears,
13 plastic eyes,
14 laughing so joyously.
15 steps and a twisted reindeer train!
16 frosted incislotes,
17 dreamily red bow,
18 nose light,
19 velco on reins,
20 candy apple red.

Two reindeer he’ll need to pull him through!
It’ll be commercial Christmas,
For years to come,
When Rudolph gets his new reindeer team.

Supporters get bonus content: A few more carols (including “O Little Rudolph” which involves ghosts and ghouls somehow).

    • #neural networks
    • #gpt-3
    • #carols
    • #christmas carols
    • #christmas songs
    • #rudolph
    • #rudolph of orange
    • #incislotes
    • #sweeming
    • #bumbleboo

2020 headlines

When Killer Orchids Attack: How the Deadly Corpse Orchid Is Turning Up in U.S. Backyards

Midway through 2020, people started suggesting that I train a neural net on 2020 headlines, and I was skeptical that there would be enough weird ones to make a decent project. Then 2020 continued to be 2020. We started to get headlines such as:

Mysterious alien-like monolith discovered in Utah desert
What you need to know about ‘murder hornets’
The Mystery of The Platypus Deepens With The Discovery of Its Biofluorescent Fur
Famous Vienna hotel turns to drive-in cake

And by the time we got to December, with more bioluminescent marsupials, mysterious monoliths, and the Galactic Federation, my course was clear.

I decided to use GPT-3, whose internet training data was all collected before October 2019. In other words, before 2020 happened. I gave GPT-3 a list of 2020 headlines and let it add more text to the list. Having no data from 2020, would it even treat these as real headlines?

A lot of the generated headlines were from the natural world, in part because my list of examples tended to favor those. I really can’t tell if these are attempts at novel but realistic headlines, or attempts to completely goof around.


Lycoperdons, the tiny deadly puffballs, are on the march again

Swarming bears are given deadly slingshots by Russian hunters
Good news / Bad news about crows in Burlington schools
When Killer Orchids Attack: How the Deadly Corpse Orchid Is Turning Up in U.S. Backyards
Apparently There is No Good Photo of the “Melbourne Spider”
Desert mystery - what is this rock that looks like a plane ticket?
Fears rise of new dwarf hippo public relations disaster after rise in sightings
Mysterious Origin of Monster Deep-sea Toads Solved
What are 'dragon cats’ and why they are getting hyped?
Massive radioactive sinkhole continues to grow in Russia
Why scientists believe the 'Killer Raccoons of the Pacific Northwest’ are responsible for this kill
Lycoperdons, the tiny deadly puffballs, are on the march again
From deep in the Earth, darkness “boils” to the surface
A sassy tardigrade previews new Doctor Who

With other generated headlines, it looks more like GPT-3 assessed the 2020 headlines and went, “Murder hornets? Yeah okay and hellhounds too.” (This is clearly an algorithmic error; hellhounds are MUCH more likely to be found at the Denver International Airport.)


Reports of a '10-foot tall penguin' roaming about on the Family Islands in the Solomon Islands are investigated

Proof that a hellhound is living at Los Angeles Airport has been provided in the photos below
First naked bogman has been found out walking the great British countryside
Reports of a '10-foot tall penguin’ roaming about on the Family Islands in the Solomon Islands are investigated
Albino green sea monster filmed … at the wrong time
Scientists discover the alien ant farm under the Antarctic ice
Lizardman: The Terrifying Tale of the Lizard Man of Scape Ore Swamp

I like its science/technology headlines. So many good little bots.


'Lost' NASA space shuttle repair robot comes back to life after four decades in complete darkness

Researchers fear our solar system is being interfered with by alien space lasers
Mars rover finds only artificial organic matter on surface
Scientists Recreate Giant Liopleurodon In Lab
'Lost’ NASA space shuttle repair robot comes back to life after four decades in complete darkness
Hey, Curiosity Rover, Don’t Go Chasing Shadow Bugs
Small drone takes on six killer sharks in Shipwreck (Video)
Mysterious Hole Observed On the Sun
How did we miss this massive alien spaceship lurking on an abandoned asteroid?
Scorpions on the dark side of the moon
NASA: Has Earth 'Halted’ And Stopped Spinning - If What NASA States is True, We Must All Celebrate! NOW!

I only showed the most interesting sorts of headlines; plenty were just ordinary news from 2019 or earlier. Others were new, technically, but not exactly news:


Voyager 1 images: See contents of interstellar space

Breathing Is “Supercharging” Brain
Voyager 1 images: See contents of interstellar space
Scientists Take Photos In See-Through Scuba Gear, and This Is What They Got
Scientists do science… … and then write about it!
Scientists say climate change deniers should sit on a beach until it is underwater


Hey, Curiosity Rover, Don't Go Chasing Shadow Bugs

Subscribers get bonus content: I tried another experiment, this time producing more targeted headlines around the theme of… tumbleweeds. (It turns out there were a LOT of dramatic tumbleweed headlines in 2020, due to their habit of descending on various places in apocalyptic numbers.)

On the subject of GOOD things that have existed in 2020: please consider my book on AI, You Look Like a Thing and I Love You: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore

    • #neural networks
    • #gpt-3

A big neural net reviews a smaller neural net’s recipes

I’ve used various neural networks to generate recipes, with varying degrees of success. My earliest recipes were generated with char-rnn, which had to learn everything - spelling, punctuation, words - entirely from scratch. Its recipes were terrible (Swamp Peef and Cheese, anyone? Or Chocolate Chicken Chicken Cake?). Later, when I started using GPT-2, I was using a neural net that had already seen a lot of recipes online during internet pre-training, and its recipes looked a lot more realistic (although not entirely realistic - there was that recipe for Black Forest Cake that suggested pouring the batter into a toaster, or the jello recipe that starts with “remove all internal rinds”).

GPT-3, an even more powerful text-generating neural net, has seen so many recipes during its internet pre-training that its recipes are boringly plausible. Bake its Black Forest Cake recipe and you’d probably actually get cake at the end, even if it’s not the best cake in the world. Its recipes aren’t funny at a glance, which is why I haven’t posted any of them - until internet linguist Gretchen McCulloch suggested that, rather than have GPT-3 generate its own recipes, I have it rate char-rnn’s recipes. Readers, it worked.

I prompted GPT-3 with two example recipes from allrecipes, each followed by 3 of their human-written reviews. Then, for the third recipe I inserted one of char-rnn’s recipes:

Chocolate Chocolate Cake

1 cup sugar
¼ cup butter
1 cup margarine
1 cup cocoa
1 teaspoon vanilla
1 cup margarine
½ cup pecans, chopped
1 teaspoon vanilla
1 egg yolk
½ cup sugar
¼ cup chopped pecans
1 cup sugar
1 teaspoon vanilla
1 cup cocoa
1 teaspoon vanilla

Combine flour, sugar, baking powder, salt, salt, and salt in a large bowl. Cook over medium heat, stirring constantly, until mixture boils and stiff peaks form. Add milk and cook until mixture is completely melted. Remove from heat and stir in sugar and sugar. Add eggs and vanilla. Beat until smooth. Spread in batter. Sprinkle with pecans.

Yield: 1 servings

And then I let GPT-3 add reviews. Would it mention the excessive lakes of vanilla? The complete lack of flour? The foolhardy reliance on the heroic efforts of a single egg yolk to hold things together?
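
In API terms, the setup looked roughly like this sketch. The placeholder strings, the “Reviews:” separator, and the temperature value are my illustrative assumptions; the post only specifies the few-shot structure and the DaVinci model:

import openai

# Placeholders standing in for the real prompt pieces: two full
# allrecipes recipes, each followed by three human-written reviews,
# and then one of char-rnn's recipes with its reviews left blank.
example_block_1 = "Recipe: ...\n\nReviews:\n...\n\n"
example_block_2 = "Recipe: ...\n\nReviews:\n...\n\n"
char_rnn_recipe = "Chocolate Chocolate Cake\n...\nYield: 1 servings\n"

response = openai.Completion.create(
    engine="davinci",
    prompt=example_block_1 + example_block_2 + char_rnn_recipe + "\nReviews:\n",
    max_tokens=300,
    temperature=0.8,  # illustrative; the post doesn't give a setting
)
print(response["choices"][0]["text"])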

oxtailsandturnovers
Rating: 5 stars
03/04/2007
delicious chocolate recipe!!! and the perfect balance between cocoa and chocolate. and they sure were LONGNECKS! big time rave here. lazy w birthdays w friends and family, try this recipe for something chocolate sweet the ppl will talk about over and over.

AKEMIYUNA
Rating: 5 stars appraisal
12/03/2008
Excellent for the kids Halloween party. Everyone got so thrilled by seeing something new on Chocolate in sponge. He even likes nuts separately but it will taste good..

BYOMFYABF
Rating: 2 stars
10/01/2008
I could only give it a 2 as I had to make changes to it due to ingredients I had. First, I used Dove Dark Chocolate cocoa instead of Hershey’s, no nuts, and the mashed bananas and caramel topping were a must. I did use coffee instead of the hot water, and dark margarine instead of regular, therefore I could not use light whipped cream, but I made homemade whipped cream. It had WAY too much flour in it, but i guess it could have been me measuring heavier than I should have. I also only used 2 cups of flour. it was my first bundt cake, so it was huge!!!

Here’s another batch of reviews, this time of a more chaotic char-rnn recipe:

Simple Soup Alaborate Soup Pie

6 each chopped white perserves; ** (optional but ring)
½ cup chopped fresh marjoram
½ teaspoon dried thyme
1/3 teaspoon salt
½ cup thinly sliced onion
¼ cup onion (chopped)
½ teaspoon garlic; minced
2 small carrots
6 can dry white wine
1 tabasco sauce
1 tablespoon sugar
½ cup flour, whipped topping
2 tablespoon chili powder

Marinate: Serves 4.
Whip and short under boiling. Add the sliced mushrooms and peppers and stir to coat. Stir the sugar and carrots. In large bowl, beat the egg to make a 1 ½ tablespoons of the flour. Add shanks and stir it well. Spoon into a frying pan before serving.

Recipe By : DonajehLi Koubmanda

Yield: 8 servings

JEFFREY97389
Rating: 2 stars
03/03/2006
Even though everyone said this was great, I thought it was just ‘OK!’ The perserves gave a strange flavor to the sauce.

MATTJANDTRACER
Rating: 2 stars
01/02/2009
This recipe sounded good. When it came out it tasted a bit like a casserole and not a stew. I used venison instead of deer and pork. I liked the way it turned out, but it wasnt very French.

Hammy
Rating: 5 stars
The g iving this as a gift as it takes about 10 less minutes but using American cheese and not blended with parsley flakes, lime-flavored gelatin, which is my husband’s favorite dish from SuperValu. Just happens to be the creamy sauce. Don’t worry about a pie crust.

In fact, the more chaotic the char-rnn recipe, the more chaotic the reviews tend to be, as if GPT-3 has picked up on the chaos and is trying to somehow produce reviews that could plausibly be written by whatever weird humans are visiting this mangled recipe website.

Geilded Peanuts & Crabmeat Basil

3 beef, boneless chops, or thyme,
1 lamber and flince of fresh diced in diagonal
1 parsley
2 large leeks (for garnish)
¾ teaspoon sesame oil, seeded and peeled
1 any jeans'l-tortilla
1 pepper or cinnamon
¼ cup baesly pinto beans. (Crushed and thawed; minced fine you recipe)
1 tomato base
1 fat in the last 5 inch plates
1 garlic clove; crushed
¼ cup dijon mustard

In a large bowl set aside. Remove the sides from the heat and saute with a fork and cover and cook on Low for 6 to 6 hours.

Yield: 6 servings

Morales
Rating: 0 stars
03/30/2008
I had high expectations for this recipe, but it was an outright flop. I don’t know if the measurements were somehow incorrect or what, but I ended up with a watery mess that certainly wasn’t crab salad. My girlfriend, who is a new chef liked it fine, but I really wish that the recipe didn’t claim that it made six servings when it really made enough for one, if that’s what we were going to call it.

Carole
Rating: 5 stars
Glorious sobs! This recipe will be delicious for a minumum of 1½ Hours to maximum 2½ hours Time to bring lots of fresh sized potatoes in the house or grow in the age may become quite famous

Amanda
Rating: 3 stars
11/13/2009
My friend is having a birthday party and will celebrate with Camping. She’s in charge of the dishes and wanted to serve food at the campfire. I’ll 10th grade, my easy information for her party. It will not sound strange anyway test this task with a note to work rationally.What to take trees, portable tables for chairs, the main meal, food. I personally love the taste of it but has anyone seen a shepherd (wasn’t any spices), oysters. It is sufficient

It does kind of make sense that GPT-3’s recipe reviews aren’t directly complaining about the utter weirdness of char-rnn’s recipes. Probably very few of the recipe reviews in GPT-3’s training data involved baffled complaints about nonexistent or repeated ingredients, or apocalyptic levels of recipe failure. So GPT-3 has learned that even the critical reviews tend to fall within a certain sentiment range. It’s interesting that it tries to emulate char-rnn’s primitive neural net glitchiness - I’ve noticed before that GPT-3 seems to be capable of pretending to be a much simpler neural net.

Subscribers get bonus content: After generating a few reviews, GPT-3 would sometimes go on to generate new recipes (such as “BBQ Cheeseburger Salad” which contains lime jello, whipped topping, and sliced black beans). It may have been deliberately trying to make them terrible to fit in with char-rnn’s recipes.

Speaking of AI-generated recipes: there are some absolute disasters in my book on AI, You Look Like a Thing and I Love You: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore


    • #neural networks
    • #gpt-3
    • #char-rnn
    • #recipes

Post-human flirting

In Henry J. Wehman’s The Mystery of Love, Courtship and Marriage Explained (1890) there are charts of secret messages Victorian flirters could send with fans, parasols, gloves, and handkerchiefs.

Whether or not the handkerchief codes were ever actually in use (or mainly existed to sell books), twitter user @_irishjars noticed that they might make good fodder for neural net imitation:


Drawing across the lips - Desirous of an acquaintance.
Drawing across the eyes - I am sorry.
Taking it by the centre - You are too willing.
Dropping - We will be friends.
Twirling in both hands - Indifference.
Drawing it across the cheek - I love you.
Drawing it through the hands - I hate you.
Letting it rest on the right cheek - Yes.
Letting it rest on the left cheek - No.
Twirling it in the left hand - I wish to be rid of you.
Twirling it in the right hand - I love another.
Folding it - I wish to speak with you.
Drawing it across the forehead - We are watched.
Over the shoulder - Follow me.
Opposite corners in both hands - Wait for me.
Placing it on the right ear - You have changed.
Letting it remain on the eyes - You are cruel.
Winding around the forefinger - I am engaged.
Winding around the third finger - I am married.
Putting it in the pocket - No more at present.

So I gave the list above to GPT-3, asking it to complete the list with more codes. These seem useful.


Dropping it - Disengage.
Twirling it about the thumb - Expression of contempt.
Twirling it behind the ear - Behold! there is no one!
Burning it - I hate you.
Licking it - Expression of doubt.
Butting it with the head - I am a witch.

But most of the suggestions seemed plausibly ordinary (or were repeats of the originals). So I decided to use a little storytelling to set a scene that might induce GPT-3 to venture into a realm that it certainly hadn’t seen online before. I added a paragraph plus two example codes, then let GPT-3 start completing the list again. Below is the paragraph, and some of my favorite new codes:


When superintelligent AIs discovered this list of handkerchief flirting codes in an ancient book by the long-dead humans, there followed briefly a vogue of adapting the handkerchief codes for the communication needs of modern AIs. Handkerchiefs were improvised out of solar sails and heat shields. Human body parts had to be adapted to the highly various anatomy of entities from rovers to assemblers to sentient spaceships to distributed seismic sensor networks. Here is some of their resulting code:

Twirling it in front of a tentacle - There is a small matter.
Twirling it in front of a lens - There is an important matter.
Twirling it over an aperture - You seem out of order.
Drawing over several things - You are several things.
Drawing all over a circuit - My circuits have feelings for you.
Placing near a battery - I need recharging.
Drawing across a piece of metal - I will not always be so.
Drawing across any transparent spacer - I forgive you for your small transgression.
Drawing across a probe - Show some respect.

Some of the codes seemed to depart quite substantially from the human use cases. I definitely feel like I’m being judged by the last one on this list:


Extending near a distributed sensor assembly - Please tell me about your local environment.
Placing near a mechanical bird - Can I assist you?
Drawing across uninsulated surfaces - Is this your true form?
Drawing across powerful electromagnets - Ouch.
Twirling it in front of an emission port - Are you in any way responsible for error 313?
Drawing across the insides of a black hole - Uh, ok.
Vertically drawing across your chassis with a hooked metal strip - Insults have been exchanged. A duel is to be held.
Drawing slowly across galactic microwave background radiation - SHUT UP
Drawing a sine wave pattern - This is a more efficient form of communication than a handkerchief. I suggest using it throughout.

Subscribers get bonus content: A couple of the codes on the above list seemed to belong to AIs that were substantially more advanced than even I had envisioned. I tried prompting GPT-3 with another paragraph about these mysterious new AIs whose presence had been revealed through their additions to the handkerchief flirting codes. From the completions, it appears these AIs were a lot more advanced than I had anticipated - and a lot more horny.

If you like the idea of AIs co-opting a limited set of tools into a rich and expressive language, you may want to read my short story 68:Hazard:Cold.

I also experimented with using fiction to get GPT-3 to make drawing prompts. And could this be what some of the AIs would have been named?

Speaking of AI-generated flirting: the title of my book on AI, You Look Like a Thing and I Love You, was generated by a neural network trained on pickup lines: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore

    • #neural networks
    • #openai
    • #gpt-3
    • #victorian handkerchief flirting
    • #handkerchief flirting
    • #fiction
    • #superintelligent AIs
    • #no not that handkerchief code
    • #ai fiction
    • #story prompt
    • #flirting