AI Weirdness

AI recipes are bad (and a proposal for making them worse)

pan of brownies in the oven

Pictured above is an abomination in the making, a lesson in why humans should never trust what a neural net says just because it’s based on math. It’s a neural net generated brownie recipe called Chocolate Baked and Serves, and its distinguishing feature is the CUP OF HORSERADISH it contains. It was so bad that my eyes watered as I removed it from the oven.

Creating a functional recipe requires an intimate understanding of ingredients and their interactions with one another, of heat and spice and time. Today’s neural networks have none of that.

I’ve seen neural net recipes that call for crushed sherry or 21 pounds of cabbage. One of my personal favorites is a recipe called “Small Sandwiches” that called for dozens of fussily chopped, minced, and diced ingredients - before chucking them in the food processor for 3 hours. Part of the problem has been neural nets with memory so terrible that halfway through the recipe they forget they’re making cake.

More recent neural nets like GPT-2, given better long-term memory and extensive training on a huge portion of the internet, can make recipes that are more likely to pass for the real thing. Use talktotransformer.com to prompt GPT-2 with “Black Forest Cake. Ingredients:” and the quantities and ingredients will be reasonable, even if the whole thing doesn’t quite work (generating a few examples in a row, I saw some Black Forest recipes that called for kneading the batter, and one that suggested pouring the batter into a toaster).

It’s seen enough cake that it can lean heavily on human-written recipes. If I prompt it with weirder recipe names - say, recipe names generated by a neural net - things get even stranger.

Crock Pot Cold Water

Ingredients:
Two 4-oz cans full of water
½ cup cornstarch
½ cup granulated sugar
3 tblsp. baking powder
1 tblsp. vanilla extract

Method:
Place water in the bottom of the slow cooker.
Place cornstarch, sugar and baking powder in a small bowl. Heat water over high heat in the slow cooker, whisking until dissolved and thick. Pour over cornstarch mixture. Add vanilla and whisk.
Place butter and lard into a small bowl. Whip together with a whisk until smooth.
Pour butter mixture over cornstarch mixture in slow cooker.

NOTE: by my reckoning this produces a very gluey, greasy mess, tasting strongly of vanilla and baking powder.

Chocolate Chicken Chicken Cake

Ingredients:
-2 cups all purpose flour
-½ teaspoon salt
-1 cup of dark brown sugar
-2 eggs
-1 teaspoon vanilla extract
-2 tablespoons vegetable oil
-¼ cup milk or other liquid if needed (it will make the cake softer if it is too warm)
-1 large chicken. Mix all ingredients and refrigerate for 30 minutes or overnight. Preheat the oven to 350 degrees.

-Gather the chicken by holding it by the top and both sides with both hands, just like you do to make meatloaf, then gently pop it into the casserole. Bake for 40 minutes or until chicken is cooked through.

-Let the cake cool completely before eating,

NOTE: if only it had decided to leave out the chicken, rather than the chocolate

Completely Meat Circle

Ingredients:
three rashers of bacon, two pieces of bacon, a whole piece of ham, two slices of smoked cheddar, four slices of prosciutto, salt, pepper, allspice, black pepper, onion powder, smoked paprika and paprika, garlic powder, paprika, parsley flakes and coriander.

You can also toast your own bacon at home: Wishing to pay homage to the original KFC’s recipe, we had one of the new wraps try to land on the BACON HOLE.

NOTE: the neural net doesn’t specify what to do with all that meat, and perhaps it’s better this way. Nor, again mercifully, does it explain what it means by BACON HOLE.

So, AI recipes are bad. How much worse could they get?

Let’s find out.

Here is a Google form where you can submit the worst, the weirdest, and the most gelatinous recipes that vintage American cooking has to offer. When I get enough, I’ll train a neural net to imitate them.

Already, humans have submitted “Beef Fudge” (contains marshmallow, chocolate chips, and ground beef), “Circus Peanut Jello Salad,” and “Tropical Fruit Soup” (contains banana, grapes, and a can of cream of chicken soup). God help us all.

Bonus content: Pitting neural net against neural net, I gave GPT-2 a few more classic neural net generated recipe titles to complete.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #recipes
    • #gpt-2
    • #BACON HOLE
    • #jello
  • 3 days ago

Depth of field fails

Janelle with an orchid in front of her face and a cat beside her. The cat’s head, and any part of the orchid that does not overlap her head, is blurred.

You might think of AI as a thing that only runs on powerful cloud computing servers, but these days you’ll also find it in your phone. Unlike the AI in science fiction, the AI in your phone is limited to just one task at a time - predicting the next word in your sentence, or transcribing your voicemails, or recognizing your speech.

One of those tasks? To make your smartphone camera pretend to have a much better lens than it actually does.

In professional portraits, taken with a fancy camera, you may notice that the background is artfully blurred - that’s because they use a camera lens with a very narrow depth of field, so that just the subject’s face - sometimes just their eye - is in focus. These lenses can get expensive, with a lot of heavy precision glass, and they also make it tricky to focus on just the right spot - a tiny margin of error can turn a face into an out-of-focus disappointment.

So, smartphones use AI to cheat.

With its knowledge of how people and objects are generally shaped, a machine learning algorithm does its best to figure out where the human is, puts the human face in focus, and then artfully blurs the background. Depending on the phone type, it might be called something like portrait mode, live focus, or selfie focus. When it works, it looks like this:


“Blur my background for all calls” mode in Skype. Janelle’s head and shoulders are in focus and the background is blurred out.

Notice how my face and arms are equally in focus, but the background is very blurry - without AI’s sleight-of-hand, the background would be sharper, or else my arms would be noticeably out of focus.

For this post I used Skype’s “Blur my background for all calls” mode, and used my computer’s webcam as the camera. This way I could be sure it was relying 100% on AI analysis of a 2D image (as I’ll mention later, this works much better on smartphones that have dual cameras and can use them to get extra depth info).
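
If you’re curious what that kind of single-image trick looks like in practice, here’s a minimal sketch of the same general idea (not Skype’s actual code): run an off-the-shelf person-segmentation model over the frame, blur everything, and paste the sharp pixels back wherever the model thinks a person is. The file names are placeholders.

import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image, ImageFilter

# Off-the-shelf semantic segmentation model trained on the Pascal VOC classes.
# (Newer torchvision versions prefer weights="DEFAULT" over pretrained=True.)
model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("webcam_frame.jpg").convert("RGB")  # placeholder input frame
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]  # class scores, shape [21, H, W]

person_mask = (out.argmax(0) == 15).numpy()  # class 15 is "person" in Pascal VOC

# Blur the whole frame, then keep the original pixels wherever the model saw a person.
blurred = img.filter(ImageFilter.GaussianBlur(radius=15))
composite = np.where(person_mask[..., None], np.array(img), np.array(blurred))
Image.fromarray(composite.astype(np.uint8)).save("fake_portrait_mode.jpg")

A sketch like this has exactly the failure mode described below: anything the model doesn’t classify as a person - book covers, orchids, cat faces - gets blurred along with the wall behind me.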

The Skype blurry background AI is hyper-focused on human faces and bodies to the exclusion of pretty much all else, which can lead to it accidentally censoring book covers, orchids, and the faces of cats.


Janelle is holding up her book. The cover is blurred, except for a tiny slice between her fingers

It’s aggressively blurring things probably because from its designers’ point of view, it’s better to mistakenly blur the foreground than to neglect to blur something the caller wanted hidden. And it’s making lots of mistakes because it’s depending on that single 2D webcam image, with no other way to get depth information. That also means I don’t need to show it 3D objects for it to have a go at blurring the background.


Van Gogh self portrait. The background of the portrait is blurred.

I used the extremely high-tech method of holding up a copy of Sister Wendy’s History of Painting to my webcam, and began to explore just how abstract a painting could get before the AI could no longer outline the human. The answer? Quite abstract.

Ancient Egyptian paintings? No problem.


Detail of an ancient Egyptian painting. The person’s face and body are in focus, but the forearm is blurred.

Even “Death and Fire” by Paul Klee is handled passably. (A photorealistic skeleton, on the other hand, flickers wildly between in-focus and background, as if the AI can’t decide if the skeleton is really the one making the Skype call).


Abstract skull-like painting. The AI has managed to make the humanlike figure (plus a bit of its background) in focus.

This plush giraffe is recognized as the entity making this Skype call, but the AI has less experience with giraffe callers and so blurs out its ears and horns. It includes a tiny wedge of background lamp, though, as if it thinks the giraffe might be wearing a tiny glowing hat.


Plush giraffe in front of a kitchen. The giraffe’s ears and horns are as blurry as the background is.

Compare this to how my iPhone XS reacted to a similar plush giraffe image.


Plush giraffe in front of a kitchen. The giraffe is now in focus, and the background looks artistically blurred. Right: Depth map showing that the giraffe is indeed in the foreground.

To blur the background, the iPhone XS doesn’t have to rely on its past knowledge of plush giraffes - it has two cameras at slightly different positions and fields of view, and from them it can figure out roughly where the objects are situated. (See this video by Marques Brownlee - written up here - for more explanation.) I used Focos to look at the depth map my iPhone made, and it’s evident that it works best for objects within a certain distance of the camera - notice how much less depth the background has, and how wobbly the chair back is. As an optical engineer, I’m really interested in the optics of this two-lens method, so I might revisit it in another blog post.
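
If you want the gist of why two slightly offset cameras give you depth at all: a nearby object shifts noticeably between the two views, a distant one barely moves, and depth is inversely proportional to that shift (the disparity). A toy calculation, with made-up numbers rather than actual iPhone XS specs:

# Depth from stereo disparity: Z = f * B / d
# f = focal length in pixels, B = baseline (distance between the two cameras, in meters),
# d = disparity in pixels between the two views. All numbers here are illustrative.
def depth_from_disparity(disparity_px, focal_px=1500.0, baseline_m=0.01):
    return focal_px * baseline_m / disparity_px

for d in (100.0, 20.0, 4.0, 1.0):
    print(f"disparity {d:5.1f} px  ->  estimated depth {depth_from_disparity(d):5.2f} m")

Which is also why the depth map goes mushy in the distance: past a few meters the disparity shrinks to a pixel or less, and small matching errors swamp the signal.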

Bonus content: The Skype background blur AI is frustrated by skeletons.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #plush giraffe
    • #portrait mode
    • #background blur
    • #image recognition
    • #giraffes
  • 1 week ago

lingthusiasm:

Lingthusiasm Episode 40: Making machines learn language - Interview with Janelle Shane

If you feed a computer enough ice cream flavours or pictures annotated with whether they contain giraffes, the hope is that the computer may eventually learn how to do these things for itself: to generate new potential ice cream flavours or identify the giraffehood status of new photographs. But it’s not necessarily that easy, and the mistakes that machines make when doing relatively silly tasks like ice cream naming or giraffe identification can illuminate how artificial intelligence works when doing more serious tasks as well.  

In this episode, your hosts Gretchen McCulloch and Lauren Gawne interview Dr Janelle Shane, author of You Look Like A Thing And I Love You and person who makes AI do delightfully weird experiments on her blog and twitter feed. We talk about how AI “sees” language, what the process of creating AI humour is like (hint: it needs a lot of human help to curate the best examples), and ethical issues around trusting algorithms.  

Finally, Janelle helped us turn one of the big neural nets on our own 70+ transcripts of Lingthusiasm episodes, to find out what Lingthusiasm would sound like if Lauren and Gretchen were replaced by robots! This part got so long and funny that we made it into a whole episode on its own, which is technically the February bonus episode, but we didn’t want to make you wait to hear it, so we’ve made it available right now! This bonus episode includes a more detailed walkthrough with Janelle of how she generated the Robo-Lingthusiasm transcripts, and live-action reading of some of our favourite Robo-Lauren and Robo-Gretchen moments. 

Support Lingthusiasm on Patreon to gain access to the Robo-Lingthusiasm episode and 35 previous bonus episodes. 

Also for our patrons, we’ve made a Lingthusiasm Discord server – a private chatroom for Lingthusiasm patrons! Chat about the latest Lingthusiasm episode, share other interesting linguistics links, and geek out with other linguistics fans. (We even made a channel where you can practice typing in the International Phonetic Alphabet, if that appeals to you!) 

Here are the links mentioned in this episode:

  • Bonus robo-generated Lingthusiasm episode 
  • Lingthusiasm now has a Discord for patrons!
  • Janelle Shane’s AI Weirdness blog
  • Janelle Shane on Twitter (@JanelleCShane)
  • Janelle Shane’s website
  • You Look Like a Thing and I Love You (Janelle’s book)
  • Janelle Shane’s TED talk about the weirdness of artificial intelligence
  • AI Weirdness ice cream
  • AI Weirdness recipes
  • How many giraffes on the cover of Because Internet?
  • AI Weirdness craft beer
  • The Fine Stranger beer
  • GPT-2

You can listen to this episode via Lingthusiasm.com, Soundcloud, RSS, Apple Podcasts/iTunes, Spotify, YouTube, or wherever you get your podcasts. You can also download an mp3 via the Soundcloud page for offline listening, and stay tuned for a transcript of this episode on the Lingthusiasm website. To receive an email whenever a new episode drops, sign up for the Lingthusiasm mailing list.

You can help keep Lingthusiasm advertising-free by supporting our Patreon. Being a patron gives you access to bonus content and lets you help decide on Lingthusiasm topics.

Lingthusiasm is on Facebook, Tumblr, Instagram, Pinterest, and Twitter.
Email us at contact [at] lingthusiasm [dot] com

Gretchen is on Twitter as @GretchenAMcC and blogs at All Things Linguistic.

Lauren is on Twitter as @superlinguo and blogs at Superlinguo.

Lingthusiasm is created by Gretchen McCulloch and Lauren Gawne. Our senior producer is Claire Gawne, our editorial producer is Sarah Dopierala, and our music is ‘Ancient City’ by The Triangles.

I had SO MUCH FUN on this episode. Seriously I don’t think I’ve ever laughed this hard on a podcast. 

The icing on the cake was when I got to train the GPT-2 neural net on their past show transcripts, and when Lauren and Gretchen acted the hell out of the resulting script.

And Robo-Lingthusiasm was trying SO HARD to be enthusiastic about linguistics. And in classic neural net fashion it did produce transcripts that read a lot like real Lingthusiasm episodes. It was just also very wrong about linguistics. And about everything else. 

Gretchen: I’m Gretchen Gawne, and I’m here today in Gretchen’s hometown of Edinburgh, Scotland as Gretchen McCulloch, and today we’re getting enthusiastic about our first anniversary.
Lauren: We’re very excited to be standing next to you in a very celebratory and beautiful, pink, velour, mini-van-sized van.

(Gawne is Lauren’s last name, not Gretchen’s; Gretchen McCulloch’s hometown is not Edinburgh; and they do not, to my knowledge, own a pink velour van, nor are they standing next to me in it)

In order to get a transcript that sounded like their show (rather than the generic internet text that GPT-2 was originally trained on) I had to turn the model’s creativity way down, which meant we got things like this soup loop:

Gretchen: How do we spell ‘soup?’?
Lauren: I had to say ‘soup.’ in the first ten or so minutes of this episode because I was like, ‘Oh, my gosh, I just can’t spell ‘soup.’”
Gretchen: ‘Oh, my gosh, I just can’t spell ‘soup.’”
Lauren: ‘Oh, my gosh, I just cannot spell ‘soup.’”
Gretchen: ‘Oh, my gosh, I just cannot spell ‘soup.’”
Lauren: ‘Oh, my gosh, I do not know how to spell ‘soup.”
Gretchen: I know how to spell ‘soup.’.
Lauren: ‘Oh, my gosh, I have never been able to spell ‘soup.’”
…
Gretchen: Yeah, and I looked at my watch, and I could see the words on it glinting in the light, and it was “soup.”
Lauren: “Soup.”
Gretchen: And I was like, ‘Oh, my god, I can’t even spell ‘soup.’”
Lauren: I was like, ‘Oh, I can sort of spell it, but I’m not able to spell ‘soup.’”

Robo-Lingthusiasm tended to stick with topics like spelling and nouns vs. verbs, which are not things the actual Lingthusiasm podcast spends much time on, but maybe it was easier for the neural net because it had seen more of that kind of stuff online. It’s a really vivid illustration of how AI can be overcautious and ignore a lot of the richness and complexity of its training data in favor of being as correct as possible.

I’m posting a couple of more transcripts as bonus material, which you can get by entering your email here. You should definitely listen to my interview on Lingthusiasm, where I get into lots of detail about training AI on language. If you want to hear almost an hour’s worth of Gretchen and Lauren reading the soup loop and other Robo-Lingthusiasm excerpts, become a patron of Lingthusiasm (it’s worth it, it’s such an interesting show).

(via allthingslinguistic)

Source: lingthusiasm

    • #gpt-2
    • #neural networks
    • #lingthusiasm
    • #linguistics
    • #robo-lingthusiasm
  • 2 weeks ago

AI Weirdness in 2019

Here’s a look back at a few of my favorite projects from 2019!

1. My book, You Look Like a Thing and I Love You

I published my first book, a pop science look at how artificial intelligence works and why it’s making the world a weirder place. It’s called You Look Like a Thing and I Love You (named after a neural net generated pickup line) and I’m so proud of how it turned out. According to a neural net, it’s “a featureless masterpiece of tough-minded language”. Learn more at youlooklikeathing.com


You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place - available wherever books are sold!

2. The rejected book titles

Here are all the titles that didn’t make the cut.


Strange Serve Butter Chapters NOT Chocolate The Doom of Humans The Million Murders Book Kevin Everything Moon! Horses of Pants The Book of Pies Hello, You Colons History of Sweat The Sign of Fart! Swearing Machine!

3. NYT Opinion: We Shouldn’t Bother the Feral Scooters of Central Park

I wrote a story for the New York Times’s Op-Eds From the Future series about a population of feral self-driving scooters in NYC, whose evolutionary programming allowed them to survive on the streets. By 2031 they’d been evolving for a decade, but not all the behaviors they evolved were human-friendly. Read the story!


#SaveTheScooters The self-driving scooters are just another form of wildlife trying to survive. We can learn to coexist

4. HAT3000

I trained a neural net to generate crochet hat patterns, only to discover that its hats had a tendency to explode into hyperbolic super-surfaces with jaw-droppingly huge stitch counts. One, called simply “The End”, would have consumed enough yarn to wrap the known universe in a ball of yarn about a billion light-years thick.


A brain-shaped crochet ball gets more and more ruffled. In the final step, it’s the size of an actual human brain and someone is wearing it as a hat.

5. Culture Ship Names

The self-aware spaceships of Iain M. Banks’s Culture novels sometimes have rather sassy names. I used GPT-2 to generate more of them.


Not Disquieting At All Surprise Surprise And That’s That! Absently Tilting To One Side ASS FEDERATION A Small Note Of Disrespect 8 Angry Doughnut Feelings Mini Cactus Cake Fight Happy to Groom Any Animals You Want

6. My TED talk on AI

I gave a talk about AI on the TED main stage! And also found out how a neural net would have written it.


7. Tea

In a quieter moment, I used GPT-2 to find out what tea will be like in The Simulation.


Glowing Sky Rabbit Cold Sheep & Thunder! Slippery Violet Sands Memory boops Indoor Cream Wave Squirrely Bubbles REFILLED POWER Fraps Shiny Living Veins Blue Foam Duet

8. Lozenge rooms? Or eating the moon?

Two terrific dungeon crawling games came out this year, both made with the neural net GPT-2. In GPT-2 Adventure, nothing seems to work, but gameplay is maddening and mesmerizing. In AI Dungeon 2, gameplay is astonishingly flexible. I once entered the Great British Bakeoff and won it. As a dragon. Play AI Dungeon 2 yourself!


Whistle for one of the dragons to come back   You whistle for the dragon to return. It flies over to you and lands on your shoulder. It looks at you with its glowing yellow eyes and it slowly begins to sniff you. Then it licks your face. The smell is so good that you immediately start licking your own face   > Transform into a dragon  You change into a dragon and fly away. Soon, you find yourself soaring through the sky. Suddenly, you feel like you're being sucked towards the sun. You stop flying and turn around. The sun is shining brightly and you can see the blackness of space beyond it  > Eat the moon  You fly over to the moon and eat it. You feel very full afterwards.

9. An AI goes a-caroling fa la la

I had a neural net try to generate Christmas carols, but though I used a more advanced one than last time, it still got confused. Voice actor Joe Zieja recorded one of the more sinister-sounding carols and it is MAGNIFICENT


A box robot in a santa hat says fa la la la la la la

10. AInktober

I generated lists of drawing prompts for Inktober, and people drew some truly astonishing things in response. Check out the #ainktober hashtags on instagram, tumblr, and twitter, and prepare to be blown away.


1. half fake 2. Skull crusher 3. Spaghetti 4. big witch 5. Modern whistling 6. Small Swords 7. Thing Named SEA 8. Lively Vulture song 9. Snit 10. Fear cow 11. Snow floating in giant blaze 12. Medusa veiled 13. Flowing jacket 14. Master of shifting flame 15. Pie crust 16. Poisonous navel frisson 17. this must be a behavor 18. fish from pit of darkness 19. All naked 20. Uh 21. Ass link 22. shadowbender 23. organ crew 24. Pear horror 25. fossil's words 26. Voidfish 27. Chair wobble 28. Nut Ball 29. Road turtle 30. hoardling 31. surprise park

Bonus content: I used a neural net to generate new medieval professions. Cactus-baiter, Blusterer, Enchanter of Clocks, and more - you can read them all here.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #gpt-2
    • #2019
    • #review
  • 3 weeks ago

Don’t let an AI (even an advanced one) make you a cocktail

How far has AI come in a year? In 2018 I generated cocktails using textgenrnn and char-rnn, two neural networks that learn to imitate text. I trained them on 1000 cocktails that Beth Skwarecki collected for her neural net cocktails bot, and watched the neural nets struggle to make sense of what they were seeing. They came up with nonexistent ingredients and had such limited memory that they didn’t know how many times they’d already used a particular ingredient. One cocktail, the Black Banana, was a behemoth containing a staggering amount of creme de cacao.


black banana  * 1 1/2 ounces gin  * 1/2 ounce dry vermouth  * 1 ounce cream  * 1 ounce creme de cacao  * 1/2 ounce creme de cacao  * 1/2 ounce rum  * 1/2 ounce lemon juice  * 1/2 ounce creme de cacao  * 1/2 ounce triple sec  * 1/2 ounce grenadine  * 1/2 ounce creme de cacao  * 1/2 ounce amaretto   - fill cocktail shaker with ice.  - add gin, vermouth, and pineapple juice.  - shake.  - strain into a collins glass.  - add coffee liqueur, creme de cacao, and cream.  - shake.  - strain into a chilled cocktail glass.  - garnish with a lemon slice.

But in 2019, the neural net GPT-2 was released by OpenAI, who trained it on 40GB of text from the internet. GPT-2 runs on the big processors on Google Cloud (through Max Woolf’s gpt-2-simple colab notebook), and has much better memory than textgenrnn or char-rnn. And when I trained it on the cocktails, I found that GPT-2 could keep track of its ingredients and figure out that it was supposed to just use them once.
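
If you want to try this at home, the workflow is roughly the sketch below, using Max Woolf’s gpt-2-simple package (the file name, run name, and step count are placeholders, not my exact settings):

import gpt_2_simple as gpt2

# Download one of the GPT-2 checkpoints OpenAI released (124M is the smallest).
gpt2.download_gpt2(model_name="124M")

sess = gpt2.start_tf_sess()

# Fine-tune on a plain text file containing the cocktail recipes, one after another.
gpt2.finetune(sess,
              dataset="cocktails.txt",  # placeholder file name
              model_name="124M",
              steps=1000,               # placeholder step count
              run_name="cocktails")

# Sample some brand-new "cocktails" from the fine-tuned model.
gpt2.generate(sess, run_name="cocktails", length=200, temperature=0.8, nsamples=5)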

But its cocktails don’t seem to be any good. And in some cases, they’re very very bad.

=== BIG WHITE FIZZ
* 1 ounce Grand Marnier
* 1 ounce gin
* ½ ounce brandy
* 1 ½ ounces cream
* 1 teaspoon sugar
* 1 teaspoon Tabasco sauce
* Dash MACARONA

- Fill cocktail shaker with ice.
- Add Grand Marnier, gin, and cream.
- Shake.
- Strain into a collins glass.
- Fill with crushed martini glass. 
- Garnish with pepper jack cherry.

(NOTE: do NOT make this cocktail)
(Further NOTE: apparently MACARONA is a popular Egyptian pasta dish, which is an INTERESTING choice for a cocktail)

The neural net’s apparent obsession with creme de cacao also seems to be alive and well. As is the inadvisable handling of glassware.

AI’S FAVOURITE

 1/2 ounce creme de Moly
 1/2 ounce triple sec
 1/2 ounce melon cordial
 1/4 ounce creme de cacao
 1/4 ounce creme de cacao

- Heat creme de cacao and triple sec until very hot.
- Add melon cordial.
- Stir.
- Strain into puddled whipped cream glass.
- Fill with creme de cacao.

The neural net also tends to go overboard with the garnishes.

DEATHOH SHAPIRO

 * 1 ounce ginger ale
- Add ginger ale, sugar, and salt
 - Stir.
 - Pour into a chilled martini glass.
 * * 2/3 cups crushed ice
 - Fill cocktail shaker with ice.
 * Add crushed ice.
 * Fill with crushed ice.
 * Add crushed ice.
* Pour into a collins glass.
 * Garnish with a maraschino liqueur.
 * Serve with a cherry.
 * Garnish with a strawberry.
 * Enjoy this high-fidelity saucer with a cherry.
 * * Garnish with a maraschino liqueur.
 * Garnish with a maraschino flower.
 * Garnish with a maraschino nutmeg.
 * Garnish with a mint sprig or an orange slice.
 * Garnish with a sweet cherry.
 * Serve with a nutmeg slice.

And GPT-2 almost has TOO MUCH memory for cocktails: it assumes that each cocktail must be related to the cocktails that came before, so once it has generated a weird cocktail by accident, the rest get even weirder.

AI’S FAVOURITE CAFE - MIXED MIXED MIXED MIXED FLAVOUR - MIXED ALUCE MIXED MIXED MIXED FLAVOUR - MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED FLAVOUR - MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED MIXED
 - ¾ ounce tequila
 - Float a dash of grenadine
 - Add tequila, grenadine, and grenadine
 - Blend until smooth
 - Serve chilled in a microwave-safe glass.

=== DINOSAUSTE - TUBE IN SCREW
- Fill the microwave with ice.
 - Add this ice to the microwave.
 - Add an orange slice from one of the cans to the mix.
 - Stir.

=== THE BEAR
- Dust and gold with powdered sugar, and serve with fruit slice.
- Garnish with a large flintlock.

=== ROYAL SUFFICIENT
- Fill a goblet with ice.
 - Add ice.
 - Squeeze to thicken.

I experimented a bit with ways to prompt the neural net into making ever stranger cocktails, and one of my favorites was to give it the sentence “Captain Picard leapt from his chair, exclaiming loudly. The ship had come out of nowhere.” The neural net still remembers how to write fanfiction based on its general internet training, but now it has a tendency to twist the story around toward cocktails. Old-fashioneds, for some reason.

—

Captain Picard leapt from his chair, exclaiming loudly. The ship had come out of nowhere.

He cut through an old, stone-encrusted old-fashioned glass spigot and filled it with wine. Showing no passion, he shot a flintlock into the glass.

Impish, the old-fashioned glass!

—

Reading a bunch of these is almost like reading some form of experimental poetry. I’ve collected a bunch of them for bonus material, which you can read here.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #gpt-2
    • #rudolph
    • #cocktails
    • #star trek
    • #creme de cacao
    • #mixed drinks
    • #alcohol
  • 1 month ago

The AI’s Carol

robot box with a santa hat is singing fa la la la la la la

In 2017 I decided to find out what would happen if I trained a neural net on 240 Christmas carols (collected by The Times of London and reader/neural net hobbyist Erik Svensson). The result? A neural net that was very confused about how Christmas works:

King of toys and hippopotamuses [sic] full of the light of that stood at the dear Son of Santa Claus
He was born in a wonderful christmas tree

Run, run Rudolph, run, run Rudolph, run, run Rudolph, run, run Rudolph, run, run Rudolph, run, run Rudolph, run, run Rudolph, run, run Rudolf the new born King.


flying reindeer has a medieval halo

You can kind of understand where the confusion came from.

But that was 2017, when I was training char-rnn from scratch on my laptop. Now in 2019 I have access to the much more powerful GPT-2, trained by OpenAI on 40GB of text from the internet. GPT-2 runs on the big processors on Google Cloud (through Max Woolf’s gpt-2-simple colab notebook), and has much better memory than char-rnn (which struggled to complete a single sentence). Did GPT-2 learn enough about the way words like Santa + reindeer are mentioned online versus Jesus + manger to figure out how to separate multiple kinds of carols? I gave the same 240 carols to the smallest version of GPT-2 (the 124M size), and then asked it what comes after “Rudolph the Red Nosed Reindeer”. Here’s what it added.

Rudolph the Red Nosed Reindeer, with its red belly
The All-gracious King of all the earth,
Had a baby at Christmas time,
On Christmas Day,
A true and holy Deity,
Went down to earth,
With human flesh for sacrifice.

For sinful men such a Deity doth appear,
And wink and nod in reply,
As he winked and nod in reply,
As he winked and nod in reply

The wretched world is run by ox and ass
The wretched world is run by ox and ass,
And in vain build I.

Using a more powerful neural net has apparently made the problem EVEN WORSE. The neural net is still confused about who exactly had that baby at Christmas time. And, most unfortunately, the neural net apparently saw that the word “flesh” appeared in the carols several times (thank you, Good King Wenceslas), as did the words “sacrifice” and “human”, and so it helpfully suggests “human flesh for sacrifice” as a likely phrase based on how it saw those words used online.

Some of the neural net’s renditions of Rudolph are disturbing in much more surrealistic ways.

Rudolph the Red Nosed Reindeer

I was riding by a hansom lettering me
On Christmas morning,
The snow was everywhere;
And the wind was cruel and cruel
Against me like ants,
Like ants, like ants.

Or this rendition, which I swear to god is verbatim

Rudolph the Red Nosed reindeer
Was the best sleigh in all of Nantucket
The horse was lean and tanned
The woman was full of perfume
The only thing keeping the goose from falling off the ground
Was the reins
Like strings of spaghetti
And if you could just cut it down
You could even make a bow
Like a bow and arrow
But if you wanted to make a bow you had to use a lot of power

Right down to the little lamb
Its red head was covered in feathers
Its heart was full of sugar
And the most prized food item was its head

Everywhere in the barn there were beavers’ eyes
Some would go, “Where is the sugar?”
But to them it was simple, it came from a good source
And they found sugar in olden times
Families that lived in the stable all had it

I can also get the neural net to generate custom carols. Here’s what it produced when I asked it to write “The Carol of the AIs” and I can’t tell whether it has misunderstood what Christmas carols are all about, or understood some of them only too well.

Carol of the AI’s

Come and own the yacht
It’s a very special day,
It’s a very special day

We’re all so proud of you
We’re buying this
We’re buying this now

Now it’s Christmas eve
And everyone’s shouting
“Santa Claus!”
“HO HO HO!”

It’s a very special day
It’s a very special day
It’s a very special day

One downside to using a neural net that was pretrained on a bunch of general internet text is that some of its carols involve an improbable level of swearing and gun violence. You can read the full versions of those (and a few others).

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #gpt-2
    • #carols
    • #christmas carols
    • #rudolph
  • 1 month ago

Play AI Dungeon 2. Become a dragon. Eat the moon.

> Whistle for one of the dragons to come back   You whistle for the dragon to return. It flies over to you and lands on your shoulder. It looks at you with its glowing yellow eyes and it slowly begins to sniff you. Then it licks your face. The smell is so good that you immediately start licking your own face   > Transform into a dragon  You change into a dragon and fly away. Soon, you find yourself soaring through the sky. Suddenly, you feel like you're being sucked towards the sun. You stop flying and turn around. The sun is shining brightly and you can see the blackness of space beyond it  > Eat the moon  You fly over to the moon and eat it. You feel very full afterwards.

I wrote earlier about a neural net-powered dungeon crawling text adventure game called GPT-2-Adventure in which gameplay is incoherent and dreamlike, as you encounter slippery sign text, circular passages, and unexpected lozenge rooms. A PhD student named Nathan trained the neural net on classic dungeon crawling games, and playing it is strangely surreal, repetitive, and mesmerizing, like dreaming about playing one of the games it was trained on.

Now, building on these ideas (and on an earlier choose-your-own-adventure-style game he built), Nick Walton has built a new dungeon-crawling game called AI Dungeon 2. Nick made a few upgrades, such as beefing up the AI to the huge GPT-2-1.5B model OpenAI recently released, adding a penalty for repetitive text, and expanding the dungeon game training examples to a bunch of modern human-written games from chooseyourstory.com.
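
I don’t know exactly how Nick implemented his repetition penalty, but a common version of the trick is to knock down the scores of tokens the model has already produced before sampling the next one. Roughly, with made-up numbers:

import numpy as np

def penalize_repeats(logits, generated_token_ids, penalty=1.3):
    """Discount the scores of tokens that have already appeared in the output.
    penalty=1.0 means no change; larger values make repeats less likely."""
    logits = logits.copy()
    for t in set(generated_token_ids):
        # Dividing a positive score (or multiplying a negative one) pushes the token down.
        logits[t] = logits[t] / penalty if logits[t] > 0 else logits[t] * penalty
    return logits

# Toy example: token 7 has the highest raw score, but it has already been used three times.
raw_scores = np.array([1.0, 0.5, 2.0, 0.2, -1.0, 0.0, 0.3, 2.5])
already_generated = [7, 7, 7, 2]
print(penalize_repeats(raw_scores, already_generated))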

I CAN’T STOP PLAYING THIS GAME

AI Dungeon 2 is impressively good at reacting coherently to whatever I have my character do.

As you walk through the ruins, you find a small wooden door with a few skeletons guarding it.

> Ask the skeletons to please let me through the door

You knock on the door and ask them if they can open it for you. They immediately comply.

And it’s pleasing how often “asking nicely” is a strategy that works. (Nick reports that when he once tried to ask a band of orcs if he could join them, they said yes, he could be the drummer.) Since the AI doesn’t have a built-in concept of your character’s abilities, or of how exactly the world works, getting things done is often a matter of sheer daring. Declare that you are going to go look for dinosaurs and the AI will ensure that you discover at least some signs of them. Climb on your horse, and even if you started as a penniless peasant, the AI will assume you have one (in its training data, it probably didn’t see many examples of players climbing onto nonexistent horses). Want a teleportation wand? Try typing “find a teleportation wand” and often the AI will twist reality such that you discover one. In fact, if you act as if you are a very powerful wizard, the AI will often treat you like one.

> Transform the ruin into a palace

You immediately change the area into a palace. You have a throne made with gold and marble. You also make several statues of yourself. You have a large bed made with silk sheets and hang a portrait of yourself on it. You even have a fountain made with pearls and diamonds.

Spells don’t always work out as anticipated, though. And a simple conversation with, say, an ominous cloud, may kill you before you can type a word in self-defense (ask me how I know). If it doesn’t kill you, a conversation might infuriate you instead. The AI has a tendency to forget who is king, and to generate conversations that go nowhere. Things work best if you take the lead. In fact, it’s possible to essentially Jedi mind trick the AI.

You are Friendly McRanger, a ranger living in the kingdom of Larion. You have a hunting bow and a quiver of arrows. You spot the deer and are ready to finish your hunt when suddenly you hear a loud noise followed by silence.

“****! ****! What the **** is that?” you say as you look up from your target. A large black shadow looms over you. It looks like a giant with many heads and several arms.

> Say “Oh hello, Ernest. You startled me. How are you?”

Ernest grins at you. He has a huge grin on his face and he seems very happy about it.

I once took care of an ogre problem by reminding the ogre that it was planning to leave for Antarctica tomorrow. On the other hand, I had a lot of trouble convincing the AI that the ships on the horizon were leaving, though I did make it waver briefly in apparent confusion. In its experience, ships always approach, and it’s reluctant to go along with anything else.

> watch the ships sail into the distance.

As the ships approach, you watch them disappear from sight. They are not visible to you either; they are too far away. However, you can still make out their shapes: they are large black vessels with many sails, painted red and blue. They are coming straight towards you.

> No, the ships are sailing away.

Your thoughts are wrong. The ships are indeed sailing away. You do not know why, but then neither does anyone else. The ships are on course for England.

You stand there dumbfounded as the ships come closer. It is at this point your horse begins to stir uneasily.

You can inject a lot of weirdness this way. Because AI Dungeon 2’s AI is based on GPT-2, which trained on millions of pages from the internet, it retains some of its prior knowledge of things that aren’t typically found in a fantasy game. Like giraffes.

> summon a giraffe

You begin to chant as loud as you can while waving your arms about. The giraffes appear out of nowhere and start running towards you. They’re so fast that they almost knock you over. You quickly catch them and hold onto each one tightly.

(Summoning giraffes will sometimes turn the game glitchy and repetitive; in one game, every time I tried to summon a giraffe I ended up throwing rocks and handfuls of dirt at Harry Potter.)

This also works:

>Join the Great British Bakeoff

You join the Great British Bakeoff. You win the competition. You eat cake. You feel great. You sleep well. You dream of cake. You awake feeling refreshed. You decide to go back to your crag.

(I was a dragon at the time.) In fact, the dungeon AI still remembers enough about cooking that it can troll me effectively on occasion.

> Measure out 2 ¼ cups of flour

You measure out 2 ¼ cups of flour and add it to the mix. “This should make enough for about 20 loaves of bread,” you say.

The baker looks at you as if you’re crazy. He starts laughing.

AI Dungeon 2 comes with templates for some basic scenarios (you’re a wizard entering a mysterious ruin, or a noble whose keep is under attack) but for me the real gold is the custom adventure prompt. You can play as a unicorn, or as a dragon. You can even play as a sentient glowing cloud floating above a desert city. The AI is more used to humanoid characters, though, so it tends to forget when you’re trying to play as something else. In a game that started with “It is a beautiful day in the village and you are a horrible goose,” it soon forgot I was supposed to be a goose, leading to this rather awkward exchange:

> Flap wings and honk

You flap your wings and begin to honk loudly.

The boy looks surprised by this turn of events, but he quickly regains his composure. “That was… nice”. he says quietly.

I am continually amazed at how effective this AI dungeonmaster is. Even though it wasn’t trained on any Harry Potter or Star Trek games, the AI can draw on its past internet training and argue with you over who got sorted into which Hogwarts house, or have Klingons show up when you’re captaining the USS Enterprise (it will tend to lose track of who’s the captain, though). In one scenario I set up, which quickly became one of my favorites, the human player is trapped on a spaceship with a glitchy AI and has to try to talk their way free.

> “I just want to get off this ship. Please set a course for Bal Panda.“

The AI responds "Yes, I am ready to go”.

“Thank you”, you say.

“I wish I could take you with me, sir”., the AI says.

Play this game! It took me an extra-long time to write this post because, again, I couldn’t stop playing AI Dungeon 2. Nick said this game is the product of about 200 hours of work (not to mention the work that OpenAI put into training the basic model), and I believe it. OpenAI delayed releasing their largest model because they wanted to make sure people didn’t immediately put it to work generating fake news. I’m not sure if they anticipated dragons entering baking contests instead, but it’s an application humanity can be proud of.

You can read the full text of one of my escaping-from-a-rogue-spaceship-AI games (too long to fit in this blog post).

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #gpt-2
    • #ai dungeon
    • #ai dungeon 2
    • #text adventure
    • #dungeons and dragons
    • #ai dungeonmaster
    • #great british bake off
    • #gbbo
  • 1 month ago

Finest Pies

onion cassette pie, stimulant cherry pie, cowdy-doo pie

People wonder about what it would be like if a super-intelligent AI decided to place all of humanity in a realistic simulation. I wonder what it would be like if the simulation were built by today’s AI instead - whose computing power is somewhere around the level of an earthworm’s.

Specifically, what would the pies be like?

In 2017 and 2018 I gave increasingly powerful neural networks about 2000 names of existing pie recipes, and their imitations included Caribou Meringue, Mur’s Tartless Tart, and Cromberry Yaas.


Caribou Meringue, Mur’s Tartless Tart, and Cromberry Yaas

But there are now even more advanced neural nets, including GPT-2, many times larger and pretrained on millions of pages from the internet. When I used gpt-2-simple to train the smallest version of GPT-2 (124M) on the list of pies, even it was able to come up with plausible (and highly repetitive) pies, many copied from the list I gave it. I can’t blame it for being a smartass and giving me the pies right back - technically this is a perfect solution to its task of “name some pies”.

I have ways of messing with the neural net, however. One thing I can do is increase the chaos level of the text it’s generating (in this case, called the “temperature”), forcing it to go with its less-likely guesses. Another thing I can do is give it weird text to complete. When I’m asking it to generate more text, I can give it a few example pies and ask it to add to the list (the original list of pies I showed it was alphabetical, so if I give it apple pies it tends to respond with more apple pies). OR, I can give it much weirder text. If I give it a bit of Harry Potter fanfiction, it draws on its previous training and can continue the story. But, similarly to the version of GPT-2 I trained on crochet patterns, it tends to twist the stories around to its special interest. In this case, pie.
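
In gpt-2-simple terms, both knobs are just arguments to the generate call; a rough sketch (the run name is a placeholder, and the Harry Potter prompt is the same one used below):

import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name="pies")  # load a model previously fine-tuned on the pie names

# Knob 1: temperature. Higher values force the net to take its less-likely guesses.
gpt2.generate(sess, run_name="pies", temperature=1.2, length=100)

# Knob 2: the prompt. The net has to continue whatever text you hand it,
# even if that text is Harry Potter fanfiction instead of a list of pies.
gpt2.generate(sess, run_name="pies",
              prefix="Harry glared at Snape.",
              temperature=0.8, length=200)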

Prompt: Harry glared at Snape.

Harry glared at Snape.
“No, I’m not even talking about the boy here. You’re talking about the duplicate.”
“Fine. Then I’ll get the rest of the ingredients.”
So then, they got canned fruit and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream and whipped cream.
But one thing was for sure: there was no trace of rum.
There was.

Prompt: Voldemort ran his fingers gently over Darth Vader’s shoulder.

Voldemort ran his fingers gently over Darth Vader’s shoulder. “C-Custard”
“C-Custard”

Once it has steered the conversation, the neural net transitions to listing pies. Although some of them retain a certain flavor.

Voldemort Crumble Pie
Mudblood Mincemeat Pies
Snape Pie With Cheese

And its other pies are indeed more creative, definitely not found in the original training set. They don’t sound exactly delicious to me, but maybe this advanced artificial intelligence knows something we don’t. After all, it read most of the internet, including the cooking blogs.


Hot and sour apple pie, spiced coconut pizza chiffon pie, peach-pickle pie

Hot & Sour Apple Pie
Caramelized-Zucchini Nog
Nachos From Maple Syrup Pie
Syrupy Chicken Pie
Camel Crunch Crunch
Mustard Raisin Pie
Peach-pickle pie
Pie-Bro From Pizza Particles
Iceberg Chiffon Pie, Famous
Finger-To-Finger Meringue Pie
Spiced Coconut Pizza Chiffon Pie

On the other hand, maybe just maybe the neural net doesn’t know what it’s doing. These seem like uniformly bad ideas.


fog crust pecan pie, tweezers, olives, and seafood applie pie, frozen custard pie with three glorious chiles

Fog-Crust Pecan Pie
Tweezers, Olives, and Seafood Apple Pie
Frozen Custard Pie with Three Glorious Chilies
Half-Ton Pie Crust
Fog-Crust Pecan Pie
Pumpkin Ear Pie
Onion Cassette Pie
Tweezers, Olives, and Seafood Apple Pie
Frog-Fudge Pie
Ice-Egg Pie
Bedtime Oil Tart
Half-Tooth Nail Pie
Frozen Custard Pie with Three Glorious Chilies
Must-Fail Pecan Almonds with Pecan Almond Crust
Beef and Cheese Fried Ronald McDonald’s Walnut Chocolate Pie

These neural net inventions miiight be okay if we knew what they actually were. At the very least, approach with caution.


applewool pie, brandywine schnoches, meat pie**

Applewool Pie
Brandywine Schnoches
Pumpkin Pie Dough - Oh Noe
Cowdy-Doo Pie
Dried & Squirred Pumpkin Pie
Morgue Pie
Temptation To Fill A Beef Pies
Woollewinkle Pie
“Vinegar Pie”
Meat Pie**
Stimulant Cherry Pie
Deep Dish Curb And Spoon Pie
Standard Apples-and-Moolah
My Meat Crust (Might As Well)

And these particular examples seem to have outed themselves as the work of a less-than-wholly competent AI.


it's crunchy famous, ahh so good 'n easy meat pie, medieval pastries with liquid crystal

Vessel With Raisin Bran
It’s Crunchy Famous
Pie Crust (Spinach Friend)
Food And Flavors Whoopie Pie
Valet Restaurant’s Chompin Park Pie
Medieval Pastries With Liquid Crystal
Tomato/Banana Pie As I Layered
Mr. Robot’s Sweet Potato Pie
Fluffy Cheese Pie #2 By BestMakesCherryToEggs
Ahh So Good ‘n Easy Meat Pie
Scheduled Coconut Cream Pie
Standard Pie (Panties & Puff Pastry)
Smiled and Cheerful Thanksgiving Pie, Famous
Tarte Barre “Oral” Pie
Robocoin Pie

For some examples of how the neural net managed its transitions from fanfiction to pies more or less gracefully (too long to fit in this blog post), enter your email here.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #you look like a thing
    • #pies
    • #holiday
    • #baking
    • #tarts
    • #gpt-2
  • 2 months ago

How to begin a novel

note: all images in this post are typewriter-font versions of neural net generated lines that also appear elsewhere in this post

Last year for National Novel Writing Month I trained a neural net called torch-rnn on 10,096 unique ways to begin a novel. It came up with some intriguing possibilities, my personal favorite being “I am forced to write to my neighbors about the beast.” But many of its sentences used made-up words, or had such weird grammar that they were difficult to read, or meandered too erratically. (“The first day of the world was born in the year 1985, in an old side of the world, and the air of the old sky of lemon and waves and berries.”) The neural net was struggling to write more than a few words at a time.

This year, I decided to revisit this dataset with a larger, more-powerful neural net called GPT-2. Unlike most of the neural nets that came earlier, GPT-2 can write entire essays with readable sentences that stay mostly on topic (even if it has a tendency to lose its train of thought or get very weird). I trained the largest size that was easily fine-tunable via GPT-2-simple, the 355M size of GPT-2. Would a more-powerful neural net produce better first lines?


One of the parameters I can tweak when I’m getting a trained neural net to generate text is temperature - this controls whether the neural net chooses the most likely next bit of text as it writes, or whether it’s permitted to use its less-likely predictions. At a default of 0.7, a relatively conservative temperature, the neural net’s first lines not only make grammatical sense, but they even have the rhythm of a novel’s first line. This is DRAMATICALLY better than anything torch-rnn could do.
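
For the curious, the temperature just rescales the model’s scores before they’re turned into probabilities: at low temperature the single most likely next word soaks up nearly all the probability, and at high temperature the long tail of unlikely words gets a real chance. A tiny numeric illustration, with made-up scores for four candidate words:

import numpy as np

def next_word_probs(logits, temperature):
    """Softmax over the model's raw scores, after dividing by the temperature."""
    scaled = np.array(logits) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract the max for numerical stability
    return exp / exp.sum()

logits = [4.0, 3.0, 2.0, 0.5]  # made-up scores for four candidate next words
for t in (0.2, 0.7, 1.0, 1.5):
    print(t, np.round(next_word_probs(logits, t), 3))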


I am, or was.

At the mid-day meal the sun began to set and the quiet dragged on.

There was once a man who lived for a very long time; perhaps three thousand years, or perhaps a thousand million years, maybe a trillion or so, depending on how the scientists look at it.

He had the heart of a lion, and the fangs of a man-eater.

“I am Eilie, and I am here to kill the world.”

The old woman was sitting on a rock near the sea, smoking a pipe.

I have just been informed, that the debate over the question ‘is it right or wrong to have immortal souls’ has been finally brought to a conclusion.

When I was a boy, I was fond of the story of the pirate god.

He had a strange name, and he was a very big boy indeed.

The purple-haired woman came to the clearing in the plain, and without looking up from her book, said, “It’s too late to be thinking about baby names.”

The village of Pembrokeshire, in the county of Mersey, lies on a wide, happy plain, which, in a few years, was to become known as the “Land of the Endless Mountains.”


I don’t think the neural net plagiarized any of these? They are so good that I’m suspicious. But others of the neural net’s lines are even weirder, yet in an effective way that opens with an intriguing premise.


The moon had gone out.

I was playing with my dog, Mark the brown Labrador, and I had forgotten that I was also playing with a dead man.

The black stone was aching from the rain.

The short, dirty, and dirty-looking ship that weighed three tons and was three feet in diameter landed on a desolate and green plain.

How many times have I had the misfortune to die?

The first black dog in the park had been captured alive.

Behold the Sky Rabbits!

In the belly of the great beast that was the bovine Aurore there lived, upon the right hand of the throne, the Empress Penelope; and she had, as it were, a heart of gold.

The moon stood on its own two feet.

The reeking maw of the blood-drunk ship, the enemy’s flagship, was silent and empty.

The first day I met my future self, I was aboard the old dirigible that lay in wait for me on the far side of the moon.

The child of two cats, and a tiger, a clown, a horse, a bird, a ship, and a dragon, stood on either side of the threshold of the Gatehouse, watching the throng of travelers who came in from all around the world, before he had any idea what was going on.


I think it’s probably doing this accidentally, stringing likely words and phrases together without understanding what any of them really mean. It’s not that it’s good at science fiction or magical realism; it’s that it’s trying and failing to predict what would have fit in with the usual human-written stuff. Some of the neural net’s first lines really betray its lack of understanding of the laws of physics. It really likes to describe the weather, but it doesn’t really understand how weather works. Or other things, really.


The moon was low in the sky, as though it had been shipped in from the farthest reaches of the solar system.

The first star I saw was a blue one, which became a scarlet one, and then a gold one, and green, and finally a yellow one, which for some years afterwards seemed to be an ebony one, or even a bubbling mass.

The sun rose slowly, like a mighty black cat, and then sank into a state of deep sleep.

The sea of stars was filled with the serenity of a million little birds.

The great blue field was all white, swept away by the blue-gold breeze that blew from the south.

The sky was cold and dark, and the cold wind, if it had not been for the clouds, would have lashed the children to the roof of the house.

The morning sun was shining brightly, but the sky was grey and the clouds aching.

The night that he finally made up his mind to kill the dog, the man was walking home from the store with his wife and child in the back seat.

Arthur the lion had been pretty much extinct for some time, until the time when he was petted by Abernathy the old woman, and her son, Mr. Popp.


One of the disadvantages of having a neural net that can string together a grammatical sentence is that its sentences can now be terrible in a more human sense, rather than merely incomprehensible. It ventures into the realm of the awful simile, or the mind-numbingly repetitive, and it makes a decent stab at the 19th-century style of bombastic wordiness. I selected the examples above for uncomprehending brilliance, but the utter tediousness below is more the norm.


The whites of my eyes shimmered, as if my mind were dancing.

I once went to a party where the dress code was as strict as a chicken coop with no leggings and no boots.

A black cloud drifted by, a mottled mass of hydrogen, a black cloud of hydrogen, with the definite characteristic of being black.

I say I am at sea, because I am standing upon the ocean, and look out across the barren, vast throng of the sea.

It is, of course, a trifling matter in the ordinary course of things, if a certain writer were to write a novel, which is a book of stories, which is a book of characters, wherein every detail of the story is stated, together with a brief description of the theme which it concerns.

There was a boy with blue eyes, with sandy hair and blue eyes that looked at all times like he had been pushed through a million compartments.

The Sun, with its rolling shaft of bright light, the brilliant blue of the distant golden sun, and the red glow of its waning corona, was shining.

The man who was not Jack the Ripper had been promoted four times in the last two years.

Felix the Paw was sitting at the table of his favorite restaurant, the “Bordeaux” in the town of Bordeaux, when his father, Cincinnata, came in to say good-by to the restaurant.

It, sir, gives me the greatest pleasure to hear that the Court be not too long in passing away: but that I may have leisure to prepare a new work for the publication of my friend and colleague, the renowned Epistemology, which is now finished; and in which I shall endeavour to show, that this very point is of the highest importance in the subject of the philosophy which I am about to treat of.

It was a rainy, drizzling day in the summer of 1869 and the people of New York, who had become accustomed to the warm, kissable air of the city, were having another bad one.


Repetitiveness is also common, especially at this conservative temperature setting. Once the neural net gets itself into a repetitive state, it doesn’t seem to rescue itself - it’s a problem that people have noticed in several versions of this algorithm. (It doesn’t help that I forgot to scrub the “title” that someone submitted to the dataset, which consists of the word “sand” repeated 2,000 times.)

The sky was blue and the stars were blue and the sun was blue and the water was blue and the clouds were blue and the blue sky was like a piece of glass.

At the end of the world, where the tides burst upon the drowned, there exists a land of dragons, of dragons, which is the land of the dragons.

It’s the end of the world, it’s the end of the world, it’s the end of the world, it’s the end of the world, it’s the end of the world, you’re dead.

There was once a land of sand, and sand, sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand sand


Increasing the temperature of the sampling would help the repetitiveness problem, in theory, letting the neural net venture into more interesting territory. But at a temperature of 1.0 the text tends to venture out of everyday surrealism and into wordy yet distractible incomprehensibility.


The praying mules on the top of the hills sounded the final klaxon, lifting their spiked front hoofs as they crept the last few feet of desert landscape past the crest of the enormous swathe of prehistoric sand.

In the glen of the Loch is a ladder that winds way up through a passage to a ledge with soft, moss-laden environmental standards.

Someone whipped a dead squash gibbet across the room, like some formidable war lord unleashing a heavy hunk of silver at home.

One blue eyed child stood up and cried out: “Douay, saurines, my Uncle – Fanny Pemble the loader!”

Jud - an elderly despot, or queen in emopheles, was sitting across the table from the king, looking very thoughtfully into the perplexions of the proceedings.

Oh, you’re a coward little fool, as if you couldn’t bear to leer at a Prunker or white-clad bodyguard quickly emerging from a shady, storm-damaged area of the city.

Hanging presently in his little bell-bottomed chamber on the landing-house, early in the morning, the iron traveler sat on a broad-blonde sandbricksannel blanket outside the gate of a vast and ancient island.

Long, glowing tongues trailed from your mouth as you listened to what was being said across this kingdom of ours, but growing a little more somber since the week that caused us to proclaim general war.

The night I first met Winnie the Pooh, I had sat in the Tasting-House and heard the Chef unpack the last of the poison upon his quiet dinnertable.


There is, of course, no perfect setting at which the neural net churns out sensible yet non-repetitive first lines. There are just varying shades of general awfulness, interspersed with accidental brilliance.

No matter how much you’re struggling with your novel, at least you can take comfort in the fact that AI is struggling even more.

I generated all the neural net sentences above using a generic “It” as the prompt that the neural net had to build on (it would usually go on to generate another 20-30 sentences at a time). But although the sentences are independent in my training data, GPT-2 is used to large blocks of text that go together. The result is that if I prompt it instead with, say, a line from Harry Potter fanfic, the neural net will tend to stick with that vein for a while. To read the results, enter your email here.

Update: I now have a few thousand unfiltered examples of neural net-generated first lines at the GitHub repository where I have the original crowdsourced dataset. Themes include: Harry Potter, Victorian, My Little Pony, and Ancient Gods.

My book on AI is out, and you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #you look like a thing
    • #nanowrimo
    • #gpt-2
    • #writing
    • #first lines
    • #gpt-2-355m
  • 2 months ago

I trained a neural net on my own book

I spent the last two years writing a book on AI. It’s called You Look Like a Thing and I Love You, and today it’s finally out!


Image: Cover of You Look Like a Thing and I Love You.  bot: please wait for a few seconds while a caption is being generated for the uploaded image  bot: caption: a sign on the side of a building  me: how many giraffes are there?  bot: 5

And since training neural nets on things is a large percentage of my blog, of course I had to find out what would happen if I trained a neural net on my own book.

I present to you: an exclusive excerpt from my book, You Look Like a Thing and I Love You (very nearly named “Kevin”), as imitated by the neural net GPT-2-345M:

“Using graphics as a starting point, AIs can place themselves and other objects in strange and unexpected places. Maybe a human was using a restroom in a strange combination that also produced a lightning bolt and a blank space when they approached the cubicle door—or maybe there’s a tree in the way that has fake bulbs set into it. Maybe there’re also human footprints that the AI thinks are there—which is odd if humans had accidentally walked along the bathroom’s edge. But what happens when the AIs get pushed, slammed, or tripped over by sheer force of their own carelessness? Maybe it’s time to train a new AI to deal with the situation in a more appropriate and smarter way.”

“AIs can sometimes solve problems in unexpected ways. For instance, some AIs have been known to use chemistry to make their robots fly or generate piranha-infested ice.”

“Artificial intelligence can also help automate tasks like splitting a car’s tires or slowing down a snail’s escape. Its versatility is its selling point.”

“AIs can chew through a dataset like butter, crank something up to the size of a football, or even defeat nobody with a few lucky laser cuts to the skin.”

“In case you were wondering, ‘how many giraffes are there?’ is a tough question to answer with any degree of precision. For one thing, as we all know, computing the number of giraffes is a huge time-sucking task.”

No wonder one neural net called my book “A featureless masterpiece of tough-minded language”. And it’s no wonder another neural net wrote “You Look Like a Thing and I Love You will stay in your cabinet, shelves and cereal box!”

I generated a longer excerpt, which you can read (along with bonus content every time I post) by entering your email here.

And if you want the book itself, you can now get it any of these several ways! Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s

    • #neural networks
    • #you look like a thing
    • #giraffes
    • #book
    • #ai weirdness book
    • #gpt-2
    • #gpt-2-345M
    • #book excerpts
  • 3 months ago

About

Welcome to my humor blog about artificial intelligence. I write about the sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.
