Google Bard can describe images. But it turns out that what you get depends a lot on how you ask.
I gave Bard this image and the prompt "Please describe this spooky Halloween scene". On the right is the image I got when I took the
Since 2019 I've generated October drawing prompts using the year's state-of-the-art text-generating models. Every year the challenges are different, but this was one of the hardest years yet. Large language models like ChatGPT, GPT-4, Bing Chat, and Bard are all tweaked to produce generic, predictable
ChatGPT, Bard, GPT-4, and the like are often pitched as ways to retrieve information. The problem is they'll "retrieve" whatever you ask for, whether or not it exists.
Tumblr user @indigofoxpaws sent me a few screenshots where they'd asked ChatGPT for an explanation of
On July 31, 2023, a giraffe with no spots was born at Brights Zoo in Tennessee. She's a uniform brown with pretty white highlights around her face and belly, like a Jersey cow or a white-tailed deer.
Image recognition algorithms are trained on a variety of images from
A reader wrote in a while ago with a suggestion: they were about to have a baby and wondered if I could use AI to come up with some new ideas for baby onesies. I can't find the letter any more, and I don't remember how
I've noted before that because AI detectors produce false positives, it's unethical to use them to detect cheating.
Now there's a new study that shows it's even worse. Not only do AI detectors falsely flag human-written text as AI-written, the way in
Some of the recent image-generating models have this thing where they can fill in the blank parts of images. It's handy when you want to show them exactly how to give you more of the same. Like these animal emoji. See if you can tell which ones I
ChatGPT text can sound very knowledgeable until the topic is something you know well. Like tic-tac-toe.
Once I heard that ChatGPT can play tic-tac-toe I played several games against it and it confidently lost every single one.
Part of the problem seemed to be that it couldn't keep
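To give a sense of what "keeping track" actually involves, here's a minimal sketch of the bookkeeping a tic-tac-toe player needs: a board state that persists between moves, a check that a cell isn't already taken, and a win test after every move. This is just an illustration of the task, not anything from my games with ChatGPT; the function names are my own.

```python
# Minimal tic-tac-toe bookkeeping: a 3x3 board stored as a flat list
# of 9 cells, plus a win check over every row, column, and diagonal.

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def play(board, cell, mark):
    """Place mark ('X' or 'O') in cell 0-8, rejecting occupied cells."""
    if board[cell] != " ":
        raise ValueError(f"cell {cell} already taken")
    board[cell] = mark
    return board

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = [" "] * 9
for cell, mark in [(0, "X"), (4, "O"), (1, "X"), (8, "O"), (2, "X")]:
    play(board, cell, mark)
print(winner(board))  # X completed the top row
```

A chatbot that can't maintain this state between turns will happily accept illegal moves or miss completed lines, which matches what I saw.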
I'm interested in cases where it's obvious that chatbots are bluffing. For example, when Bard claims its ASCII unicorn art has a clearly visible horn and legs but it looks like this:
or when ChatGPT claims its ASCII art says "Lies" when it clearly says
Large language models like ChatGPT, GPT-4, and Bard are trained to generate answers that merely sound correct, and perhaps nowhere is that more evident than when they rate their own ASCII art.
I previously had them rate their ASCII drawings, but it's true that representational art can be