Welcome to part two of my special on AI and humour. This is a little overview of the field: not too techy — funny videos and cartoons below — and hopefully a good general introduction.
Just to set the parameters: I normally work on developing conversational AI projects: most recently, I did this for an international bank. This involves crafting and tweaking the algorithms and language to make the conversation more, well, human. So this might be written as:
Person (me, Mr Paddy Gilmore) + AI
Today we’re looking at ‘pure AI’: how humour can work with machines without those dastardly, messy, clumsy humans getting involved.
So! When it comes to the development of humour and AI, there are two areas of interest: generative humour and humour detection. Let’s take them in turn.
Generative AI
At the risk of you unsubscribing, Dear Reader, here’s a joke created by computer:
What do you get when you cross a murderer with breakfast food?
A cereal killer.
The machine that generated that gem was called JAPE (Joke Analysis & Production Engine): it was born in 1997, and this is a good example of generative humour.
It works very simply: take some super-huge computers, template-based systems, and a massive number of words, puns and meanings. Code them in, add the correct algorithms, and you’re pretty much good to go.
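To make "template-based" a little more concrete, here's a toy sketch in Python. To be clear: the lexicon entries, field names and template below are my own illustrative inventions, not JAPE's actual data or algorithms. The idea is simply that each lexicon entry links a punning phrase to the concept it means and to the category of the word it sounds like, and a fixed riddle template is filled in from each entry.

```python
# A toy, template-based pun generator, loosely in the spirit of JAPE.
# All data and structure here are illustrative assumptions, not JAPE's own.

# Each entry links a punning phrase ("cereal killer") to the concept it
# means ("murderer") and the category of its sound-alike word ("cereal"
# is a breakfast food).
PUN_LEXICON = [
    {"pun": "cereal killer", "concept": "murderer", "category": "breakfast food"},
    {"pun": "bright knight", "concept": "clever soldier", "category": "light source"},
]

# One fixed riddle template; a real system would have many.
TEMPLATE = "What do you get when you cross a {concept} with {category}? A {pun}."

def generate_jokes(lexicon):
    """Slot each lexicon entry into the riddle template."""
    return [TEMPLATE.format(**entry) for entry in lexicon]

for joke in generate_jokes(PUN_LEXICON):
    print(joke)
```

Run it and the first line out is the very joke above, which shows both the charm and the limitation: the machine can only ever produce what the templates and lexicon allow.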
You might ask: why do this? Good question. Well, there are educational applications. JAPE was later followed by STANDUP (System To Augment Non-speakers' Dialogue Using Puns)1, as a practical application for language-impaired children. No-one would argue with the usefulness of that.
The drawback with all of this is that it’s very template driven: knock-knock or lightbulb jokes, for example. There are lots of puns. And that’s limiting. These lovely cartoons (below) by Charlie Hankin don’t use spoken or written language. But they’re still funny.
…However, generative AI is only one side of the coin.
Humour Detection and AI
The other side of the coin is humour detection and AI. This is where a machine tries to detect the use of humour and, ideally (at least ideally for the people running the project and their profit-hungry investors), replicate it.
To explain how difficult this is, imagine you’re developing a self-driving car:
You would begin, I would imagine, by focussing on the laws of physics and, in particular, the laws of motion. You would use the laws of engineering to explore speed, braking distance, and steering. And there would be laws of optics to ensure the cameras on the car could see what’s around as you’re driving2.
In short: laws.
But there are many sciences in which we don’t have laws. And humour is one of them. Instead, we have theories. You’ve got Aristotelian Theory, or Hobbesian Theory, or Freudian Theory or — if you really want to impress people at parties — mention “the Ontological Semantic Theory of Humour”.
There are many, many others. And like most theories, some have been superseded, and our knowledge is increasing all the time. And (also) like most theories, it helps to have a specialist there to tell you which are valuable and which you can do without: this is a good part of what I do.
But this means that programming a machine is difficult.
In short, AI is good at creating humour, but not good at knowing what humour is, nor how it works.
What’s worrying is that, in place of this ignorance, there is a moralistic side of AI. For example, on the back of my newsletter last week, a good friend3 sent me this from ChatGPT:
It sounds so considerate, right? But this attitude would take away the joys of a great ad, like this one featuring, er… a stupid man:
To which you might counter: “Ah, well, that’s an old ad, from 1986. And times have changed. And it’s a tobacco ad: ugh.”
OK. So take this stupid man:
Ken starred alongside Barbie in last year’s huge critical and box office success: ‘as of January 5, 2024, Barbie has grossed a worldwide total of $1.4 billion’4.
And part of his appeal is… he’s a little bit stupid. Watch the song below. Intelligent men, in my experience at least, don’t plead, “I’m enough. And I’m great at doing stuff.”
We laugh at him, sure. But we laugh with him, too. Why? Because we've all made mistakes: OK, we might not have turned up dressed as a chicken at a drinks party, and we might not have a wardrobe made up of garish pastels, but it's the acknowledgement of our fallibility that makes it funny. To be human is to mess up.
Moreover, if I could wave a magic wand, I would prefer a ChatGPT that might say, “Making a joke about a stupid man is too ambiguous and subtle an area to comment on. Many of the great comedians — including Charlie Chaplin and Laurel & Hardy — used stupidity and naivety as the bedrock of their humour. In short, don’t ask a computer.”
But ChatGPT doesn’t say this. Because the one subject it won’t talk about is its ignorance.
***
One of the curious predicaments about writing on humour and AI is that it feels more like writing journalism: this is only a snapshot, and things will change. It could well be the case that, as we get to understand humour better, and as AI continues its ever-faster march into the future, the two will come together.
But humour has a funny (forgive me) way of getting through the cracks. The words humour and human are etymologically very close — and for this, at least for now, we should be grateful.
Many thanks for reading,
Paddy
Book a free half-hour meeting with me here.
pg@studiogilmore.com
+44 7866 538 233
LinkedIn: here
Forgive me: these acronyms are excruciating, I know.
The highest I got in a physics exam at school was a pitiful 37%. So even writing that paragraph feels like doing a PhD.