Probble, quingel, and himumma: none of these words means anything, and none can be found in a dictionary, but something about them is funny. Or at least it should be, according to a new study.
Chris Westbury, a professor and psycholinguistics researcher in the University of Alberta’s Department of Psychology, conducted a study examining whether the humour of made-up words is predictable.
Westbury said the study, which was published in the Journal of Memory and Language, was inspired by his work with aphasic patients, who have language deficits following brain damage. While testing their ability to distinguish real words from computer-generated non-words, Westbury noticed something striking.
“We saw that the people would sometimes laugh at our non-words,” Westbury said. “One of the words was ‘snunkoople,’ and it stuck with me because there’s something funny about it.”
From there, Westbury and his fellow researchers set out to find a connection between computer-generated non-words and humour. After a follow-up experiment showed that subjects were consistent in their relative humour ratings of non-words, Westbury sought a way to predict the effect.
Inspiration for the theory, however, came from an unexpected place: The World as Will and Representation, an 1818 work by the famously pessimistic philosopher Arthur Schopenhauer, contained a promising idea about humour.
Schopenhauer’s theory essentially held that humour is a violation of expectation: the greater the violation, the funnier the joke. It was this linear relationship that Westbury set out to investigate.
Quantifying humour had been attempted before, but jokes admit so many possible forms and interpretations that they resist measurement. Words, on the other hand, can have their relative weirdness measured via Shannon entropy, a measure of how improbable their letters are.
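As a rough illustration of the idea, and not the study’s exact procedure, the sketch below scores a string by summing the standard Shannon term -p log2(p) over its letters, using approximate English letter frequencies; the rounded frequency values here are illustrative assumptions.

```python
from math import log2

# Approximate English letter frequencies (rounded, illustrative values;
# any published frequency table would serve the same purpose).
LETTER_FREQ = {
    'a': 0.082, 'b': 0.015, 'c': 0.028, 'd': 0.043, 'e': 0.127,
    'f': 0.022, 'g': 0.020, 'h': 0.061, 'i': 0.070, 'j': 0.002,
    'k': 0.008, 'l': 0.040, 'm': 0.024, 'n': 0.067, 'o': 0.075,
    'p': 0.019, 'q': 0.001, 'r': 0.060, 's': 0.063, 't': 0.091,
    'u': 0.028, 'v': 0.010, 'w': 0.024, 'x': 0.002, 'y': 0.020,
    'z': 0.001,
}

def word_entropy(word: str) -> float:
    """Sum of -p * log2(p) over the word's letters: one simple way to
    score how improbable a string's letters are."""
    return sum(-LETTER_FREQ[c] * log2(LETTER_FREQ[c]) for c in word.lower())

# Rarer letters contribute smaller terms, so strings built from unusual
# letters get lower totals; per the study, lower entropy reads as funnier.
for w in ("snunkoople", "probble", "quingel", "himumma"):
    print(w, round(word_entropy(w), 3))
```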
Non-words low in Shannon entropy were usually perceived as funnier, while higher-entropy words tended to be judged more serious. Westbury’s second experiment examined how consistently people made this judgement.
“We had people choose which of two non-words were funnier to them, and manipulated how far apart they were in entropy,” Westbury said. “The idea being that the further the distance, the easier the decision.”
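To make that design concrete, here is a tiny hypothetical sketch, reusing word_entropy from above: from a pool of non-words, pick the pair whose entropies lie farthest apart, which on the study’s logic should be the easiest pair to judge.

```python
from itertools import combinations

# Hypothetical pool; these are the article's examples, not the
# study's actual stimuli.
nonwords = ["snunkoople", "probble", "quingel", "himumma"]

# The pair with the largest entropy gap: the (predicted) easiest
# "which is funnier?" decision.
easiest = max(combinations(nonwords, 2),
              key=lambda p: abs(word_entropy(p[0]) - word_entropy(p[1])))
print(easiest)
```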
Across the experiments, the study concluded not only that there was a relationship between Shannon entropy and humour, but that the relationship was linear and predictable. Though studies of joke humour quickly become far more complex, Westbury said a slight tweak in linguistic methodology could draw new results out of the established method.
“(The experiment) was just a probability calculation for the letters,” Westbury said. “But it means you could see two different non-words with the same entropy. Pushing the idea would involve seeing what other probabilities are being violated that we could control, and that would lead to building funny non-words.”
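In that spirit, a minimal and purely hypothetical extension of the sketches above: generate random strings, score them with the same letter-probability measure, and keep the lowest-entropy candidates, which the study’s linear relationship predicts should feel funniest. A real generator would also enforce pronounceability, which this sketch ignores.

```python
import random
import string

# Builds on word_entropy from the earlier sketch.

def random_nonword(length: int) -> str:
    """Draw letters uniformly at random; no pronounceability check."""
    return "".join(random.choice(string.ascii_lowercase)
                   for _ in range(length))

# Score many candidates and keep the lowest-entropy (rarest-letter) ones.
candidates = [random_nonword(8) for _ in range(10_000)]
for w in sorted(candidates, key=word_entropy)[:5]:
    print(w, round(word_entropy(w), 3))
```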
The research could have immediate applications in fields such as product naming, where a candidate name’s entropy could be calculated and matched to its intended use. Westbury said that when naming a more serious product, a low-entropy word might not be the best choice.
Westbury said his findings also bear on how people think about emotion: being able to predict emotional reactions to humour with a probability model is a significant result.
“The idea that we’re doing probability calculations by emotion is really cool,” Westbury said. “It suggests emotion is a way of doing math, but you don’t have to do a calculation; the answer is delivered emotionally.”