You are sitting in a room full of filing cabinets. The only egress is a single slit in the wall. Paper comes through the slit, but all the writing on it is in Chinese. However, you have English instructions for taking the piece of paper, sorting through the filing cabinets, and using the contents of the cabinets (also in Chinese) to compose a response, in Chinese, to the question on the paper. You push your response back out through the slit to the person on the other side.
If the person on the other side has their question answered, does this mean you “understand” Chinese?
- A paraphrase of the Chinese Room argument posed by John Searle in Behavioral and Brain Sciences, 1980.
Via Critical Distance, this post on the death of the author in The Elder Scrolls V caught my attention. The argument boils down to this: Computers can’t generate emotionally compelling content, therefore automated content generation is a folly.
On the opposite side of things, Eric at TheGameCritique recently said:
“Enslaved is the game that finally made me think about abandoning single player games and their strictly authored narratives”
There’s a bit of dissonance between the two opinions – on one hand, holding up human-generated content as worthy and necessary; on the other, mounting frustration with the poorly-written single-player content that permeates the industry. Ignoring the moral/artistic point of view for a second, the capitalist reality of the videogame industry is that content is expensive to generate and the demand for high-quality content is unknown (multiplayer games tend not to have much content, but lots of fun mechanics. See also: Minecraft). That leaves a capitalist system two options. One: re-evaluate your price points, margins, and quality gates – finding the correct market position for great content vs mediocre content vs multiplayer content – which I suspect will leave high-quality writing out in the cold. Two: work from the supply end and make content creation cheaper through technological advances.
Of course, automated creation of creative content taps into something larger, as the “death of the author” post captures. There is a long-standing debate about what artificial intelligence is capable of – “soft” AI, which holds that a computer program can never equal a human brain in creative power, vs “hard”* AI, which holds that a computer program is technically capable of mimicking a human brain, and that it’s just a question of understanding the brain better and building faster, more dynamic hardware and software to match human processing power. How you feel about “hard” vs “soft” AI reflects how you view the human brain: is it a mystical and unknowable complexity? Or is it just another form of hardware, albeit an extremely complex one with at least 100 billion neurons (in this metaphor, a neuron is best understood as an independent processor core) and at least 60 trillion connections in the form of electrical signaling paths and hormones?
* or sometimes “Strong AI”, since attaching ‘hard’ to scientific concepts is indelibly linked to the colloquial ‘hard’ vs ‘soft’ skillsets, where things that are presented as dominated by women in our society such as communication skills are labelled ‘soft’ ie ‘easy’ and thus are undervalued. plus every time I see the word “hard” I mentally insert the word “cock” after it, because I am 6 and penis jokes are hilarious
On top of the hardware challenges, the philosophical question posed by the Chinese Room thought experiment is this: Even if we have a perfect algorithm (in this case, the instructions for using the filing cabinets), does the computer executing the algorithm (the person sitting inside the room) ever demonstrate “understanding”? Or is the executor merely a soulless automaton following the instructions printed on the page? The argument is 30 years old now, and in some ways the debate is still burning – though the consensus seems to be that the experiment hinges on the definition of “understanding”. If human “understanding” is simply the result of a biochemical algorithm wired into our brains, why would it be impossible for a computer to run that same algorithm (complexity of implementation aside – this is, after all, a thought experiment)?
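The room itself can be sketched as a trivial program – the filing cabinets become a lookup table, and the person inside becomes the interpreter executing it. This is just an illustrative toy; the question/answer pairs are hypothetical stand-ins for Searle’s cabinets:

```python
# A toy Chinese Room: the "filing cabinets" are a lookup table, and the
# person in the room is whatever executes this function. The entries are
# invented placeholders, not part of Searle's original paper.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气如何？": "天气很好。",  # "How's the weather?" -> "It's nice."
}

def room(question: str) -> str:
    # Follow the instructions: find the matching drawer, copy out the reply.
    # Nothing in here models meaning -- it is pure symbol manipulation.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(room("你好吗？"))  # -> 我很好，谢谢。
```

The person outside the slit gets a fluent answer either way; whether anything in the loop “understands” Chinese is exactly the point under dispute.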
We can rephrase this for relevance: If human creativity is simply the result of a biochemical algorithm wired into our brains, why is it impossible for a computer algorithm to also demonstrate creativity? You can rail against “template-based storytelling” all you want, but – what story isn’t template-based? From Hero With a Thousand Faces to the modern Hollywood 3-act movie, from genre clichés to Mad Libs, you have to work fucking hard to find a story that doesn’t fit into some template. It’s not grounds for disqualification!
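Since Mad Libs got name-checked, here’s the degenerate case made literal – a fixed template plus slot-fillers, the bottom rung of the ladder every generated story climbs. The template and word lists are invented for illustration:

```python
import random

# Mad Libs made executable: hypothetical template with named slots.
TEMPLATE = "The {adjective} {hero} left {place} to face the {monster}, and returned changed."

SLOTS = {
    "adjective": ["reluctant", "exiled", "gray-bearded"],
    "hero": ["smith", "archivist", "dwarf"],
    "place": ["the fortress", "the riverlands", "home"],
    "monster": ["dragon", "winter", "past"],
}

def tell_story(rng: random.Random) -> str:
    # Fill each slot with a random pick: the structure is fixed,
    # only the surface details vary from telling to telling.
    return TEMPLATE.format(**{slot: rng.choice(words) for slot, words in SLOTS.items()})

print(tell_story(random.Random(0)))
```

Everything more sophisticated – a three-act generator, a Dwarf Fortress chronicle – is this same move with richer templates and rules for choosing what goes in the slots.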
The question of procedural creativity isn’t just a highfalutin’ academic exercise. It’s literally the goal of Dwarf Fortress – dynamic stories that emerge naturally from the game mechanics. Is it completely 100% there yet? No. Are people still able to find compelling stories in it? Why don’t you look at these beautiful illustrated stories about people’s encounters with the game, and tell me. Tell me a computer system isn’t able to hack a quick-and-dirty interface with the human brain, allow our imagination to fill in the gaps, and result in a beautiful, logical progression of events that “tell” a compelling story.
Also? We have an algorithm for dynamically creating rich, meaningful stories on the fly. It’s called improvisational theatre! Sure, it hinges on human trust and some bits that aren’t necessarily mathematical in nature. You can’t plug it into a computer and hit “run”. But we understand the nature of storytelling, what makes a story competent, which elements are just good and worth repeating. It’s hard as hell to pull off – I’ve done only one or two successful “Harold” long-form improv stories after a year of rigorous training, and the process nearly destroyed everyone involved – but it’s possible. And quite honestly, if you pull it off, it’s beautiful and haunting and, as John Candy put it, “oh my GOD this is better than sex”.
Plus automation is being used for other forms of generation: game design, Mario levels, Starcraft AI, etc. This is the end goal of AI research: finding common tasks and hacking together computer routines to automate them. And the human brain is primed to accept input – I don’t literally think my cat understands much besides ‘food’ and ‘getting rubbings’, but that doesn’t stop me from projecting human qualities onto her – the need for attention, a specific and unique personality, agency that goes beyond instinct. Similarly, humans see faces where there are none, and look for explanations where there are none. The brain is susceptible to hacks, in other words. Our brains are pattern-matching machines, and we draw connections even when those connections are not created by a system with “understanding” or “creativity”.
So long story short: automated content generation is definitely possible assuming you’re a Strong AI proponent (which, again, most computer scientists are: while the brain’s specific algorithms aren’t known in any significant detail, the general mechanisms are well understood and can be modeled by a formal system). Automated content generation can probably be pretty good – not right now, but at some point in the future. It might be good enough to trick people into thinking it’s hand-crafted. However, Elder Scrolls 5 will almost certainly fail at this task and the game will be as boring and aimless as the past 4 games. Hey-o!