

The AI Of The Future Might Work A Lot Like Octopus Arms
How A Novel About Intelligent Octopi May Predict The Future Of Human-AI Teaming
I wrote this article purely on impulse last week, and I need to warn you up front: it’s going to sound weird. I’m going to summarize some articles about AI that someone else wrote, then go off on what seems like a complete tangent where I review a sci-fi novel about intelligent octopi in space. I promise I’ll tie it all together in the end.
Artificial intelligence has been in the news a lot over the last year, which has seen a proliferation of AI art programs. AI has particularly dominated headlines– and social media conversations– over the past month or so thanks to the release of ChatGPT, a program that can write passable conversations, songs, poems, and advice, and that at least sometimes presents the appearance of real understanding of the topics it writes about.
On New Year’s Day, economist Noah Smith published an article in which he described AI as a third revolution in how humans learn about the world. His article is well worth a read– not just as an article about AI, but even more for how he classifies the most fundamental technological developments in human history.
The first revolution, in his telling, was the ability to record information– first orally, then in writing– which allowed us to build on each other’s knowledge, generation after generation. The development of grammatical language was arguably part of this first revolution.1 That revolution is mostly complete now, but I’d argue that improvements in information storage, presentation, and organization– like search engines, spreadsheets, or the Dewey Decimal System– are part of it.
The first revolution allows us to say “I know this because I heard/read it.”
The second revolution was science, which has allowed us to understand the underlying principles behind how things work. The development of science is still very much ongoing. Scientific theories and principles tend to be parsimonious– they’re short and simple, yet explain a lot.
Science allows us to say “I know this because it logically follows from these underlying principles. I understand how this works.”
The third revolution, artificial intelligence, relies on extremely complex algorithms. GPT-3, the language model that ChatGPT is built on, uses 175 billion parameters. Unlike science, it’s not parsimonious– while we can understand the general principles it operates on, the reasoning behind any particular operation is way too complex for us to understand.
AI allows us to say “I know this because my AI program figured it out for me, but I don’t know how the program reached that conclusion.” Also, unlike recording or science, AI can act on that understanding for us.
As per the title of his article, Smith opines that this will work a bit like magic– producing somewhat predictable results via means that the user can’t understand except on the most abstract of levels.
The question that now arises is: what does the future look like when we have all these AI helpers feeding us knowledge without explaining how that knowledge was arrived at, and doing stuff for us for reasons we don’t fully understand?
As it happens, a few months ago I read a book that kinda sorta answered that question– a hard sci-fi novel about sentient octopi. In space, of course.
Children Of Ruin: The Novel That’s About Space Octopi But Also Kind Of About AI
Children of Ruin is the second novel in the Children of Time trilogy. In the first novel, Children of Time, humanity tries to uplift monkeys to human-level intelligence, but screws up and does that to Portia spiders instead, which is more interesting because Portia spiders are fascinatingly intelligent for spiders and uplifted monkeys would’ve just been shittier humans anyway.
So in book two, humans and their new spider bros encounter uplifted octopi, along with some other stuff that doesn’t matter here. This proves an even bigger challenge than dealing with spider-kind, because octopi do not experience consciousness in the same way humans do.
Let’s start with what we know about octopi in real life. Two-thirds of their neuronal mass is in their arms rather than their brains. Unlike ours, their arms are capable of highly complex reflex reactions, can learn new ones, and can to some degree act autonomously, without input from the brain.
As a trade-off for this, octopi have more difficulty consciously controlling their arms– their brains don’t have a “mental map” of their bodies the way we do, and they have a very limited sense of proprioception– they usually have to look at their arms to be fully aware of what they’re doing.
Some scientists even describe octopi as having nine brains– a central brain and one for each arm. New research suggests that they do have at least a limited capability for proprioception and conscious arm control, however, so their arms aren’t entirely autonomous.
The novel isn’t totally accurate to our current understanding of octopus neurology– Tchaikovsky assumes the arms are more autonomous from the brain than they probably are.
Anyway, in Children of Ruin, a mad scientist infects octopi with an uplift virus that causes them to grow bigger and more intelligent with each generation until they’re roughly as smart as humans. The virus makes both the central brains and the arm brains more intelligent in equal measure, without changing the fundamental relationship between them.
So the octopi have a central brain called a crown, eight arm brains called the reach, and a tenth brain called the veil, which controls their skin pigmentation in response to the crown’s emotions. The veil effectively leaves octopi unable to hide their emotions, and appears to be entirely Tchaikovsky’s invention– I’ve seen no indication that real-life octopi lack conscious control over their pigmentation changes.
Almost everything the octopi do is controlled by their reach. Their arms build and repair technology autonomously; the crowns don’t understand how any of it works. Even a large part of communication between octopi is conducted by the reach, in the form of wrestling matches which act as something halfway between an argument and a fistfight.
The crown sets goals, and the reach figures out how to pursue those goals. The crown knows what the reach has decided the octopus should do, but not necessarily why or how it leads to the goal.
The whole idea of ten separate brains is a bit short on internal consistency here. Sensory input is spread between different parts of the octopi’s “minds,” with the crown being responsible for vision and hearing (and presumably taste and smell, though that never comes up), while the reach receives tactile information. How, or whether, this information gets synthesized when it’s divided between different “brains” is never explained, as far as I can remember.
In any case, the whole thing provides a decent mental model for how future humans equipped with an array of AI sidekicks might work. The ten brains of these space octopi operate a bit like a presidential cabinet that doesn’t communicate very well.
The crown is like the president– it sets very high-level directives, primarily expressed as goals rather than means.
The reach is like the various department heads. They translate those high-level directives into policy and carry out those policies. Crucially, they never explain the reasoning behind their actions to the president. It’s just, “here’s what I’m doing, I promise it’ll lead us to that goal you laid out.”
The veil is like a press secretary who sits in on these meetings, but never actually talks to anyone in them. After each meeting, the veil gives a press conference that provides the overall gist of what the reach and crown were saying– which, again, consists only of emotions and very generalized ideas of what’s being done, because the reach doesn’t explain anything.
Due to the lack of communication, the president can’t tell his press secretary to spin the information at all– he can’t be like “don’t tell them I’m scared, project confidence instead.” The information conveyed by the veil is therefore vague, but honest.
Most importantly, they all act like this is normal. At no point does the crown think it’s completely crazy that it has no idea why it’s doing what it does, at least any more than we think it’s crazy that a lot of our mind works on an unconscious level.
This has two big upshots in the book. First, octopi are incredibly fractious and mercurial. They’re divided into countless factions, with small groups of octopi constantly switching between factions, and individual octopi constantly moving to different groups within a faction or different factions altogether.
These changes of allegiance always appear sudden from the outside– there’s never much, if any, sign that an octopus is wavering or reconsidering before it abruptly changes its allegiance or even its overall ideology.
Second, octopi can’t really explain why they do this beyond “My reach decided it was the best course of action,” and maybe some really vague justification like “humans are dangerous.” Even once humans establish communication with the octopi, it’s frustratingly hard to communicate effectively because it’s not just the language that differs between the two species; it’s the entire way that they think.
AI Might Be Kind Of Like That, But Not Quite
Tchaikovsky is almost certainly overstating the degree of separation between an octopus’s arms and brain. Whether he is or not, though, future AI probably won’t be quite so incomprehensible.
It’s true that, as Noah Smith points out, we won’t be able to fully understand the “reasoning” behind an algorithm that takes 175 billion parameters into account. However, future artificial intelligence programs probably could be designed to at least provide a very general summary of why they reached the conclusions they did.
It wouldn’t be enough to provide true, full understanding to a human user– probably something similar to an expert trying to describe their work to a layperson, at best. Hopefully an expert with good communication skills.
If AI is a bit like magic, maybe that means it will be like casting spells. But it might be more like having a demon to do your bidding– one that can be asked to explain itself, albeit vaguely.
In any case, read Children of Ruin. I wouldn’t go so far as to say it gives you a preview of what AI will be like, but it will at least help you start to wrap your head around the idea.
1. Many animal species do have a rudimentary form of language, but it lacks grammar and consists of simple declarations: “food here,” “tiger,” “hello friend,” etc. Without a grammatical structure, it can’t be used to express more complex ideas.