This is some transhumanists’ dream, a future where we can completely trick out our bodies and transcend the limitations of human biology. It’s also a description of what the title character from the 1983 cartoon Inspector Gadget can do.
Rose Eveleth is an Ideas contributor at WIRED and the creator and host of Flash Forward, a podcast about possible (and not so possible) futures.
For those who aren’t familiar with the cartoon, the premise is simple: Inspector Gadget is, as his name implies, an inspector, or detective. He’s also a walking gadget, who can turn his body into nearly anything. And yet, with all that power, Gadget can’t solve a single mystery. Every episode, Gadget is called upon by his boss, Chief Quimby, to help solve a crime, nearly all of which are perpetrated by the villain Dr. Claw. For one reason or another, Gadget is always accompanied by his 10-year-old niece, Penny, and her dog Brain. And despite being equipped with every tool he could possibly need, it’s the brilliant Penny, a completely boring noncyborg, who saves the day every time.
Sure, the cartoon (and subsequent film adaptations) are over the top and ridiculous. But our hapless detective can teach us something about the ways we think about bodies, bionics, data, and the future of human-machine interfaces. Gadget’s antics poke real holes in the fantasies that some transhumanists and “body hackers” have about how the body works, and what we might be able to ask it to do.
It is in scenes like this that I think of two things: Three Mile Island and butter production in Bangladesh. Let me explain. The former is the biggest nuclear meltdown in American history. The latter is a spurious economic predictor proposed in 1998 to poke fun at data-mined financial forecasting. But they’re tied together by the same thing that dooms Gadget: an excess of information. Three Mile Island (like Chernobyl and other nuclear accidents) happened for a variety of reasons—lax regulations, slashed budgets, overworked employees, scientific rivalries—but its most critical moments were marked by information overload. The control panel at the nuclear plant was designed to display all kinds of data, but there was no way the operators could keep track of the whole system at once. In a sea of signals, you can miss the most important ones.
Or, you can see one that means nothing at all, as in the case of butter production in Bangladesh, a signal that economist David Leinweber described in 1998. According to his calculations, three things could “explain” the performance of the S&P 500 with 99 percent accuracy: American cheese production, the Bangladeshi sheep population, and butter production in Bangladesh. Leinweber was intentionally poking fun at the methods he employed, arguing that with enough data but insufficient context you can correlate almost anything. At first, Leinweber wasn’t even going to publish the work; he simply thought it was a funny trick. But then, “reporters picked up on it, and it has found its way into the curriculum at the Stanford Business School and elsewhere,” he writes in the paper he did eventually publish. “Mark Twain spoke of ‘lies, damn lies and statistics.’ In this paper, we offer all three,” he writes.
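You don’t need butter statistics to see Leinweber’s point in action. Here’s a minimal sketch (not his actual method, and the numbers are made up): generate a short random “market” series, then search thousands of equally random “indicator” series for the one that correlates best. With enough candidates, something will always appear to track the market closely, despite having no connection to it at all.

```python
import random

random.seed(0)


def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


# A fake "market": 20 years of random annual returns.
market = [random.gauss(0, 1) for _ in range(20)]

# Search 10,000 random indicator series for the best-correlated one.
best = max(
    ([random.gauss(0, 1) for _ in range(20)] for _ in range(10_000)),
    key=lambda series: abs(correlation(series, market)),
)

# The winner "explains" the market remarkably well -- by pure chance.
print(f"best |r| found among random series: {abs(correlation(best, market)):.2f}")
```

Run it and the best correlation is typically very high, even though every series is noise. That is the butter-in-Bangladesh trap in a dozen lines: test enough signals against a small dataset and a spurious winner is guaranteed.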
The point here is that more data doesn’t mean much if you can’t do something useful with it. You can have all the data in the world and be just as useless as Inspector Gadget. Today, in conversations about AI, people talk about the rise in computing power, the rise in giant data sets, and how those two things will inevitably lead to super-powerful systems. But there’s a step missing in those arguments, and it’s a crucial, difficult, and time-consuming one: If all that data isn’t labeled or organized in a meaningful way, even the greatest supercomputer can’t do meaningful work with it. (This is why you’re still asked to train the algorithms with a captcha any time you sign up for a newsletter, for example. These data sets often need human eyeballs and brains to work on them, and those are usually pretty expensive.)
This doesn’t just apply to data. More tools, as any overexcited new home chef can tell you, don’t make you a better cook. Inspector Gadget has, it seems, every possible piece of gadgetry at his disposal, but he can’t see the forest for the bionic trees. Penny, on the other hand, undistracted by an endless number of technological choices, remains clear-eyed enough to save the day.
Penny is not completely untechnological. She is, in fact, a brilliant inventor in her own right. In many episodes she builds and deploys devices to help solve the case—a radar system, a long-range camera, a smart watch. But where Gadget has technology embedded within him as a bodily element, and as such has less than perfect control over it, Penny uses technology as a tool outside her body.
Were the creators of Inspector Gadget trying to poke fun at the notion of Cartesian duality? Who can say, really. But the location of the technology here highlights the way we think about integrating machines with our bodies.
The “body as machine” analogy dates back to at least the industrial revolution, when the idea that the body might be like the machines we were creating took hold. As Randolph Nesse, a professor at Arizona State University, wrote in his essay “The Body Is Not a Machine,” “The metaphor of body as a machine provided a ladder that allowed biology to bring phenomena up from a dark pit of mysterious forces into the light where organic mechanisms can be analyzed as if they are machines.” The analogy proved valuable in advancing our understanding of the body.
Now, this “body as machine” trope is the well from which much of current-day body hacking springs. If the body is a machine, if the brain is a computer, if muscles are simply pulley systems, then we can go in and tinker at will. We can create Inspector Gadget because we can create computers and Boston Dynamics robots and spaceships. But the body is not a machine. It does not behave like one. We did not invent it and we mostly cannot program it. Its parts are not plug-and-play, nor are they discrete. And many researchers and writers have highlighted how our insistence on this model is actually hurting progress. “In psychiatry, thinking about the mind as a machine has led to a debacle about diagnosis,” writes Nesse.
As a (reluctant) tech reporter, I get a lot of press releases for products that only work if you assume the body functions like a device. If we could just measure things more accurately, these pitches argue, we could solve eating disorders, infertility, chronic pain, depression—the list goes on. Silicon Valley tech gurus are all in on “biohacking” their bodies just like they “growth hacked” their startups. The company Daysy claimed that it could help people prevent pregnancy by measuring body temperature—when your body creates X signal, we know it’s doing Y thing. Daysy claimed that using this method it could identify whether a user was fertile with 99.4 percent accuracy. Turns out the paper Daysy was using as the basis for that claim was recently retracted.
Body-as-machine fantasies also imagine that the technology will work as we hope, every time. But anybody who’s ever used, well, any kind of device can tell you that that’s not true. Inspector Gadget’s entire comedic repertoire (unbeknownst to him, of course) lies in exactly these failures. His gadget arms extend but won’t contract; his coat inflates when he doesn’t want it to; his feet turn into roller skates when he wants skis. His existence makes plain the ridiculousness of assuming that something like Robocop could truly happen—a seamless, perfectly efficient blend of man and machine.
It’s comforting to think of the body as a machine we can trick out. It helps us ignore the strange fleshy aches that come with having a meat cage. It makes a fickle system—one we truly don’t understand—feel conquerable. To admit that the body (and mind that sits within it) might be far more complex than our most delicate, intricate inventions endangers all kinds of things: the medical industrial complex, the wellness industry, countless startups. But it might also open up new doors for better relationships with our bodies too: Disability scholars have long argued that the way we see bodies as “fixable” ultimately serves to further marginalize people who will never have the “standard operating system,” no matter how many times their parts are replaced or tinkered with.
There is another scene in the first episode of Inspector Gadget that makes clear the distinction between Penny and Gadget. While Penny uses her radar system to detect the mechanical monster in the lake, Gadget has deduced that the scientist they have been charged with protecting must be hiding up a tree, and is walking along the lake shouting for the professor to come down. He is quite literally “barking up the wrong tree.” I’d argue that the tech industry’s current fixation with the body as a hackable device is exactly that.