"Explain It to Me Again, Computer"
What if technology makes scientific discoveries that we can’t understand?
February 25th, 2013
When scientists think about truth, they often think about it in the context of their own work: the ability of scientific ideas to explain our world. These explanations can take many forms. On the simple end, we have basic empirical laws (such as how metals change their conductivity with temperature), in which we fit the world to some sort of experimentally derived curve. On the more complicated and more explanatory end of the scale, we have grand theories for our surroundings. From evolution by natural selection to quantum mechanics and Newton’s law of gravitation, these types of theories can unify a variety of phenomena that we see in the world, describe the mechanisms of the universe beyond what we can see with our own eyes, and yield incredible predictions about how the world should work.
The details of how exactly these theories describe our world—and what constitutes a proper theory—are best left to philosophers of science. But adhering to philosophical realism, as many scientists do, implies that we think these theories actually describe our universe and can help us improve our surroundings or create impressive new technologies.
That being said, scientists understand that our view of the world is always in draft form. What we think the world looks like is constantly subject to refinement and sometimes even a complete overhaul. This leads us to what is known by the delightful, if somewhat unwieldy, phrase pessimistic meta-induction: we think we understand the world really well right now, but so did every previous generation, and they got it wrong too. This is why scientists love Karl Popper, who argued that we can never prove a theory correct, only attempt to overturn it via falsification. So we must never be too confident that we are completely correct this time. In other words, we think our theories are true but still subject to potential overhaul. Which sounds a bit odd.
But when properly internalized, this can be wonderfully exciting. A professor of mine once taught a class on a Tuesday, only to read a paper the next day that invalidated what he had taught. So on Thursday he told his students, “Remember what I told you on Tuesday? It’s wrong. And if that worries you, you need to get out of science.” Science is always in draft form, and this is clearest at the frontier: it is where scientists work and why they find their inquiry so exciting.
As I discuss in my book The Half-Life of Facts, this is not always a process of completely forward progress, but overall we are improving our view of the world and reducing error in our understanding. This was delightfully encapsulated in a quote by Isaac Asimov: “[W]hen people thought the Earth was flat, they were wrong. When people thought the Earth was spherical, they were wrong. But if you think that thinking the Earth is spherical is just as wrong as thinking the Earth is flat, then your view is wronger than both of them put together.”
As we have improved our understanding of the shape of our planet, we have overhauled what we thought it looked like, moving from flat to perfectly spherical to an oblate spheroid. And along the way, we have reduced the amount of error in the measurement of our surroundings.
But whether or not science is always moving forward, and whether or not we think we have the final view of how the world works (which we almost certainly do not), we pride ourselves on our ability to understand our universe. Whatever its complexity, we believe that we can write down equations that will articulate the universe in all its grandeur.
But what if this intuition is wrong? What if there are not only practical limits to our ability to understand the laws of nature, but theoretical ones?
On the practical side, it’s unsurprising that science might move less quickly than it should, simply due to the massive size of what we know: a single individual can comb through only so much of the literature. With its incredible growth, it’s impossible for anyone to be familiar with all of the papers published across the scientific disciplines, let alone the new research in one’s own subfield. For example, imagine there are two papers somewhere in the literature, one of which says that A implies B, and another that says B implies C. These two papers remain uncombined until a computer program finds some way to stitch the two ideas together, recognizing that A implies C: a discovery that was practically out of reach due to the vast size of the literature.
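The stitching-together described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the idea (sometimes called literature-based discovery), not any real system; the claims "A implies B" and "B implies C" are placeholders, not actual findings.

```python
# Each claim extracted from the literature is a directed edge:
# (premise, conclusion), i.e. "premise implies conclusion".
known_claims = {
    ("A", "B"),  # stated in one paper
    ("B", "C"),  # stated in another paper
}

def infer_new_claims(claims):
    """Propose implications no single paper states: if X -> Y and
    Y -> Z both appear in the literature, suggest X -> Z."""
    inferred = set()
    for x, y in claims:
        for y2, z in claims:
            if y == y2 and x != z and (x, z) not in claims:
                inferred.add((x, z))
    return inferred

print(infer_new_claims(known_claims))  # {('A', 'C')}
```

A real system would of course need to extract such claims from free text and weigh their reliability; the hard part is the reading, not the stitching.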
These sorts of limits are exciting, because we can construct algorithms to help us with such problems, discovering in partnership with machines. And once shown such a computationally derived insight, we can readily grasp its meaning and the explanatory power it provides.
But what if it were possible to make discoveries that no human being can ever understand? For example, if I were to give you a set of differential equations, not only might they be difficult to solve mathematically, but there is a decent chance that no analytical solution even exists, even though we have numerical and computational methods of handling them.
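To make this concrete, here is a minimal sketch of one such numerical method, the forward Euler method, applied to dy/dx = e^(-x²), an equation whose solution has no elementary closed form. The function names and step count are illustrative choices, not any standard implementation.

```python
import math

def euler(f, y0, x0, x1, n=1000):
    """Advance dy/dx = f(x, y) from x0 to x1 in n forward Euler steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = exp(-x^2) has no elementary antiderivative, yet the
# numerical answer is easy to compute to good accuracy.
y = euler(lambda x, y: math.exp(-x * x), y0=0.0, x0=0.0, x1=1.0)
# y approximates the integral of exp(-x^2) from 0 to 1 (about 0.7468)
```

We can compute the answer to any precision we like; what we cannot necessarily do is write it down as a tidy formula, which is exactly the gap between computing and understanding at issue here.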
So what of this? Does such a hint of non-understandable pieces of reasoning mean that eventually there will be answers to the riddle of the universe that are too complicated for us to understand, answers that machines can spit out but we cannot grasp? Quite possibly. We’ve already come close. A computer program known as Eureqa, designed to find patterns and meaning in large datasets, has not only recapitulated fundamental laws of physics but has also found explanatory equations that no one really understands. And certain mathematical theorems have been proven by computers, yet no one person actually understands the complete proofs, though we know they are correct. As the mathematician Steven Strogatz has argued, these could be harbingers of an “end of insight.” We had a wonderful several-hundred-year run of explanatory insight, beginning with the dawn of the Scientific Revolution, but maybe that period is drawing to a close.
So what does this all mean for the future of truth? Is it possible for something to be true but not understandable? I think so, and I don’t think that is a bad thing. Just as certain mathematical theorems have been proven by computers and we can trust them, we can still endeavor to create more elegant, human-understandable versions of these proofs. Just because something is true doesn’t mean we can’t continue to explore it, even if we don’t understand every aspect.
But even if we can’t do this—and we have truly bumped up against our constraints—our limits shouldn’t worry us too much. The non-understandability of science is coming, in certain places and in small bits at a time. We’ve grasped the low-hanging fruit of understandability and explanatory elegance, and what’s left might be exploitable but not necessarily completely understandable. That will be tough to stomach, but the sooner we accept it, the better our chance of helping society appreciate how far we’ve come and of applying non-understandable truths to our technologies and creations.
As I’ve argued, if it’s our machines doing the discovering, we can still have naches—we can take an often vicarious pride and joy in the success of our progeny. We made these machines, so their discoveries are at least partly due to humanity. And that’s exciting, as these programs of the future begin to uncover new truths about the universe.
They may just inject a bit more mystery into the world than we might have bargained for.