Monday, October 1, 2018

Forget Space Travel


I’m surprised how much space travel there is in current science fiction.  In the time of Jules Verne and as late as Hugo Gernsback, space travel was a wild idea.  Just thinking about it required unusual imagination.  Today’s space ship stories are not astounding or amazing.  They’re products of aficionados demonstrating the fine points of what’s possible within an elaborate set of constraints.  That’s not to say these stories aren’t entertaining and original.  Freedom from terrestrial challenges expands the boundaries of literature.  Unusual human dilemmas are explored, often revealing beauty.  The imaginative parts of modern science fiction stories are more likely to be in the bots, clones, reality simulations and plot twists, while the space ships, alien planets and hair-raising interstellar car chases are just the assumed background.  But the original thrill of space travel lay in the stunning wonders of distant worlds, wonders that seemed simply incredible at the time those stories were written.  That thrill is gone.  We know too much now about those distant worlds.

A lot of what we know now tells us how barren and hostile to life space and other planets are.  This raises some huge problems, so huge that I believe that the biggest lesson to be learned is that we should forget about space travel.  We are inseparable from earth.

The first problem is living in space, whether in spacecraft or on alien planets.  Human experience in space after fifty-odd years is limited to encapsulating a few persons in vessels filled with things brought along or hoisted up from earth, at enormous expense.1  A sustainable presence on other planets would require huge efforts either to supply them from earth or to find ways to create and maintain a human-friendly environment using indigenous materials.  No, we can’t assume the atmosphere will be breathable on any planet within reach – I still see that in recent stories.  Earth is turning out to be a very unusual planet.  A recent article by John Gribbin2 shows just how unusual.  There’s only a small band within the galaxy where conditions are just right to allow life as we know it to start: toward the center, radiation from huge and exploding stars makes it too hot, and beyond our goldilocks belt the density of the materials needed to form rocky planets is too low.  And the particular size and orbital distance of our single big moon is very unusual, having resulted from the collision of early earth with another early planet that blew the moon out of the resulting kinetic blob.  It stabilizes earth’s tilt, and the way the separation occurred affected the composition of both the earth and the moon, which has been important to the formation of life on earth.  Those unique conditions eventually led to extensive photosynthesis that produced an oxygen atmosphere.  Maybe Fermi’s paradox isn’t so paradoxical after all: the calculations of how many life-supporting planets are out there are based on wrong assumptions.

But there are over 100 billion stars in our galaxy3, and at least 100 billion galaxies in the universe4, you say, and even if improbable for any given star, a few of them must have conditions similar to ours or favorable to some kind of life.  OK, but the nearest star is over four light years away.  We should call these distances radio years, because that’s how long it would take for any communications signal to get there, not to mention actually going there.  And that’s one-way.  So let’s draw a circle around the area where we could have sent a signal and then gotten an answer since, say, Plato’s time.  That would be a distance of about 1,200 light years.  That’s about one percent of the distance across our own little galaxy.  A sphere with this radius is about one-tenth of one percent of the galactic volume.  Even if we did receive a red-shifted peep from a civilization over the infrared rainbow in the next galaxy over, we couldn’t communicate with it.  Even if we learn how to govern ourselves, given genetic drift our species will likely be gone before our galactic neighbors’ smart phones could even ring with our call back.  This really goes beyond the problem of finding a planet we can colonize.  It means that the part of the universe that is effectively available for communication, or even for seeing what is happening right now, is very, very small.
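Here’s a rough back-of-envelope check on those percentages (my own sketch, using round numbers: a galactic disc about 100,000 light years across and roughly 1,000 light years thick):

```python
import math

# Round-number assumptions (illustrative, not precise measurements)
GALAXY_DIAMETER_LY = 100_000   # approximate width of the Milky Way's disc
GALAXY_THICKNESS_LY = 1_000    # approximate thickness of the disc
REPLY_RADIUS_LY = 1_200        # half of ~2,400 years since Plato: signal out, answer back

# Model the galaxy as a flat disc and our "conversation zone" as a sphere.
disc_volume = math.pi * (GALAXY_DIAMETER_LY / 2) ** 2 * GALAXY_THICKNESS_LY
sphere_volume = (4 / 3) * math.pi * REPLY_RADIUS_LY ** 3

print(f"radius vs. galactic width:  {REPLY_RADIUS_LY / GALAXY_DIAMETER_LY:.1%}")   # ~1.2%
print(f"volume vs. galactic volume: {sphere_volume / disc_volume:.2%}")            # ~0.09%
```

Which roughly bears out the figures above: about one percent of the way across, and about a tenth of one percent of the volume.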
 
Some efforts have been made to build a closed system in which a few “terranauts” could live without a welcoming atmosphere.  It was a huge effort and no matter what they tried it didn’t work.5 It’s one thing to do it where you can just call things off and open the doors, or go on bottled oxygen until you can de-orbit, and another in a place where you can’t breathe, and are bombarded by radiation outside your shelter.6
  
Some authors celebrate the prospect of “terraforming” on Mars and other planets, to make them habitable.  We hear this hubris as it’s becoming increasingly obvious that we can’t even reliably prevent earth from becoming uninhabitable.  We can’t even terraform earth!  Can we really manage the stable transformation of a whole other, very distant planet, one that has hard physical reasons for being the way it already is?

A related problem is cost, in energy and effort.  In writing about space travel, we don’t appreciate the enormous distances to other planets, other stars, other galaxies, and the difficulties in getting there and back, or even communicating.  Even Mars, the second closest planet to earth, is really far away.  It takes months to travel between Earth and Mars using any foreseeable technology7, and a lot of energy.  Maybe we could find ways of building expendable boosters on Mars and harvesting solar energy to escape Martian gravity.  Of course it would take a bit longer during the times when Mars is on the other side of the sun.  By the time you figure out the costs of getting everything you need there, and then getting what you want from Mars or even the much closer moon, and down to earth through the atmosphere to where you need it, and cleaning up after yourself, you will have to wonder if it wouldn’t be better to go somewhere else.  O. Glenn Smith, former manager of shuttle systems engineering at NASA’s Johnson Space Center, estimates that pioneering the establishment of a small Mars colony would cost over $2 trillion.8  This doesn’t include maintaining the colony once established.  NASA has yet to produce a cost estimate for the first human launch to Mars.  Maybe then we go to Venus, the closest planet.  No, there’s a bit too much atmosphere there.  Well, how about the other planets?  They are even farther, son, much farther.  Setting sail into the emptiness of space to colonize a planet is not the same as crossing an ocean propelled by the wind to a place where you’ll find arable land, game and native peoples to exploit.

But let’s say, in the spirit of supreme imagination, that the practicalities of cost and energy cited above can someday be overcome by new technologies.  Why do we have to actually send our frail human bodies out there?  Is it to extract precious metals or some other rare substance?  Is there anything we need that much, that we don’t have or can’t make?  Maybe, despite the costs and harsh conditions, we need to go out there just because it’s there, to explore, to understand the universe, to satisfy the deep fundamental drive of curiosity that gives basic science its true value.9  But it’s increasingly clear that exploration can be done robotically, and better, with extensions of ourselves, long-distance tools.  It doesn’t require our bodies to go there.  Not only are robotic space probes doing things science fiction never imagined, but earth-based and orbital observatories are discovering things about the very origins of the universe that no “boots on the ground” of a cold dead or molten metal planet could ever beat.  You can’t travel to the big bang, but through ingenuity we’ve been able to examine the sudden release of photons shortly after it banged (the cosmic microwave background) and learn an awful lot about how our whole universe is happening.
 
In the hubris of the terraforming and colonizing ideas is a clue to the underlying drive for moving out to other planets.  It is the drive for conquest, expansion of our territory, the allure of the frontier.  Space is said to be the high frontier.  For years, Captain Kirk and his successors told us it was the final frontier.  I’m amazed at how glibly terraforming and space colonization are presented in serious future speculation.  It reminds me of the smart-alecky veterans of Alamogordo, looking up from their slide rules to pronounce that soon nuclear energy would be too cheap to meter.  Colonization of other worlds has come to seem an inevitable goal of humanity.  Expansion of the human intellect throughout the universe is seen as an ultimate goal of our species, whether carried out by human beings or by human-invented artificial intelligence.10  Sorry folks, but colonization is not a fundamental human desire.  It’s cultural, not uncommon in humans, but by no means universal.  The particular strain of this cultural malignancy at work here is the western tradition of endless manifest destiny, always subduing new lands and whatever natives happen to be in the way, for the right, white, and good.

And there’s a more fundamental reason why space travel doesn’t make sense. The only good reason to escape the earth is that we’re making a mess of it.  Maybe it would be better to clean up the mess.  Because we are inseparable from the earth. Everything in our bodies moves in cycles from the earth into our bodies and out again.  We are part of it.  It is part of us, inevitably and forever.  If it dies, we die, and as much as we’ve learned that’s screaming this at us, we can’t seem to wrap our little heads around it.  

Building our future stories on fantasies of conquest and escape smacks of futility.  Good science fiction was never escapism. It is expansive imagination. It is the wrong time for fantasies of conquest and escape.  Space travel was expansively imaginative in the nineteenth and early twentieth centuries.  Forget it.  Whenever I see yet another story about space travel and space colonization, I imagine Jules Verne yawning.  No, we don’t need to confine ourselves to pristine, eco-friendly little comfort zones or exploring the ocean instead of space.  Plenty of the best science fiction has been focused on technological marvels, and horrors, other than space ships and planetary conquest.  The fertile unknown is all around us, waiting to be explored.  

We are threatening this earth, our milk and our breath, and the more we turn away, the worse it will get.  There aren’t fertile planets and alien civilizations within reach ready for us to conquer and colonize. Space does not beckon us. Fleeing into space will kill us. If we turn our imaginations away from our earth to fantasies of escape and conquest now, when the earth beneath our feet is eroding, acidifying, melting and burning because of our errors, we betray our history and our progeny. There may be no one left to learn the lessons of our errors from their history.


[1] Costs for cargo and crew transportation to and from the International Space Station are about 50% of its annual budget, which varies from $3 to $4 billion per year.  The crew varies from 3 to 6 persons (Audit of Commercial Resupply Services to the International Space Station, NASA Office of Inspector General, April 26, 2018).
[2] Scientific American, October 2018.  See also Gribbin’s book Alone in the Universe: Why Our Planet Is Unique (2011).
[5] See Wired Magazine’s October 2016 article on the Biosphere projects and T.C. Boyle’s 2016 novel The Terranauts exploring what went wrong (https://www.wired.com/2016/10/terranauts-tc-boyle-novel/).  
[6] See https://phys.org/news/2016-11-bad-mars.html. “Prolonged exposure to the kinds of (radiation) levels detected on Mars could lead to all kinds of health problems – like acute radiation sickness, increased risk of cancer, genetic damage, and even death.”  Most planets, including Mars, will not have strong magnetic fields to shield their surfaces from the radiation of their stars.
[7] https://www.space.com/24701-how-long-does-it-take-to-get-to-mars.html. This article discusses current propulsion technologies.  It also mentions possible advanced technologies which can move small space probes, not bulk material carriers, much faster, but with complex technology and huge energy inputs. 
[9] In 1969, when asked by a congressional committee what value for national defense would accrue from basic science funding, Robert Wilson, director of Fermilab, answered that “it has nothing to do directly with defending our country except to help make it worth defending.” (https://history.fnal.gov/testimony.html)
[10] See Bostrom, N., Superintelligence: Paths, Dangers, Strategies (2014), pp. 101 ff., and Kurzweil, R., The Singularity Is Near (2005).

Tuesday, August 28, 2018

Real and Artificial Intelligence


BOOK REVIEW: Common Sense, The Turing Test and The Quest for Real AI by Hector Levesque (MIT Press, 2017)

Levesque’s lucid and brief (156 pages) book is an elegant and timely antidote to the overblown hype, snaky language, and grandiloquent assumptions lacing the popular writing on AI.  It’s surprising that much of this overwrought writing is by bona fide computer scientists and other prominent professionals.  Levesque uses very little jargon and carefully defines for the general reader what specialized terminology he does introduce.  Going flat out on accessibility, he even makes a point of using almost no math in the entire book, though he can’t resist a brief description, with simple algebra examples, of how math actually works in computing machines, demystifying it a little.

A major theme is that the current exclusive focus on what Levesque calls adaptive machine learning (AML) in image recognition, self-driving cars, medical diagnoses, and similar applications, typically using neural networks, has severe limitations.  It is basically “training on massive amounts of data,” a radically different approach from the “good old-fashioned AI” (GOFAI) of the past several decades, which attempted to emulate thinking.  GOFAI sought “common sense,” a phrase that echoes an oft-cited 1958 paper by AI pioneer John McCarthy, “Programs with Common Sense.”  McCarthy was one of the foundational early thinkers about AI who gathered at the famous 1956 Dartmouth meeting (with Allen Newell, Herbert Simon, Marvin Minsky et al.).  GOFAI was much more concerned with using language, symbols, and knowledge to compute solutions than with the current emphasis on machine learning from training on big data sets.

Levesque pays great attention to the question of what we mean by intelligence in ourselves and in machines, which distinguishes him from the run of AI savants these days.  Unlike almost all other AI writers, he actually gives a definition of intelligence: “People are behaving intelligently when they are making effective use of what they know to get what they want” (p. 40).  This puts the emphasis on knowledge, stressing that the ability to deduce from knowledge not directly related to the subject under consideration is a critical component of intelligence.  Intelligence handles the unexpected.  It does so through its ability to bring a wide array of knowledge to any problem.  This is “common sense.”  The “knowledge representation hypothesis” is derived from Leibniz and explicated by philosopher Brian Smith (pp. 119–122).  Its basic implication for AI is summarized in three lengthy bullets, which I with temerity abbreviate here as (a toy sketch follows the list):
- An intelligent system must have an extensive knowledge base, stored symbolically.
- The system processes the knowledge base using logical rules to derive new symbolic representations that go beyond what was explicitly represented in the knowledge base.
- Conclusions derived from the above drive actions.
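To make those bullets concrete, here is a toy sketch of the idea (my own illustration, not code from the book): a tiny knowledge base stored as symbols, a forward-chaining procedure that applies simple rules to derive facts never explicitly stated, and an action driven by the derived conclusions.

```python
# Toy illustration of the knowledge representation hypothesis (my sketch, not Levesque's).
# Knowledge is stored symbolically; logical rules derive new facts; conclusions drive action.

facts = {("penguin", "opus"), ("robin", "rita")}   # the explicit symbolic knowledge base

rules = [                    # (premise predicate, conclusion predicate)
    ("penguin", "bird"),
    ("robin", "bird"),
    ("bird", "has_wings"),
]

def forward_chain(facts, rules):
    """Keep applying rules until no new symbolic fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

knowledge = forward_chain(facts, rules)

# The derived facts go beyond what was explicitly represented...
assert ("has_wings", "opus") in knowledge   # never stated directly, only derived

# ...and the conclusions drive action.
for predicate, subject in sorted(knowledge):
    if predicate == "has_wings":
        print(f"{subject}: plan for wings")
```

Trivial as it is, the sketch shows why a system built this way is inspectable and predictable in a way a trained network is not: every conclusion can be traced back to explicit facts and rules.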

Levesque shows how AI systems designed based on this representation will be more likely than AML systems to be both reliable and predictable.  To Levesque, reliability and predictability are essential, and the AML approach de-emphasizes them in favor of achieving a preponderance of positive results.

But Levesque doesn’t say that the knowledge representation is all we need, or that AML doesn’t have a place in AI.  He introduces the concept of “The Big Puzzle,” which is that intelligence has many aspects that we don’t completely understand, like different parts of a jigsaw puzzle that we have to solve separately and then bring together.  Parts of the big puzzle include language (symbolic representation), psychology, neuroscience, and evolution, among others.  He notes that one large difficulty in solving the big puzzle is reconstructing a process from its output.  This is inherently difficult, and he demonstrates the point brilliantly with a description of a very simple computer program (introducing neatly some basic algorithmic concepts in the process).  When you see the output, you get his point: it would be very difficult to determine how the program worked just by examining what it prints.  Similarly, watching a bird or an airplane fly gives little clue how to design a flying machine.  His approach is to take “the design stance,” which he describes with the example of the Wright brothers: understand in general the processes needed to lift an object moving through air and design the most practical way to achieve that, rather than learn to fly by imitating birds, which was tried and tried and never worked.  He’s saying that neural nets are like trying to fly by imitating birds.  The analogy is limited because neural nets clearly have achieved impressive results, but the results are more like effective data processing than what we would call intelligence in a person.
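This isn’t Levesque’s actual program, but a toy in the same spirit: the code below is only a few lines, yet staring at the string of marks it prints gives you little hope of recovering the rule that produced them.

```python
# A deliberately opaque little program (my stand-in for Levesque's example, not his).
x = 7
marks = []
for _ in range(60):
    x = (5 * x + 3) % 16                # a tiny, simple update rule...
    marks.append("#" if x > 8 else ".")
print("".join(marks))                   # ...not reconstructible from the marks alone
```

You might notice that the pattern eventually repeats, but that observation alone gets you nowhere near the one-line rule generating it, which is the reconstruction problem in miniature.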

The Turing test has come to be seen as a valid way to judge whether a computer exhibits human-level intelligence.  Its basis is that intelligence can only be judged by “externally observable behavior,” i.e. its results.  Levesque has an issue with the Turing test: it requires that the computer be judged on its ability to “fake” human reasoning and common sense, rather than on meeting a more objective standard.
As a way to overcome the shortcomings of the Turing test, Levesque points out the value of (and has done a lot of research on) Winograd schemas in testing AI systems.  Winograd schemas have simple right and wrong answers.  They are statements containing an ambiguous pronoun, where deciding between the two possible referents requires background knowledge, or “common sense,” beyond the facts stated in the sentence.  Example:
“The trophy would not fit in the brown suitcase because it was too small.”  What was too small?
- the suitcase?
- the trophy?
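For a sense of how crisply such an item can be scored, here is a small sketch of a Winograd schema packaged as a test case (my own illustrative format, not Levesque’s code and not any official benchmark):

```python
# An illustrative way to package a Winograd schema as a test item
# (my sketch; not from the book and not an official benchmark format).
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    sentence: str
    question: str
    candidates: tuple   # the two possible referents
    answer: str         # the single correct referent

ITEM = WinogradSchema(
    sentence="The trophy would not fit in the brown suitcase because it was too small.",
    question="What was too small?",
    candidates=("the trophy", "the suitcase"),
    answer="the suitcase",   # needs common sense about fitting things into containers
)

def score(system_answer: str, item: WinogradSchema) -> bool:
    """Unlike a Turing-test conversation, there is one simple right-or-wrong answer."""
    return system_answer.strip().lower() == item.answer

print(score("the suitcase", ITEM))   # True
print(score("the trophy", ITEM))     # False
```

Swap “too small” for “too big” and the correct referent flips to the trophy, which is what makes these items hard to answer from surface statistics alone.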

Levesque questions whether artificial general intelligence is even a worthwhile goal, and notes that the trend in actual AI development is toward specialized intelligent systems that assist with specific tasks rather than human-level general AI.  He notes that even chess-playing programs are not treated as competition by human players, who today use them for practice and advice in human-to-human competition, which is what they are interested in.  He’s doubtful that anyone will find artificial general intelligence worth paying for.  He also thinks the danger of an autonomous “singularity” taking control is overblown, and shouldn’t be considered on a par with other societal dangers like pollution and overpopulation.  He gives little credence to the idea of superintelligence happening spontaneously or by accident: “Inadvertently producing a superintelligent machine would be like inadvertently putting a man on the moon!”  Autonomy is the real risk, not superintelligence, and autonomy can be carefully controlled.  (Don’t let cars self-drive, for example.  Airline pilots use auto-pilot, but they monitor and maintain control over it at all times, never ceding autonomy.)

I’ve been surprised that most of the writing on AI over the past decade is, unlike Levesque’s, really fuzzy about what we mean by “artificial” intelligence and “general” intelligence, yet goes on to use those terms extensively and as central topics.  If we are going after human-level or “general” intelligence, which these authors commonly assume, or if we’re afraid it may arise spontaneously from our machines, then it seems important to have at least a working definition of intelligence to focus on.  Defining the goal, or the threat, seems an essential first step.  Human-level intelligence is usually considered the model for “general” intelligence, implying that human-level intelligence is something we understand well.  Sometimes there is a discussion of why this is assumed, often touching on the difficulty of defining intelligence or general intelligence, but concluding that human intelligence is the one we know, so it’s the best reference we have.  But how well do we really know it?  The idea is then extended to the human brain being the only example, and thus the model, of a general intelligence in nature, eliding a discussion of what it is that makes the human brain uniquely intelligent.  And how?  Exactly how?

It appears the target for general intelligence keeps moving.  When I was a teenager in the 1960s, it was said that if a machine could be taught to play chess well enough to beat humans, it would be convincing evidence that it had achieved a human level of intelligence.  That goal has been exceeded, but no one is saying that Deep Blue thinks, like a human or otherwise.  Things touted as intelligent behavior at one time are called something else after a machine does them.

It seems like what is called AI these days by people who are actually making it, rather than writing about it (sometimes the same people in different roles), doesn’t at all aspire to general intelligence.  Instead, its goals are lesser things like greater efficiency in image and speech recognition, automatic car navigation, unbiased medical diagnostics and similar goals for specific products.  The “intelligence” is usually that the algorithm is deeply heuristic and “learns,” or is programmed to try different approaches to the specific problem, in order to optimize a solution.  Yet many of the books and commentary on AI (for example, Superintelligence, 2014, by Nick Bostrom, and The Master Algorithm, 2015, by Pedro Domingos, both valuable reading) seem to assume that AI is leading to something akin to “general” human-type intelligence, either by intention or by accident.  Early this year, “Deep Learning: A Critical Appraisal” (https://arxiv.org/abs/1801.00631) by NYU professor of cognitive science Gary Marcus drew much attention by pointing out that the neural net-based deep learning approach appears to be “approaching a wall,” and “must be supplemented by other techniques if we are to reach artificial general intelligence.”  The analysis is impressive, identifying 10 specific shortcomings, all of which are things that seem worth considering in conceiving of what “general intelligence” might be.  But the ten things don’t add up to anything like a complete description of what general intelligence is.  There were 10 blind men and an elephant…

There seems to be a widening gap between what the practitioners are actually pursuing and the fuzzier but widely assumed view of general intelligence as not just a worthwhile goal but the ultimate one.  One could conclude from this that the goal (or threat) of artificial “general” intelligence at “human-level” or above is proving to be a chimera (or a false threat).  But it could also be true that more clarity around what we mean by “general” and “human-level” intelligence in a machine would go a long way toward helping us see the real value of deliberately pursuing a more general artificial intelligence, and how to achieve it, as well as the real danger of its threat.