Ghost from the Machine

January 8, 2009 - 3:32 pm
Irradiated by LabRat

Ever since we first figured out that we could put things together and get something useful out of the parts we'd combined- from the Clovis point to the iPhone- we have gone about the business of learning how new things work by taking them apart and studying the pieces to see how they fit together. Presumably, with enough study and experimentation, this will reveal How It Works; most of the time, this turns out to be more or less true. We can indeed discover useful things about the smallest components of the universe with a Large Hadron Collider, and taking apart a pocketwatch will indeed let you know how the clock works- assuming you're of sufficient skill to put the thing back together before the owner of the watch comes home and insists there are more pressing matters afoot than mechanical engineering.

Mankind ran up against its first real dilemma between science (such as it was) and philosophy/theology the moment the discovery was made that if you take apart something living, it is impossible in any fashion to put it back together again and come up with a living thing in the same working order. Once you've finished taking it apart, assuming you've done a thorough job, what you're left with is really more of a pile of meat than it is a "dog" or "man"- and it's impossible to restore to the original state. Something ineffable has been lost- and while we've tried over and over to pin that quality on a particular body function, there always seems to be a case where it no longer applies; a person whose breathing and heartbeat have stopped may be restored to life (with more than a little difficulty), while a person who is breathing and beating along just fine may never return to consciousness (and therefore behave as anything other than meat that is still breathing) again. We can pin it on no part of the brain or identifiable interaction of those parts. From this uncomfortable dilemma came Descartes' model of dualism- "solving" the problem* by placing mind in a separate category defined as being separate from and fundamentally opposed to the meat. Philosopher Gilbert Ryle derisively described it as the "ghost in the machine" scenario- a term that's come to have a lot more lasting power in culture than Ryle has. The essence of what makes a person a person can't be found by taking the person apart, therefore it must be outside and immune to natural law- or so it was believed for centuries.

I promised earlier that this would have some relation to evolution, didn’t I? Don’t worry, it will.

As we went beavering along, taking things apart and putting them back together again and making notes on what happened, we started to notice more and more things that simply were not comprehensible in any useful form this way. Life, which has always been intractable in this fashion, was no surprise, in the sense that there has always been a vast gulf between what we could discover by taking it apart (anatomy and genetic sequencing included) and what we could predict of how it would behave- but other things were getting to be more and more worrying. Economics follows no simple series of behaviors, no matter how hard we redouble our efforts to dissect and model it from the bottom up or to impose such an order on it from the top down. Weather** is notoriously difficult to predict with more than knock-wood accuracy, and it is only with satellites actually watching what the weather does from moment to moment that we can predict it from… well, hour to hour; that ten-day forecast is still mostly guesswork based on past patterns of behavior rather than a strong prediction of what will happen down the line as a logical progression from right now.

For a long period of time, it was more or less tacitly assumed that all of these intractable systems- weather, behavior, evolution, traffic, history- would eventually become, given enough innovation and enough computing power to make really REALLY rigorous models, things that could be neatly understood from the top down and the bottom up in that disassembled-clock way. It was assumed that the problem was that these subjects were simply too complex, as though there were some neatly linear universal scale of Simple to Complex, and we'd be making accurate predictions and past inferences about the complex things just as soon as our models reached the requisite levels of complexity. When we got our hands on computers, in which a human programmer could define all the rules of the program's "universe", it was a tremendous gift to all the modeling of the world we're living in- it made the math so much easier, as the computer never got bored working out the next step of the equation. Once a few clever people got around to playing with this concept, they noticed something interesting: it was possible to define an extremely simple set of rules and get results that were impossible both to predict from the bottom up of starting conditions and to infer from the top down of ending conditions. The only way to explain what the program had done was to recite all ten billion or so steps it had taken in sequence, which was no explanation at all. Langton's Ant- a simple computer program with extremely simple rules that will exhibit the same overall patterns of behavior despite randomized initial starting conditions, and for no reasons that can be worked out in a reductionist fashion- is probably the most famous example.
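Since the rules really are that short, here's a minimal sketch of Langton's Ant in Python; the grid size, the step count, and the crude text rendering are illustrative choices of mine, not anything canonical. Two rules, a chaotic middle game nobody can summarize, and- somewhere past ten thousand steps- the famous "highway" marching off toward one edge of the grid.

```python
# Langton's Ant: at a white cell, turn right, paint it black, step forward;
# at a black cell, turn left, paint it white, step forward. That's the whole
# rule set. (Grid size, step count, and the text rendering are arbitrary.)

def langtons_ant(steps=11000, size=80):
    black = set()                    # coordinates of black cells; everything else is white
    x, y = size // 2, size // 2      # ant starts in the middle of the window
    dx, dy = 0, -1                   # facing "up"

    for _ in range(steps):
        if (x, y) in black:          # black cell: turn left, flip it to white
            dx, dy = dy, -dx
            black.discard((x, y))
        else:                        # white cell: turn right, flip it to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy        # step forward

    # crude text rendering of whatever landed inside the window
    for row in range(size):
        print("".join("#" if (col, row) in black else "." for col in range(size)))

if __name__ == "__main__":
    langtons_ant()
```

Run it for a thousand steps and you get what looks like noise; run it for eleven thousand and the highway appears- and the only way to say why is to replay the steps.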

Of course, this wasn't really the first time we knew this; the first person to invent a game with very simple rules that could be played to the endless entertainment of its players- such as chess or Go- was using this principle. Even though the rules are simple and the number of things the pieces can do limited, the outcome of a chess game is impossible to predict from initial conditions, or from conditions a few moves in; likewise, it's impossible to work out how the early phases of the game must have gone from the final board, or from the board a few moves before the final one. In order to have any sort of predictive or explanatory power***, you have to forget about working things out from sequences of moves. Instead, you must analyze the game in terms of the patterns and strategies that human players have accumulated by observing hundreds or thousands of games between extremely skilled opponents. You have to talk about the features of the game that have emerged from play and how they shape the outcome, not the rules. The rules explain nothing except why the horse-shaped ones move in such a funny way. But since the person who wrote the rules of the game, and the players of the game, were intelligent, the implications of this didn't particularly stand out- of course there was complexity and purpose involved, there were minds involved. The significant thing about Langton's Ant and other programs produced by people curious about emergent behaviors was that the creator of all the rules in the program's world had no purpose in mind but to define the rules- and yet patterns that could not be worked out stepwise as a logical consequence of those rules emerged anyway. Naturally, the fact that this was Math made the whole thing now worth serious consideration.

I'm going to back out of story mode for a moment and explain one of the reasons that biologists and physicists trying to philosophize at each other tend to get along like cats in a sack: the role that "randomness" plays in their respective fields. To a physicist, there's always an element of randomness in his systems, but it's a weak force and tends to be irrelevant to the outcome of the final calculations if there's a working system to be described at all; this is why spherical-horse models are just fine for physics but tend to fail in unexpected and sometimes dramatic ways when applied to emergent systems like economics. To a biologist, random elements are why he can't make any models at all that don't need to be footnoted with "This describes extremely general patterns, and there is no actual population of organisms on Earth that is guaranteed to behave in the way numerically predicted by this model. In fact, the subjects of your experiment will probably do something entirely different and you'll have to spend another two years in grad school because of it". One is a world where the rules are the most determinative and therefore important thing; the other is a world where the features that emerge out of the rules are****.

The thing the players in a game of chess, a population of organisms, and Langton's Ant all share- on varying scales, depending on the number of rules and variables involved- is that their decisions, and therefore their next course of action, hinge far more on the exact conditions they find themselves in now than on what they did previously. These kinds of feedback-determined actions are what tend to create emergence- and a rapid departure from mathematical predictability.

To get back to physics versus biology, take the case of Brownian motion: to a physicist, Brownian motion is merely a mathematical way to describe how a pattern emerges from the random collisions of molecules in a gas or a liquid. Brownian motion simply represents an example case of a stochastic system; it's not likely to jump up and screw up his equation, because the random element isn't going to change anything about how the system overall will behave. It just happens to be why things tend to diffuse in the overall pattern they do, which will change in immediately predictable ways as soon as an outside force is imposed. To even the very simplest form of life, however, Brownian motion has a tremendous impact, because one of the earliest adaptations believed to have arisen in any single-celled life form is designed to exploit it.

Bacteria employing chemotaxis have exactly two states of random movement: swimming in a straight line, or tumbling around in a rotational fashion. The bacterium itself has no particular purpose to its movements, but with the basic ability to sense favorable chemicals (food) and repellent ones (poison), it does have two simple rules- if it senses "good" chemicals as it rotates around, it enters the straight-swim pattern toward them (or away, if they're "bad"), and if nothing in particular is coming its way through the Brownian currents, it re-enters the tumble phase until something does. A version applied to home-made robots creates a very effective light-seeking robot that will follow beams and spots of light with amazing accuracy- and it's all a bacterium needs to navigate chemical gradients to find food and avoid danger despite being almost entirely at the mercy of randomness. The bacterium will still find food and avoid danger regardless, but the exact path it traces as it tumbles and swims is random and tells us nothing- even if we know the positions of all the attractive and aversive chemicals in the liquid and the starting position of the bacterium. The randomness inherent in Brownian motion makes sure of that. In fact, the system is so useful that it crops up all over the place in life forms- including developing fetuses, in which chemotaxis is part of how differentiating cells find their way to their proper place in the organism. It's also a very common programming exercise for beginning students. Complexity- foraging and avoidance behavior- from just two simple rules.
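Since the whole strategy fits in a sentence, it also fits in a few lines of code. Here's a minimal run-and-tumble sketch in Python- the single attractant source, the noise level, and the step sizes are all made-up illustrative numbers of mine, not measurements of any real bacterium:

```python
# Run-and-tumble chemotaxis, reduced to its two rules: if the attractant
# concentration is rising, keep swimming straight; if it's falling, tumble
# to a random new heading. Brownian buffeting is modeled as Gaussian noise.

import math
import random

def concentration(x, y, source=(50.0, 50.0)):
    """Attractant level rises as we approach a single (made-up) source."""
    return -math.hypot(x - source[0], y - source[1])

def chemotax(steps=2000, noise=0.5):
    x, y = 0.0, 0.0
    heading = random.uniform(0, 2 * math.pi)
    last_c = concentration(x, y)

    for _ in range(steps):
        c = concentration(x, y)
        if c < last_c:                                   # getting worse: tumble
            heading = random.uniform(0, 2 * math.pi)
        last_c = c                                       # otherwise: keep running
        x += math.cos(heading) + random.gauss(0, noise)  # swim one step, buffeted
        y += math.sin(heading) + random.gauss(0, noise)  # by "Brownian" noise

    return x, y

if __name__ == "__main__":
    x, y = chemotax()
    print(f"ended near ({x:.1f}, {y:.1f}); the source is at (50, 50)")
```

Run it a few times and the endpoint reliably lands near the source, but the path taken to get there is different every single run- which is exactly the point: the strategy is dependable even though no particular trajectory is, and no amount of knowing the starting conditions will tell you which trajectory you'll get.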

This is why the "randomness" inherent in evolution is the reason it can work at all, not a problem for life to overcome. If there were a single logical, optimal way to react to the environment in which organisms find themselves, we wouldn't have anything more than one kind of extremely boring organism, like a green film over the entire globe. Instead, there are elements of randomness that provide emergence- there are many, but the two most important are mutation and environmental change. As it turns out, the process of DNA replication is really quite inaccurate. If DNA were a conservative construction containing only the elements absolutely necessary to make another functioning organism, this would be a catastrophe- but one of the most common errors in replication is to copy the same gene twice (to say nothing of the crap that viruses leave behind as they make their merry way), and one of the more interesting shakeups in evolutionary biology in the last forty years or so has been the realization that most mutations are neutral in overall effect. If you have two copies of the same gene for the same thing, then a mutation in one is a much smaller problem- and might, unexpectedly, prove to be a benefit.
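Here's a toy sketch of that two-copies argument in Python; the mutation probability and the crude pass/fail notion of "function" are illustrative assumptions of mine, not anything out of a genetics textbook:

```python
# A toy model of why a duplicated gene turns mutation from a catastrophe into
# raw material: with one copy, a broken gene means a broken organism; with
# two, a lineage can stay functional while quietly carrying a mutated spare.

import random

P_MUTATE = 0.3  # chance that any given copy picks up a function-breaking mutation

def lineage(n_copies):
    """Mutate each copy independently; report (still functional, carries a mutant)."""
    mutated = [random.random() < P_MUTATE for _ in range(n_copies)]
    functional = not all(mutated)          # at least one working copy left
    carries_variation = any(mutated)       # at least one copy free to drift
    return functional, carries_variation

def summarize(n_copies, trials=100_000):
    ok = ok_with_variation = 0
    for _ in range(trials):
        functional, variation = lineage(n_copies)
        ok += functional
        ok_with_variation += functional and variation
    print(f"{n_copies} copy/copies: functional {ok / trials:.2f}, "
          f"functional AND carrying a mutant copy {ok_with_variation / trials:.2f}")

if __name__ == "__main__":
    summarize(1)   # ~0.70 functional, 0.00 functional-with-variation
    summarize(2)   # ~0.91 functional, ~0.42 functional-with-variation
```

With one copy, any lineage that picks up the mutation is simply broken; with two, nearly half the lineages end up both functional and carrying a mutated spare- which is exactly the raw material selection needs.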

Between DNA sprawling everywhere from copy errors, deletion errors, and viral junk, and the predilection of most forms of life to reproduce madly (putting all resources into replicating, with none at all into making better odds for any individual offspring), a tremendous sprawling amount of variability is left for the twin forces of environmental chance (unpredictable changes) and natural selection to work with. This gene doesn't do anything useful now? It might if things get really dry. Still no? It might in the next generation, if you have sex (or trade genes, which bacteria do in other ways) with that other organism over there and this gene ends up in the same genome with that one. This is why, when you sequence the entire genome of an organism, a lot of it looks like "junk"- that is, no use for those genes is known. Of course it isn't known- the organism doesn't exactly have a wide range of potential behaviors in a laboratory. Its genome is for tomorrow's drought and last millennium's ice age and these possible matings and this pile of scraps- not a slim, purpose-driven tool for swimming in a beaker. (Or programming at IBM. Whichever.) If anything, the great mystery of evolution (which Stephen Jay Gould tried to address, with varying levels of success, in Wonderful Life) is not why life is so diverse, but why it's not much more so.

This is why "irreducible complexity" is an irrelevant argument: while a system in a given organism may need all its parts to work properly now, that wasn't necessarily true one billion years ago and it won't necessarily be true in two hundred million. Organisms aren't like mousetraps, with each part carefully engineered to a specific purpose; they're more like an auto shop with a junkyard in the back. Just because the guy over there is currently welding a piece of metal into a hood scoop doesn't mean welding is all he ever did, does, or will ever do, or even that the piece of metal he's working on was originally part of a hood. The kid sweeping up isn't necessarily going to be the kid sweeping up forever- he may not work out, or he may turn out to be great at ad-hoc electrical work. Sure, genes aren't intelligent- but as we've seen, you don't need intelligence or purpose for a system of simple, condition-dependent rules to create complexity. Just because you can't work out stepwise from finish to start how something happened doesn't mean it couldn't have happened- hell, you don't even need a currently active system for THAT; it wouldn't be possible to work out exactly how a sculpture looked starting from the pieces after it had been blown to hell. Irreducible is NOT the same as unproducible.

Mind is not a mysterious black box disconnected from natural law, in this context- it’s the collective result of billions of years’ worth of evolved tools for coping with emergence and becoming better at it. Sensory information? That’s basic. All you need is the processing power and a few basic heuristics- starting with something as simple as chemotaxis and gaining complexity with the complexity of the organism and its needs. Instinct? Now we’re getting more complex- these tend to be quite prone to evolutionary pressure, as no situation is really all that simple and predictable, thanks to all those emergent factors in the environment, which by fairly early in life’s history had come to consist mostly of other organisms. Even a snake’s strike is still modifiable in distance and amount of injected venom after the strike fires, and that constitutes less than a second of time for potential wrinkles to crop up in the situation. Predators and prey animals provide strong positive feedbacks on each other’s behavior- and instinct rapidly becomes coupled with (and its stereotyped forms subordinate to) learning and memory, because they’re simply so much more responsive to the emerging landscape of “now” than reflexive actions are.

Consciousness? All social animals have some small spark of a sense of themselves as individuals and others as different individuals- otherwise the complexities of social interaction simply wouldn't be possible. The more subtleties those interactions take on, and the greater consequences they have (getting to mate or not, succeeding at hunting or not, succeeding at raising those ever more expensive young or not), the more the system's feedbacks drive themselves inward and inward to more self-awareness, better ability to plan, more motivation to figure out whether others will help or hurt you, stronger drive to fake it or figure it out if someone else is… oh, and in the meantime all of these things have to be coordinated with what's incoming on the sensory radar, what your ancestor the protosimian would have done in this situation and sorting out whether that's a useful response right now, and the hypothetical mind we're talking about hasn't even gotten to imagination yet. And the more complex each individual mind gets, the more distinct from one another they become just because the emergent factors of their formation will be so inevitably different. Again, the question raised is not why our minds (the super-duper-charged version of mind so far produced, caused by one reckless species putting nearly all its evolutionary investment in it) produce such diverse things, but why we are not MORE diverse.

The short answer is that there are still rules that shape the features, and they do serve as constraints; there are only so many different kinds of landscapes on Earth, and they do share certain rules that are more important than others, and we are still bipedal apes- but I'm not going to attempt the long answer now. I've already burned enough words. For a different look from a more physics-geeky perspective, go thee now over to Exploded Diagram, where a friend of mine is making his blog-debut. These were really meant to be two halves of the same thing, so consider it required reading.

End note: If anything in this seems like a genuinely new idea, you can probably credit it to the books of Ian Stewart and Jack Cohen, who were 90% of the inspiration and, to the extent any of these concepts are truly novel, what I’m trying to distill. The remaining ten percent was arguments with physics majors and snarling “THAT IS NOT HOW THAT WORKS” at Discovery Institute propaganda.

*The practice of solving a problem by redefining its terms until there’s no longer, on paper, a problem is another traditional human practice that is currently at its most robust in the halls of government.

**Which is a very different thing than climate, as a lot of aggrieved climate scientists who do not want to hear one more damn weatherman joke would like me to point out at this juncture. What a single person buys and sells in a week is weather; the Dow Jones Industrial Average is climate. There are also alternate definitions.

***If this pairing of phrases sounds familiar to you, they’re what define a useful and robust scientific theory- explanatory and predictive power are both inherent in the theory.

****Except, apparently, for quantum. A horrifying possibility is arising that classical mechanics is an emergent behavior of quantum mechanics. Horrifying to (most of) the physicists, anyway; we biology types think it's hilarious. Maybe it's not particles all the way down after all, eh?

14 Responses to “Ghost from the Machine”

  1. Alan Says:

    This is really too good for a blog.

  2. SnarkyBytes » What the hell are YOU anyway? Says:

    […] has an excellent post on the rules that make you, […]

  3. Steve Bodio Says:

    You really need to get paid for this- Zimmer ain’t half as good though better connected.

    Have you read the chess novel Queen’s Gambit by Walter Tevis? You’d like it, I think.

  4. bluntobject Says:

    On the subject of “why isn’t (biology|human behaviour) more complex?”: I have a vague idea coming from an example from M. Mitchell Waldrop’s book “Complexity” (which I can’t really evaluate for usefulness; it’s pop-sci that seems vaguely plausible from what I know of emergent behaviour in computer programs). His example was different starting conditions in Conway’s Game of Life: if you start with too much order, everything dies in a few iterations. If you start with too little order, everything dies in a few iterations. If you start with “just the right” balance, you end up with anything from alternating states to glider guns. (There’s probably a similar metaphor involving Nash equilibria.)

    Maybe a “more diverse” state would tend to diverge too far into unsustainable behaviour and unselect itself. (Note: I’m not sure that I’m making sense.)

  5. Nortius Maximus Says:

    Maybe this is why I’m an ex-Physics major, but the idea that classical/Newtonian physics emerges out of quantum phenomena seems kind of de rigueur, and not horrifying at all. Boltzmann’s statistical mechanics was horrifying to his contemporaries (Ernst Mach being the famous biggie), but that was a century ago.

  6. Nortius Maximus Says:

    Oh, and Ostwald of course. /HistoryOfScienceNerdMode=off

  7. Christina LMT Says:

    I feel like I just took a class. A very good one, where about half the information caused a breeze to ruffle my hair as it sailed past above my head!
    Same thing happens when I read hard SF.

  8. Kristopher Says:

    Physics is an emergent condition of Quantum Mechanics? Nope, too cautious there.

    All of Causality is an emergent condition of Quantum Mechanics.

    Causality gets “violated” every time someone does a spooky interaction at a distance experiment, and gets real data from outside their current light cone.

  9. SmartDogs Says:

    Those parallels between life, weather/climate and economics help explain how we got the mess we’re currently in without having a convenient scapegoat to pin it all onto.

    It also illustrates why it’s pointless to look to that bunch of dimwits in Washington to solve the problem.

    PS I loved Figments of Reality. Thanks!

  10. LabRat Says:

    Steve: thanks for the kudos, though like I said I’d have no idea where to begin- though if I made some sort of effort I could probably figure that out… Also, I had no idea there was such a thing as a chess novel, although given that I’ve seen murder mystery series based (repeatedly) off dog training, gourmet cooking, and home repair, I really should have.

    Blunt: Something like that. (I considered citing Life instead of Langton’s Ant, but I went for the one with the very simplest rules.) What you said is almost certainly part of it, but also a way I never would have thought to put it; the direction I would have gone off in would have been more about the constraints imposed by earlier designs and the way the environment itself guarantees certain things- you’re never going to find creatures in a liquid environment that aren’t shaped to go along with fluid dynamics, for example.

    Nortius: My impression of physicists and emergence has been based partly off heated arguments with relentlessly reductionist physics/engineering types, and partly off the history of science (“God does not play dice with the universe”); it’s entirely possible that the actual mature physicists are all-a-giggles over the notion.

    Christina: Thanks, I think?

    Kristopher: I LOVE that metaphor.

    SmartDogs: Ah, I knew you would. Next up for me is their previous book, Collapse of Chaos. And you’re depressingly right about the rest. Our fearless leader’s approach to economics springs to mind… both the incoming and the outgoing, sadly.

  11. aebhel Says:

    I agree with Christina. It’s actually kind of awesome, because I dropped out of science classes after tenth grade and haven’t touched the subject since, but you can write about it in a way that engages me even though I don’t actually understand more than half of it.

  12. Son of Grok » Blog Archive » Weekly Summary 1/11/09 Says:

    […] calories. 2. If you feel like getting all sciency and philosophical, Atomic Nerds blew me away with this article that hurt my brain. 3. Conditioning Research talks about some research further supporting interval […]

  13. Eric Hammer Says:

    Fabulous post!

    When it comes to complexity and economics, I highly recommend “Origin of Wealth”. The book focuses on complexity theory in economics, with the first quarter being about how economic thought went from static systems (borrowed from physics concepts just before entropy was codified) to more dynamic ways of looking at things as emergent properties. Really a great book, well written and chock full of fascinating material.

  14. bluntobject Says:

    To save anyone else the google search, here’s the link:

    Beinhocker, Eric D.: The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics.

    Thanks for the tip!