(2023-10-12) Marick The Offloaded Brain Part1 Behavior

Brian Marick: The offloaded brain, part 1: behavior. The theme of this series is that we think too much. We’d be better off if we more often arranged for our environment to push us around. Be thoughtful a few times so you can be thoughtless the rest of the time.

I’m going to draw mostly on two books published in 2011: Louise Barrett’s Beyond the Brain: How Body and Environment Shape Animal and Human Minds and Anthony Chemero’s Radical Embodied Cognitive Science. I also make some use of Andy Clark’s 1997 Being There: Putting Brain, Body, and World Together Again, as well as other references you can find in the show notes.

In this episode, I’m going to look at animal and human behaviors that don’t seem particularly intelligent, such as catching balls and judging distance.

To keep this episode from running too long, examples of putting the resulting design principles to use will wait until the next episode.

In the episode after that, I’m going to talk about behavior that at least seems more intelligent, like planning and learning and making models of the world.

I’m going to talk about certain animal behaviors that solve specific problems. All those behaviors evolved, and it’s really hard to talk about evolution without sounding like it has a goal, as if there’s some designer who’s consciously setting out to solve a problem. So I’m not going to try to avoid that. In fact, I’m going to lean into it and pretend that you are that designer.

A problem with the books I’m drawing from is that they’re about a single variant of cognitive science that has two names: either “ecological cognition” or “embodied cognition”. That bugs me because “ecological” shortchanges the role of the body, and “embodied” shortchanges the role of the environment. This is a field that’s all about the body, environment, and brain as a tightly coupled system, so neither name really fits.
Therefore, I’m going to address you as an EE, reminiscent of both “ecological” and “embodied”.

Your major constraint is that neurons are ridiculously expensive. If you’re sitting right now, as opposed to listening to this podcast while running a marathon – something I understand is very popular – your brain is consuming 20% of your body’s energy, despite being around 2% of your body weight.

Whatever problem you’re designing for, you will want to solve it using as few neurons as possible.

Mainstream cognitive science is inspired by the generality of the stored-program computer, the Turing Machine, lambda calculus, pick your favorite conceptual device that runs algorithms. Part of that inspiration includes what, in some object-oriented designs, is called a God Object: that is, the single object that knows everything about everything, so that any problem can be solved with code that uses the information already available from the God Object, just in a new way.

EEs like you, however, are inclined to think that God Objects are too expensive. Instead, there will be a larger number of objects that represent just enough of the world to solve a certain class of problems.
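In code, the contrast might look something like the rough sketch below. All the names are invented for illustration; they aren’t from the talk.

```python
# A rough sketch of the contrast, with invented names. The God Object
# tries to know everything about everything; the EE-style object holds
# just enough of the world to answer one kind of question.

class WorldModel:
    """The God Object: one model, every fact, available to every task."""
    def __init__(self):
        self.terrain = {}    # every surface, everywhere
        self.objects = {}    # every object's position, weight, color...
        self.lighting = {}   # ...and so on, for any question ever asked


class ReachChecker:
    """An EE-style object: just enough state to answer 'can I grab that?'"""
    def __init__(self, arm_length_m: float):
        self.arm_length_m = arm_length_m

    def reachable(self, distance_m: float) -> bool:
        return distance_m <= self.arm_length_m


print(ReachChecker(arm_length_m=0.7).reachable(distance_m=0.5))  # True
```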

These little special-purpose solutions are what Brazilians call “gambiarra” and what we English speakers call kludges.

It’s high time for an example: stick insects. They have six legs, and walk. The legs have to move in synchrony or else the insect will keep tripping over itself.

You might expect neurons dedicated to coordinating the legs, but there aren’t any such neurons. The legs of stick insects are structured and physically interconnected in a way that makes coordination automatic.

Your own legs are more capable than stick insect legs, but they’re similarly autonomous. The bones and joints and especially the springiness of human legs mean that, once you’ve started stepping, the next step is both automatic and efficient.

Our first design principle is “Favor direct control links from perception to action.”

That phrase comes from Ron McClamrock, who describes how flies launch themselves into the air thusly:

“[Flies] don’t take off by sending some signal from the brain to the wings. Rather, there is a direct control link from the fly’s feet to its wings.”
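In programming terms, a direct control link is less like a message routed through a central controller and more like a sensor wired straight to an actuator. Here’s a toy sketch of that shape; it isn’t fly physiology, and every name in it is invented.

```python
# A toy sketch of a direct control link: the feet are wired straight to
# the wings, with no central planner in the loop. Names are invented.

class Wings:
    def start_flapping(self) -> None:
        print("flapping")

class Feet:
    """Feet that report loss of contact directly to whatever is wired in."""
    def __init__(self, on_lost_contact) -> None:
        self.on_lost_contact = on_lost_contact   # the direct link

    def lose_contact(self) -> None:
        self.on_lost_contact()                   # no brain consulted

wings = Wings()
feet = Feet(on_lost_contact=wings.start_flapping)
feet.lose_contact()   # jumping *is* the takeoff signal; prints "flapping"
```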

That isn’t to say that there’s no coordination in direct control. If you try to stomp on a cockroach, it will start turning to scurry away within 58 milliseconds.

So you, as an EE, might add neurons to cope with necessary complexity, but you’ll always be on the lookout for ways to offload work onto the body. Crickets make a nice example.

Crickets, like you, have two ears, one on either side of their body.

But crickets (as far as I know) are only interested in one sound: female crickets want to sidle up to the male that’s chirping the loudest.

The female has two calculational neurons, one for each side. The one with the strongest incoming signal fires first, signaling motor neurons to turn in its direction. So the female turns toward the male, as automatically as a fly starts flapping.
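As a sketch of that arrangement (the function and the numbers are invented; real crickets run on neurons, not Python):

```python
# A toy version of the two-neuron arrangement: whichever side hears the
# stronger signal wins, and the cricket turns that way. Numbers invented.

def turn_toward_chirp(left_signal: float, right_signal: float) -> str:
    """Return the turn direction given the signal strength at each ear."""
    return "left" if left_signal >= right_signal else "right"

# The male on the right is louder, so the female turns right.
print(turn_toward_chirp(left_signal=0.3, right_signal=0.8))  # right
```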

Every cricket species chirps at a different pitch, and the female’s tracheal tube is tuned to her own species’ pitch.

The spacing between a chirp’s syllables is also different for each species.

The female cricket’s calculational neurons are likewise tuned to her species’ syllable spacing.

I mention this circularity because EEs seem to use time and timing and feedback loops a lot in their solutions, whereas programmers like me want to avoid thinking about time and timing as much as we can.

EEs like you might want to read Steven Levy’s Hackers: Heroes of the Computer Revolution for inspiration. I also link to a nice talk by Guy Steele. And there’s “The Story of Mel”, from a programming generation even older than mine.

You might also want to read Ed Yong’s An Immense World. The reason is that, because of the clever hacks, there’s a sense that – when it comes to hearing – a female cricket really does live in a world that contains only crickets of her species. She doesn’t actively ignore other sounds; to her, they don’t exist.

The Estonian biologist Jakob von Uexküll coined the word “umwelt” for that kind of thing: to emphasize that different organisms live in radically different perceptual worlds – as do you yourself. Yong’s book is an exploration of the umwelts of various species.

The next design principle is: prefer composite values over atomic values. Or, in programming terms: avoid primitive obsession.

Starting in our algebra classes, we got used to solving problems whose values have types or units like length or duration or weight. Sometimes we combine them, like length and duration to give speed, but as an EE you’ll find that nature uses elaborate combinations you didn’t expect.

Looming – the way an object’s image expands as it gets closer – is a measurable quantity. You can also define a so-called “composite” variable called tau: the ratio of the size of an image to the rate of change of that size. Tau can be interpreted as the time remaining until a stationary object (like the surface of the sea) is contacted.

Gannets, which dive into the sea from a height, don’t care about speed or distance. They just pull their wings in when tau reaches a certain value.
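Here’s a sketch of that trigger, computing tau exactly as defined above. The threshold and the sample numbers are made up for illustration.

```python
# Tau as defined above: image size divided by the rate at which that size
# is growing, interpreted as the time remaining until contact. The
# threshold and the sample numbers below are invented.

def tau(image_size: float, growth_rate: float) -> float:
    """Estimated time to contact, in the same time units as growth_rate."""
    return image_size / growth_rate

WING_FOLD_THRESHOLD = 0.5  # seconds before impact; an invented value

def should_fold_wings(image_size: float, growth_rate: float) -> bool:
    return tau(image_size, growth_rate) <= WING_FOLD_THRESHOLD

print(should_fold_wings(image_size=10.0, growth_rate=5.0))   # tau = 2.0 -> False
print(should_fold_wings(image_size=20.0, growth_rate=50.0))  # tau = 0.4 -> True
```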

You do something similar with regard to weight.

What the body perceives when you heft an object is its moveability, not its weight.

It’s important to keep in mind that our perceptions are tied to tasks or behaviors or activities.

I want to highlight the difference between tau and something like walkability or throwability. Tau is an objective measure, like distance. It’s useful to any animal that needs to know the time-to-collision with a large object.

Throwability is less broadly applicable: it’s tied to a particular kind of body and task.

When designing direct control links for your workspace, you as the EE will find many more tasks that need big, messy, partially subjective variables than tidy, objective measures.

The previous two principles together imply a third that’s worth stating explicitly: discover or create affordances.

An affordance is an opportunity for behavior.

Every time you’ve pushed on a door instead of pulling on it, some designer has afforded the wrong behavior.

As another example, consider walking or running. As you move, you pick up affordances about the stability of the ground in front of you and automatically adjust your foot placement.

Your body has been tuned to detect relevant affordances.

It’s important to keep reminding yourself that an affordance is perceived in the environment, but it’s not a property of the environment alone. It’s instead a message about the relationship between an environment and a task or behavior.

More precisely, an affordance is a joint property of the environment, the task, and the body.
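In code, you might picture an affordance check as a function that needs all of those ingredients at once. This is an invented sketch, with made-up names and numbers, using the throwability example from above.

```python
# An invented sketch: an affordance isn't computed from the object alone,
# but from the object, the body, and the task together. (Here the task --
# throwing -- is baked into which function you call.) Numbers made up.

from dataclasses import dataclass

@dataclass
class Body:
    hand_span_m: float
    liftable_kg: float

@dataclass
class Thing:
    diameter_m: float
    mass_kg: float

def throwable(thing: Thing, body: Body) -> bool:
    """'Throwable-for-this-body', not 'throwable' in the abstract."""
    grippable = thing.diameter_m <= body.hand_span_m
    liftable = thing.mass_kg <= body.liftable_kg
    return grippable and liftable

ball = Thing(diameter_m=0.07, mass_kg=0.15)
me = Body(hand_span_m=0.2, liftable_kg=2.0)
print(throwable(ball, me))  # True
```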

Affordances apply to all kinds of time spans.

Even more extended is the task of attending to the chance of a predator-like motion in your peripheral vision. You spend every waking moment doing that. You can’t not do it.

When you do that, you’re ready to react to another affordance, that of shapes with left-right symmetry, like, for example, a tiger looking straight at you. In fact, there’s another principle there: Just as perceptions can lead to automatic actions, automatic actions are frequently made to detect new affordances.

Learning will become important when you, the EE, create affordances. You can rely on learning, but only on the types of learning humans are good at.

The final design principle is: maintain invariants and rhythm. An interesting example of this is the “outfielder problem”.

The outfielder problem is: how does the outfielder know where to go to catch the ball?

It appears that what outfielders (and frisbee-catching dogs) do is maintain an invariant. The geometry of the situation is such that if you run along a path that keeps the ball, as you perceive it, moving in a straight line – both as it goes up and as it comes back down – you will naturally end up in the right place to catch it.
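The control structure is the interesting part: there’s no prediction of a landing spot, just a loop that keeps nudging the runner so the perceived deviation from “straight line” stays near zero. The following is a structural sketch of that loop, not a physics simulation; the sensor stub and the gain are invented stand-ins.

```python
# A structural sketch of invariant maintenance. Each tick, the fielder
# measures how much the ball's image path is bending and nudges their
# movement to push that bending back toward zero. The sensor stub and
# the gain are invented for illustration.

def perceived_curvature() -> float:
    """Stub: signed amount the ball's image path is currently bending.
    In a real system this would come from vision, not a constant."""
    return 0.0

def fielder_step(sideways_velocity: float, gain: float = 0.5) -> float:
    """One tick of the loop: correct movement in proportion to the error."""
    return sideways_velocity - gain * perceived_curvature()

velocity = 0.0
for _ in range(100):        # keep correcting until the ball arrives
    velocity = fielder_step(velocity)
```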

Here, then, are your design principles:

  • “Favor direct control links from perception to action.”
  • “Prefer composite values over atomic values. Avoid primitive obsession.”
  • “Discover or create affordances.”
  • “Maintain invariants and rhythm.”
  • “In addition to actions that achieve goals, also design actions that seek new affordances.”

Next episode, some examples of how they can be put to use.

