(2023-07-21) Marick A Circle-centric Reading Of Software Development Through The 1990s, Plus Screech-Owls

Brian Marick: A circle-centric reading of software development through the 1990s, plus screech owls. Welcome to Oddly Influenced, a podcast about how people have applied ideas from outside software to software... Bonus episode: a circle-centric history of software engineering and the agile counterreaction, plus screech owls.

This series is about Michael P. Farrell’s 2001 book /Collaborative Circles: Friendship Dynamics and Creative Work/.

I’m going to tell the story of one particular kind of collaborative circle: the loosely connected network of teams that created what were then usually called “lightweight methodologies”. In many ways, I suspect that network was similar to the history of first-wave feminism in the United States, which also featured a number of geographically distributed groups who came together periodically at conferences. (Scenes, Collaborations, Inventions, And Progress)

For now, what I want to do is focus on some sort of generic pre-Agile team who felt trapped in an orthodoxy that was clearly past its prime but was still clinging relentlessly to what the team felt to be an outmoded vision. How did that feeling get sharpened into particular critiques? How did that reaction generate a different shared vision?

I was not a member of any such team – I was at best a peripheral observer.

I’d date the “status quo” period in software as beginning in 1968, at the famous first NATO conference on Software Engineering.

Proper software development was conceptualized as the creation of a set of logically-related documents.

First, a requirements document.

Next, a specification. The specification is about “what”.

The point is that the specification is supposed to “satisfy” all the requirements, as determined by some person or committee with authority.

The “how” is handled by some combination of design documents and code.

In 1981, a lot of systems were still written in assembly language. The jump from the section of the specification describing what the square root button should do over to the particular machine code that does it was often considered too big, so there was a detailed design in between that described the “how” at an intermediate level of detail. You may have heard of flowcharts or pseudocode.

After the detailed design was finished, it would be hand-translated into assembly code.
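
To make that concrete, here is a minimal sketch of the kind of thing a detailed design might have specified, rendered as Python for readability. The square-root button, Newton’s method, and all the names are my illustrative choices, not anything from the episode; a 1981 team would have written this as a flowchart or prose pseudocode and then hand-translated it, step by step, into assembly.

    # Hypothetical "detailed design" for a calculator's square-root button,
    # at the intermediate level of detail the text describes. In 1981 each
    # step would be hand-translated into assembly rather than compiled.
    def square_root(x: float, tolerance: float = 1e-10) -> float:
        """Newton's method: repeatedly average the guess with x / guess."""
        if x < 0:
            raise ValueError("negative input")  # the spec's "what": show an error
        if x == 0.0:
            return 0.0
        guess = x
        while abs(guess * guess - x) > tolerance * x:
            guess = (guess + x / guess) / 2.0  # refine toward the root
        return guess

The point of the intermediate document was exactly this level of detail: more concrete than “the button shows the square root”, but still one translation step away from machine code.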

As always with humans, that dry process got tangled up with pre-existing moralism and with the creation of new morals for people to follow and believe.

Let me spice up the morality part of the common vision by starting with a digression.

The Alphabet of Ben Sira, written somewhere between the 8th and 10th centuries CE, contains some history that didn’t make it into the canonical version of Genesis.

He then created a woman for Adam, from the earth, as He had created Adam himself, and called her Lilith. Adam and Lilith immediately began to fight. She said, “I will not lie below.”

Lilith then flies off and, in some interpretations, turns into a screech owl. See the show notes if you’d like some merchandise with a drawing of a pissed-off screech owl with a word bubble that says “I will not lie below.”

It gives me an excuse to describe a theory of how moralism infected software engineering. It’s based on the observation that people seem really fond of binaries like up vs. down, left vs. right, man vs. woman. (cf is-ism)

Even in cases where you could describe a binary opposition in neutral terms, dominance has a way of creeping in. (power, status)

All this is terribly silly, of course. But we’re a silly species.

It’s natural to think of the specification as dominant, because the code has to “satisfy” it.

“What” is better than “how”, partly because the “how” necessarily contains extra details but also because “eternal” is better than “bound by time”.

Does all this mean I think some engineer in Hewlett-Packard, writing the specification for a calculator in 1985, thought he was capturing the true Platonic Form of the calculator? No. But I do think he likely thought of himself as doing work that was better than programming.

As an attitude, it didn’t sit well with some programmers. As Pete McBreen once said, “the Agile methodologists are methodologists who like to program”.

Farrell notes that collaborative circles typically attract people whose enthusiasms or very selves are looked down upon by the status quo.

Rejecting these hierarchies made it easier to think of, and accept, test-driven design, which deliberately mixes up the inside and the outside.
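
A minimal sketch of that inside-outside mixing, in the test-first style (the function and all names are hypothetical, my own illustration rather than anything from the episode): the test is written first and acts as a tiny executable specification, so the “what” and the “how” are interleaved instead of living in separate upstream and downstream documents.

    import unittest

    # Written FIRST: a tiny executable specification (the "outside").
    class ParseVersionTest(unittest.TestCase):
        def test_splits_dotted_version(self):
            self.assertEqual(parse_version("1.4.2"), (1, 4, 2))

    # Written SECOND: just enough code to satisfy the test (the "inside").
    def parse_version(text: str) -> tuple[int, int, int]:
        major, minor, patch = text.split(".")
        return int(major), int(minor), int(patch)

    if __name__ == "__main__":
        unittest.main()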

Now I want to look at a major consequence of the logical-document approach: a particular attitude toward change.

Suppose a project is nearly done, and you discover a single requirement is wrong. You have to correct the requirement, and in such a way that you don’t inadvertently break other requirements – say, by leaving an old requirement in place that contradicts the corrected one. Then you have to correct the parts of the specification that can be “traced” as downstream from the corrected requirement. Then there might be design documents that have to be updated, and that single upstream mistake might mean changes in many scattered chunks of code. That could all be very expensive. In contrast, a mistake that’s just in the code is much cheaper to fix.

A mistake that’s caught in the requirements document, before any downstream document is created, will be cheap to fix.

So you get the notion that you should try really hard to get an upstream document right.
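
A toy model of that tracing idea, with invented artifact names (nothing here is from the episode), shows why the same mistake costs so differently depending on where it lives:

    # Toy traceability graph: each upstream artifact maps to the
    # artifacts derived from it. All names are invented for illustration.
    DERIVED_FROM = {
        "REQ-7": ["SPEC-3.1", "SPEC-4.2"],
        "SPEC-3.1": ["DESIGN-12"],
        "SPEC-4.2": ["DESIGN-12", "DESIGN-19"],
        "DESIGN-12": ["sqrt.asm"],
        "DESIGN-19": ["display.asm", "keys.asm"],
    }

    def downstream(artifact: str) -> set[str]:
        """Everything that must be revisited if `artifact` changes."""
        result: set[str] = set()
        stack = [artifact]
        while stack:
            for child in DERIVED_FROM.get(stack.pop(), []):
                if child not in result:
                    result.add(child)
                    stack.append(child)
        return result

    # A late correction to REQ-7 forces a review of seven artifacts;
    # a mistake confined to sqrt.asm forces a review of nothing else.
    print(sorted(downstream("REQ-7")))     # seven downstream artifacts
    print(sorted(downstream("sqrt.asm")))  # empty set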

In practice, reducing errors in two documents – the requirements and specification – split off from reducing errors downstream. We ended up with two different constellations of techniques.

Few people actually thought you could ever achieve the ideal of never revisiting a document once it had been reviewed and approved. But many thought you could get closer to the ideal, and that failing to try was something of a moral failing.

The result was a shared vision that change, in the context of a software project, is a bad thing.

The proto-Agile people had two problems with this whole system of thought.

First, as programmers migrated into commercial software, be it software for sale (like Excel) or for internal use (like financial trading software), the idea that major changes would be avoided by Thinking Really Hard began to seem increasingly absurd.

Rather than thinking change is bad, what would happen if we thought change is inevitable?

Second, the proto-Agilists agreed change is expensive – sometimes. But sometimes it’s not. Methodologists with their “cost of change curves” were dealing with averages.

Programmers encountered code that was, surprisingly, not hard to change. Code that seemed… practically poised to accommodate new requirements. What made that code special? Could we learn to make such code more common?

The more unexpected changes the team has to cope with, the faster it and the software can be “tuned” to the typical changes the software owners request. That is the explicit goal of the techniques described in Kent Beck’s 1999 /Extreme Programming Explained: Embrace Change/.

There was a final shift. I don’t remember if it was explicit or implicit in Beck’s book, but the next extension to the emerging shared vision was “If it hurts to do it, do it more often”. That, I think, marks the most radical difference between what came before Agile and what came with Agile.

It’s undeniable that the pre-Agile status quo of document-centric development has been disrupted, and that Agile has made it feasible for teams to operate in a new way.

The collaborative-circle-like rebellion against the document-centric status quo is historically meaningful.

