(2024-05-08) Matuschak How Might We Learn
Andy Matuschak: How Might We Learn? Your ideal learning environment... Talks about learning technology often center on technology. Instead, I want to begin by asking: what do you want learning to be like—for yourself?
"Design fiction" below: Every time Sam sees a tweet announcing a new result in brain-computer interfaces, they're absolutely captivated... What if Sam could ask for help finding some meaningful way to start participating? Sam's excited about the idea of reproducing the paper's data analysis. It seems to play to their strengths. They notice that the authors used a custom Python package to do their analysis, but that code was never published. That seems intriguing: Sam's built open-source tools before. Maybe they could contribute here.
- Is this a reasonable scenario? Not bad, if you think about people with some skills/expertise hoping to help solve a Grand Challenge.
- Is the approach below realistic? Within the next 10 years, I think not. But I could be wrong; future progress is hard to predict.
One way to start thinking about this question is to ask: what were the most rewarding high-growth periods of your life?
learning wasn’t the point. Instead, they were immersed in a situation with real personal meaning
secondly: in these stories, learning really worked. People emerged feeling transformed, newly capable
Why can’t we “just dive in” all the time? Instead it often feels like we have to put our aims on hold while we go do some homework—learn properly. Worse, learning so often just doesn’t really work!
These questions connect to an age-old conflict among educators and learning scientists, between implicit learning (also called discovery learning, inquiry learning, or situated learning) and guided learning (often represented by cognitive psychologists).
each of these points of view contains a lot of truth. And they each wrongly deny the other’s position, to their mutual detriment
One obvious approach is to try to compromise. Project-based learning is a good representation of that... But so often it ends up getting the worst of both worlds—neither motivation and meaning, nor adequate guidance, explanation, and cognitive support.
You really do want to make doing-the-thing (learning by doing) the primary activity. But the realities of cognitive psychology mean that in many cases, you really do need explicit guidance, scaffolding, practice, and attention to memory support.
I’ve been thinking about this synthesis for many years, and honestly: I’ve mostly been pretty stuck! Recently, though, I’ve been thinking a lot about AI. (GenAI)
Demo, part 1: Tractable immersion
We’ll explore this possible synthesis through a story in six parts.
Sam studied computer science in university, and they’re now working as a software engineer at a big tech company
every time Sam sees a tweet announcing a new result in brain-computer interfaces, they’re absolutely captivated.
What if Sam could ask for help finding some meaningful way to start participating?
a local AI can build up a huge amount of context about Sam's background.
Sam's excited about the idea of reproducing the paper's data analysis. It seems to play to their strengths. They notice that the authors used a custom Python package to do their analysis, but that code was never published. That seems intriguing: Sam's built open-source tools before. Maybe they could contribute here.
Demo, part 2: Guidance in action
what Sam really needs here is something like Copilot, but with awareness of the paper in addition to the code, and with context about what Sam's trying to do.
This AI system isn’t trapped in its own chatbox, or in the sidebar of one application. It can see what’s going on across multiple applications, and it can propose actions across multiple applications.
Demo, part 3: Synthesized dynamic media
That support doesn’t have to just mean text
guidance includes synthesized dynamic media
Sam doesn't need to read an abstract explanation and try to imagine what a given sampling rate would do to different signals: instead, as they try different rates, realtime feedback can help them internalize the effect on each signal
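(Not from the talk, but a minimal sketch of the fact that feedback would be teaching: sample a tone below twice its frequency and it aliases, masquerading as a lower frequency. The 60 Hz example is invented for illustration.)

```python
# Apparent frequency of a pure tone after sampling: frequencies fold
# ("alias") around multiples of the sampling rate.
def apparent_frequency(f_signal: float, f_sample: float) -> float:
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 60 Hz tone (e.g. mains noise in an EEG rig), sampled at various rates.
for f_sample in (1000.0, 200.0, 100.0, 70.0):
    print(f"sampled at {f_sample:6.0f} Hz -> looks like "
          f"{apparent_frequency(60.0, f_sample):5.1f} Hz")
# Above twice the tone's frequency (1000, 200 Hz) it reads as 60 Hz;
# below that (100, 70 Hz) it shows up as a spurious 40 or 10 Hz signal.
```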
Demo, part 4: Contextualized study
Now Sam presses on, but as they dig into band-pass filters, the high-level explanations they can get from these short chat interactions really just don't feel like enough. What's a frequency domain? What's a Nyquist rate?
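(For orientation, the one-line versions: the frequency domain describes a signal by the frequencies it contains rather than by its samples over time, and the Nyquist rate is twice the highest frequency present in a signal, the minimum sampling rate that avoids aliasing. A sketch of the kind of exercise Sam might attempt, using scipy; the toy "EEG" and filter band are invented for illustration.)

```python
import numpy as np
from scipy import signal

fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)   # two seconds of samples

# Toy "EEG": a 10 Hz alpha rhythm buried in slow drift and 60 Hz mains noise.
x = (np.sin(2 * np.pi * 10 * t)
     + 2.0 * np.sin(2 * np.pi * 0.5 * t)   # slow drift
     + 0.5 * np.sin(2 * np.pi * 60 * t))   # mains noise

# Band-pass 8-13 Hz (the alpha band). Cutoffs must sit below the
# Nyquist frequency, fs / 2 = 125 Hz here.
b, a = signal.butter(4, [8, 13], btype="bandpass", fs=fs)
alpha = signal.filtfilt(b, a, x)           # zero-phase filtering

print(f"raw RMS:      {np.sqrt(np.mean(x**2)):.2f}")
print(f"filtered RMS: {np.sqrt(np.mean(alpha**2)):.2f}")  # ~ the 10 Hz component alone
```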
The AI knows Sam’s background and aims here, so it suggests an undergraduate text with a practical focus. More importantly, the AI reassures Sam that they don’t necessarily need to read this entire thousand-page book right now. It focuses on Sam’s goal here and suggests a range of accessible paths
It's made a personal map in the book’s table of contents.
When Sam digs into the book, they'll find notes from the AI at the start of each section, and scattered throughout, which ground the material in Sam’s context, Sam’s project, Sam’s purpose.
As Sam highlights the text or makes comments about details which seem particularly important or surprising, those annotations won’t end up trapped in this PDF: they’ll feed into future discussion and practice, as we’ll see later.
just as our AI guided Sam to the right sections of this thousand-page book, it could point out which exercises might be most valuable, considering both Sam's background and their aims.
Interlude: Practice and memory
if they try to use this material seriously, they’ll probably feel like they’re standing on shaky ground. And more prosaically, they’re likely to forget much of what they just learned.
why do we sometimes remember conceptual material, and sometimes not?
Conceptual material like what Sam’s learning doesn’t usually get reinforced every day like that. But sometimes the world conspires to give those memories the reinforcement they need.
Sometimes you read about a topic, then later that evening, that topic comes up in conversation with a collaborator
By contrast, sometimes when you learn something, it doesn’t come up again until the next week
The key insight here is that it's possible to arrange that first timeline (the one where the material gets reinforced at the right moments) for yourself.
Courses sometimes do, when each problem set consistently interleaves knowledge from all the previous ones. But immersive learning—and for that matter most learning—usually doesn’t arrange this properly, so you usually forget a lot.
What if this kind of reinforcement were woven into the grain of the learning medium?
My collaborator Michael Nielsen and I created a quantum computing primer, Quantum Country, to explore this idea
After a few minutes of reading, the text is interrupted with a small set of review questions. They're designed to take just a few seconds each: think the answer to yourself, then mark whether or not you were able to answer correctly. So far, these look like simple flashcards... but the schedule comes from spaced repetition (SRS): each time you answer a question correctly, the interval before it reappears grows.
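(A minimal sketch of the expanding schedule behind such a system; the doubling rule and the one-day floor are illustrative, not Quantum Country's actual parameters.)

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    interval_days: float = 1.0  # wait this long before the next review

def review(prompt: Prompt, remembered: bool) -> float:
    """Schedule the next review: back off roughly exponentially on
    success, retreat to a short interval on failure."""
    if remembered:
        prompt.interval_days *= 2.0   # e.g. 1 -> 2 -> 4 -> 8 days ...
    else:
        prompt.interval_days = 1.0    # forgot: start climbing again
    return prompt.interval_days

p = Prompt()
for outcome in (True, True, True, False, True):
    print(f"remembered={outcome}: next review in {review(p, outcome):.0f} days")
```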
Systems like Quantum Country are useful for more than just quantum computing. In my personal practice, I’ve accumulated thousands and thousands of questions. I write questions about scientific papers, about conversations, about lectures, about memorable meals. All this makes my daily life more rewarding, because I know that if I invest my attention in something, I will internalize it indefinitely.
Central to this is the idea of a daily ritual, a vessel for practice. As with meditation and exercise, I spend about ten minutes a day using my memory system
But I want to mention a few problems with these memory systems.
I suspect it often leaves my memory brittle: I’ll remember the answer, but only when cued exactly as I’ve practiced. I wish the questions had more variability.
Likewise, the questions are necessarily somewhat abstract. When I face a real problem in that domain, I won’t always recognize what knowledge I should use
Finally, returning to this talk’s thesis: memory systems are often too disconnected from my authentic practice
Demo, part 5: Dynamic practice
Sam did the work to study that signal processing material, so they want to make sure it actually sticks.
Sam can flip through these questions while waiting in line or on the bus. (This is a good point, that liminal-time can be used for activities that might seem low-efficiency otherwise.)
These synthesized prompts can vary each time they’re asked, so that Sam gets practice accessing the same idea from different angles
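(A purely illustrative sketch of such variation; a real system would presumably synthesize variants with a language model rather than templates, and every template and slot value here is invented.)

```python
import random

# One underlying idea, many surface forms: vary the cue so retrieval
# practice doesn't overfit to a single phrasing.
TEMPLATES = [
    "Why must the sampling rate exceed {factor}x the highest frequency in {signal}?",
    "What artifact appears if {signal} is sampled below {factor}x its top frequency?",
    "For {signal}, how would you pick a safe sampling rate, and why?",
]
SIGNALS = ["an EEG recording", "a microphone input", "an accelerometer trace"]

def synthesize_prompt() -> str:
    """Draw a fresh surface form of the same underlying idea."""
    return random.choice(TEMPLATES).format(factor=2, signal=random.choice(SIGNALS))

for _ in range(3):
    print(synthesize_prompt())
```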
The widget can also include open-ended discussion questions. Here Sam gets elaborative feedback—an extra detail to consider in their answer.
Demo, part 6: Social connection
Just as our AI can help Sam find a tractable way into this space, it can also facilitate connections to communities of practice—here suggesting a local neurotech meetup
meets a local scientist, and sets up a coffee date. With permission, Sam records the meeting.
Sam is surprised and intrigued many times over the course of this conversation. Our AI can notice those moments and help Sam metabolize them
Design principles
Four big design principles are threaded through Sam’s story.
First, we bring guided learning to authentic contexts, rather than thinking of it as a separate activity.
Then, when explicit learning activities are necessary, we suffuse them with authentic context.
Besides connecting these two domains, we can also strengthen each of them. Our AI suggested tractable ways for Sam to "just dive in" to a new interest, and helped Sam build connections with a community of practice.
Finally, when we’re spending time in explicit learning activities, let’s make sure that learning actually works.
Two cheers for chatbot tutors
something really wonderful about language models: they’re great at answering long-tail questions… if the user can articulate the question clearly enough
But when I look at others’ visions of chatbot tutors through the much broader framing we’ve been discussing—they’re clearly missing a lot of what I want. I think these visions also often fail to take seriously just how much a real tutor can do.
I think that’s because the authors of these visions are usually thinking about educating (something they want to do to others) rather than learning
If I hire a real tutor, I might ask them to sit beside me as I try to actually do something involving the material
if I hire a real tutor, as an adult, to learn about signal processing, I’ll tell them about my interest in brain-computer interfaces, and I’ll expect them to ground every conversation in that purpose
these chatbot tutors can’t join me where the real action is
If I hire a real tutor, we’ll build a relationship. With every session, they’ll learn more about me—my interests, my strengths, my confusions. Chatbot tutors, as typically conceived, are transactional, amnesic
Finally, people talk about how Aristotle was a tutor for Alexander the Great. But what’s most valuable about having Aristotle as a tutor isn’t “diagnosing misconceptions”, but rather that he’s modeling the practices and values of an earnest, intellectually engaged adult.
A note on ethics
So in some ways, the system I’ve shown is more like a real tutor. But in my ideal world, I don’t want a tutor; I want to legitimately participate in some new discipline, and to learn what I need as much as possible from interaction with real practitioners
One theme for this Design@Large series is the ethics of AI and its likely enormous social impacts. Let me say: I’m tremendously worried about those impacts, in the general case
But within the narrower domain of learning, my main moral concern is that we'll end up trapped on a sad, narrow path. A condescending, authoritarian frame dominates the narrative around the future of learning.
The famous "bicycle for the mind" metaphor is better because it has no agenda other than the one you bring.
The bicycle asks: where do you want to go? Of course, that question assumes your destination is well-known and clearly charted on some map. But those most rewarding high-growth experiences are often centered on a creative project. (ill-structured)