Consider the following thought experiment: We include some mechanism in the tablet to inform the teacher in real time about how well his or her pupils are absorbing the material being taught. We free all teachers to experiment with different software, different strategies, and different ways of using the new tool. The rapid feedback loop will make teachers adjust their strategies to maximize performance.
Over time, we will observe some teachers who have stumbled onto highly effective strategies. We then share what they have done with other teachers.
Notice how radically different this method is. Instead of testing the validity of one design by having 150 out of 300 schools implement the identical program, this method is “crawling” the design space by having each teacher search for results. Instead of having a baseline survey and then a final survey, it is constantly providing feedback about performance. Instead of having an econometrician do the learning in a centralized manner and inform everybody about the results of the experiment, it is the teachers who are doing the learning in a decentralized manner and informing the center of what they found.
Pointer from Mark Thoma. Emphasis added.
Read the whole thing. I had never before thought of randomized controlled trials as embedded in a top-down approach to learning. He is suggesting that decentralized learning could be faster. Might the same be true in medicine? And is this also a case against MOOCs?
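As an aside on mechanics: what Hausmann describes is essentially a decentralized search with rapid, noisy feedback. Here is a minimal sketch of that dynamic (the strategies, effect sizes, and noise model are all invented for illustration), in the spirit of an epsilon-greedy bandit:

```python
import random

# Hypothetical illustration of Hausmann's "crawling the design space."
# Strategy names, true effects, and the feedback noise are assumptions.
TRUE_EFFECT = {"lecture": 0.50, "flipped": 0.55, "drill": 0.45, "peer": 0.60}

def noisy_feedback(strategy):
    """Observed learning gain: true effect plus classroom noise."""
    return TRUE_EFFECT[strategy] + random.gauss(0, 0.10)

def teacher_search(weeks=200, epsilon=0.2):
    """One teacher searching for results via a rapid feedback loop."""
    totals = {s: 0.0 for s in TRUE_EFFECT}
    counts = {s: 0 for s in TRUE_EFFECT}
    for _ in range(weeks):
        if random.random() < epsilon or not any(counts.values()):
            strategy = random.choice(list(TRUE_EFFECT))   # experiment
        else:                                             # exploit best so far
            strategy = max(counts, key=lambda s: totals[s] / max(counts[s], 1))
        counts[strategy] += 1
        totals[strategy] += noisy_feedback(strategy)
    return max(counts, key=counts.get)  # what this teacher settled on

# Decentralized learning: many teachers search independently, then the
# center tallies what they converged on (the "informing the center" step).
found = [teacher_search() for _ in range(100)]
print({s: found.count(s) for s in TRUE_EFFECT})
```

Most of these simulated searches settle on the genuinely better strategy, but a noticeable minority get stuck on a worse one because of classroom noise, which is exactly the tension the skeptics raise.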
Hausmann misses the point of RCTs. Causation is HARD. Hausmann’s decentralized strategy has been shown empirically not to work in precisely the fields he advocates it for. Take education. Over the past 100 years, the world has built millions of schools and educated billions of students in a largely decentralized fashion, yet we know essentially nothing more about primary education than we did in 1900.
RCTs are a technology designed to solve a specific problem: how to assign causation when causes are dense and outcomes may be difficult to measure. We don’t need RCTs for metallurgy, because everyone with a forge can mix alloys, measure the properties of the resulting steel, and get the same answer. We do need them for education, because there are hundreds of hard-to-measure inputs into each outcome, and what happens to work for one teacher in one year may not be replicable.
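To make that concrete, here is a toy simulation (my own numbers, not anything from the comment): a single unmeasured input, teacher skill, drives both whether a method gets adopted and how students score. The naive comparison of adopters to non-adopters badly overstates the method’s effect; random assignment recovers it:

```python
import random

random.seed(0)
TRUE_EFFECT = 0.10  # assumed: the method really adds 0.10 to scores

def outcome(uses_method, skill):
    """Test score = unmeasured teacher skill + method effect + noise."""
    return skill + TRUE_EFFECT * uses_method + random.gauss(0, 0.05)

teachers = [random.gauss(0, 0.20) for _ in range(10_000)]  # hidden skill

# Self-selection: more skilled teachers are likelier to adopt the method.
adopt = [skill > 0 for skill in teachers]
naive = (
    sum(outcome(1, s) for s, a in zip(teachers, adopt) if a) / sum(adopt)
    - sum(outcome(0, s) for s, a in zip(teachers, adopt) if not a)
    / (len(adopt) - sum(adopt))
)

# RCT: a coin flip assigns the method, so hidden skill is balanced.
assign = [random.random() < 0.5 for _ in teachers]
rct = (
    sum(outcome(1, s) for s, a in zip(teachers, assign) if a) / sum(assign)
    - sum(outcome(0, s) for s, a in zip(teachers, assign) if not a)
    / (len(assign) - sum(assign))
)

print(f"true effect {TRUE_EFFECT:.2f}, naive {naive:.2f}, RCT {rct:.2f}")
```

With these made-up numbers the naive estimate comes out around four times the true effect, purely because skill and adoption are entangled.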
It’s a lovely dream. If you had free, perfect, immediate feedback on students’ learning, you would become a more effective teacher, obviously. Problems:
1. Teachers are not that smart [technically adept]; almost none of them can contribute to the pool of innovations.
2. Honest talk about what works in education is buried in politically driven social-science noise.
3. The people delivering new tech to teachers are not that smart either.
4. Finally, if use of the measurement tech isn’t mandatory or passive (a video camera on the class, processed by computer), nobody will use it. Think of all the times you resolved to use a simple to-do-list or note-taking phone app and didn’t stick with it.
The promise is good, and we should have some smart people on it (people who hang around and observe actual student-learning-in-classroom scenarios themselves): software needs to be rebuilt only a handful of times before you have something scalable. To the extent the software produces a usable product (one that ordinary folks can be incented and educated to use), you can then copy it everywhere (which is why we shouldn’t be *too* discouraged by 3).
I wouldn’t work on such software without a convincing plan for solving 2 and 4 (we can work on 1 later).
It may be that hypotheses are developed ‘below’ and the precision comes from ‘above.’ This is what I think happens in medicine. As long as teachers are trained, certified, and graded, these things have to be decided on, I presume. So what are they going to be trained and judged on?
Sort of like the difference between the Soviet economy and the price system.
It could be that teaching is less like physics, where what works is universal and stable, than like show biz, where what works is idiosyncratic and faddish.
Just as there are always new books in the airport purporting to tell you how to be a better salesman, there will always be new studies in the journals purporting to tell us how to make better teachers. Both are pulp nonfiction, and for the same reason.
The reason you need RCTs is that some of the idiosyncratic factors that ‘work’ are really aspects of the personality and character of the teacher that are hard to measure or duplicate. That is, if you are trying to rely on ‘fast learning’ at the individual-teacher level, you are going to have a hard time figuring out what really made the difference unless you can assume that these characteristics are distributed equally on both sides of your trial.
Two teachers can be following the same protocols and teaching the same curriculum, but practically everyone has had the experience of learning from someone with inspiring, motivating leadership qualities: energy, charisma, good looks (but not too good!), and the Fingerspitzengefühl to pick up subtle non-verbal cues and comfort-level feedback and constantly re-tailor or repeat the message to suit the audience.
Lawyers, politicians, and salesmen (but I repeat myself) know well that the same script can play completely differently depending on the skills of the players. That is very hard for non-naturals to learn, and they drop out of audience-interaction jobs at a disproportionate rate because of competitive pressures that don’t (and realistically can’t) apply to teachers in the current system.
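A quick sketch of that ‘equally distributed on both sides’ worry (my numbers, not the commenter’s): even under random assignment, unmeasured charisma only averages out as the number of teachers per arm grows, so a tiny trial, let alone one teacher comparing her own tweaks, mostly measures the teacher rather than the method:

```python
import random, statistics

random.seed(1)
METHOD_EFFECT = 0.05   # assumed small true effect of the protocol
CHARISMA_SD = 0.20     # assumed spread of unmeasured teacher quality

def estimated_effect(n_per_arm):
    """Difference in mean outcomes between treated and control arms."""
    treat = [METHOD_EFFECT + random.gauss(0, CHARISMA_SD) for _ in range(n_per_arm)]
    ctrl = [random.gauss(0, CHARISMA_SD) for _ in range(n_per_arm)]
    return statistics.mean(treat) - statistics.mean(ctrl)

for n in (1, 2, 10, 100):
    estimates = [estimated_effect(n) for _ in range(2000)]
    print(f"{n:>3} teachers/arm: estimate spread "
          f"±{statistics.stdev(estimates):.2f} vs true {METHOD_EFFECT}")
```

With one teacher per arm the spread of the estimate is several times the true effect; only around a hundred teachers per arm does it finally shrink below it.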
As for MOOCs, they should be thought of as one of the many experiments:
“We free all teachers to experiment with different software, different strategies, and different ways of using the new tool. The rapid feedback loop …”
I actually think multiple MOOCs are better mid-scale labs than any single classroom teacher. And effective teaching is more performance art than most teachers have yet acknowledged. Of course, the test of the teaching is the amount actually “learned,” and it seems that the difficulty of getting good feedback reduces the effectiveness of experimentation.