The book by Jerry Muller will be out shortly. It makes a strong case against the overuse of quantitative measures to set compensation. Education is one example.
If you believe the null hypothesis (that teachers have little systematic effect on measured outcomes), then compensating teachers based on outcomes only introduces randomness into their pay.
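To make that claim concrete, here is a minimal simulation sketch. This is my own illustration, not from the book or the post; the variable names and the zero effect size are assumptions chosen to represent the null hypothesis. Under it, a bonus tied to measured outcomes is uncorrelated with teacher quality, i.e., a lottery.

```python
import numpy as np

# Hypothetical sketch (not from the post): under the null hypothesis,
# teacher quality does not move measured outcomes, so an outcome-based
# bonus is pure noise with respect to quality.
rng = np.random.default_rng(0)

n_teachers = 10_000
quality = rng.normal(size=n_teachers)    # true (unobserved) teacher quality
effect_size = 0.0                        # the null: quality has no effect
luck = rng.normal(size=n_teachers)       # student draws, classroom shocks
outcomes = effect_size * quality + luck  # measured test-score gains

bonus = np.maximum(outcomes, 0.0)        # pay tied to measured outcomes

# Under the null, the bonus tracks luck, not quality:
print(f"corr(quality, bonus) = {np.corrcoef(quality, bonus)[0, 1]:.3f}")
```

Set effect_size above zero and the correlation reappears; the dispute is over how far above zero it really is.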
Meanwhile, in a piece on health care and education that does not refer to the book, Megan McArdle writes,
So when we measure outputs, we are getting at best a very distorted picture of the value of the services provided. Modern industrial management is simply not designed for this sort of situation. If you feed human inputs into a machine system, you are quite likely to grind up the humans in the process.
Read the whole thing.
The new system induces hospitals to take only the cases that might have good outcomes. That was the essence of the article; that is not the null hypothesis. And it seems the plan is working: we should get more years of life, overall, from the discrimination. Where am I wrong? Am I violating a moral principle? OK, but the article was about metrics; that was the assumption. Don’t jump on me about morality until we get a post on morality.
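The selection mechanism this comment describes is easy to see in a toy model. This is my own sketch, not from the article, and the numbers are made up: if the metric is the survival rate of admitted patients, admitting only good-prognosis cases raises the metric even when the quality of care is identical.

```python
import numpy as np

# Hypothetical toy model (not from the article): the hospital's metric
# is survival rate among admitted patients. Care quality is the same in
# both scenarios; only the admission rule changes.
rng = np.random.default_rng(1)

n = 100_000
prognosis = rng.uniform(size=n)             # baseline survival probability
survived = rng.uniform(size=n) < prognosis  # outcomes, identical care

admit_all = survived.mean()                  # no screening
screened = survived[prognosis > 0.5].mean()  # take only likely-good cases

print(f"metric, admit everyone:       {admit_all:.2f}")  # about 0.50
print(f"metric, screen for prognosis: {screened:.2f}")   # about 0.75
```

The point is only about the metric: it improves without any change in care. Whether the screened-out patients are better off is the separate moral question the comment defers.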
What did you think of McArdle’s unique take on CBO transparency?
For an in-depth take on the abuse of quantitative measures, I really loved Scott Alexander’s book review of Seeing Like A State.
Of course, the book is more in-depth than the review, but even from the (long) review you can see where the temptation to abstract, and then abuse the abstraction, comes from.
5 High Modernist Principles:
1 – there can be no compromise with existing infrastructure
2 – human needs can be abstracted and calculated
3 – the solution is universal and applies everywhere
4 – all of the relevant rules should be explicitly determined by technocrats then followed to the letter
5 – there is nothing whatsoever to be gained or learned from the people subject to the rules
So when we measure outputs, we are getting at best a very distorted picture of the value of the services provided. Modern industrial management is simply not designed for this sort of situation. If you feed human inputs into a machine system, you are quite likely to grind up the humans in the process.
Is this another version of the weak resistance against Big Tech and Data? I work in a semi-office data environment, and this stuff will not go away; in fact, it is growing stronger every day. Big Tech and Data have grown only because it is profitable to go down this way. (And yes, there are lots of hiccups.)
In terms of libertarian economists, yourself and Megan McArdle, I am not sure what you are complaining about here. It is incredibly de-humanizing in the long run, but companies find it profitable and beneficial to them. So why would it stop? (Much like the security state, in which a lot of the increase in tech and camera security comes from private actors, not governments.)