The exchanges will be mostly working by March 2014, but by then the risk pool will be dysfunctional. In the meantime, real net prices will creep up, if only through implicit rationing and restrictions on provider networks. The Obama administration will attempt to address this problem — unsuccessfully — through additional regulation.
I disagree with the March 2014 date. My view of the distribution of likely outcomes is that it is bimodal. There is a high probability that the exchanges will be working by the end of November. I think there is an even higher probability that they will never work.
Why the end of November is plausible:
1. We may be hearing overly dire descriptions of the state of the system. Anyone from the outside who looks at a system will think it cannot possibly work. I once worked with a CIO who said that one of his iron laws was that anyone new on a systems project would say, “The person who originally designed this system was an idiot.”
2. Jeff Zients, the new manager, has put a stake in the ground for late November. If he thought that was unlikely, he would, if nothing else, be taking a big personal risk with his own reputation.
3. CMS, the agency that was in charge, is actually a pretty effective group. They also pulled off something similar in implementing the Medicare prescription drug plan. It is plausible that they anticipated the major problems, and only minor fixes are now needed.
Why “never” is even more plausible:
1. The fundamental challenges may be very great. In the worst case, suppose that some of the legacy systems that the ACA web site has to access were built around 1975. Back then, in order to pull data out of such a system, you wrote a SAS program, put it into the queue of an IBM 370, waited for an operator to mount a data tape, and, if all went well (no JCL errors, no logic errors in your SAS code), after a couple of hours you had a nice fat printout. Now we want to be able to query those systems in real time, with perhaps hundreds of queries arriving at once. Hmmm… maybe not even hotshot web programmers wielding the latest methodological buzzwords can pull that off so easily.
2. While the Suits talk about bugs or glitches, it looks to geeks as though the problem is design flaws. Those are very hard to fix, particularly in a system that is so large already. Everything about the existing design is there for a reason. It may not be a good reason, but if you “fix it” without understanding the reason, you could be in for a nasty surprise. I just don’t see how you fix design flaws in four weeks.
3. Everyone says that there was not sufficient time to test the system before putting it into operation. Between now and the end of November, it does not seem as though there is sufficient time to test any major changes to the system. If you redesign parts of the system, then you have to write a test plan that is appropriate for the new design. Four weeks may not even be enough time to write a good test plan, much less carry it out.
4. If it is still broken at the end of November, the chances increase that starting over is the fastest path to a working system. But starting over requires a stronger political consensus in favor of the policy that the system is supposed to implement. And we do not have that.
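The batch-era bottleneck in item 1 is commonly worked around by putting a pre-built extract in front of the slow system rather than querying it live. A minimal sketch of that idea (all names and numbers hypothetical, not drawn from the actual ACA architecture): a read-through cache serves real-time queries from a nightly bulk load, so only cache misses pay the batch-backend cost.

```python
import time

def batch_backend_lookup(record_id):
    """Stand-in for a 1970s-era batch query: imagine hours of queue
    time compressed here into a short synchronous delay."""
    time.sleep(0.001)  # simulated latency; the real thing took hours
    return {"id": record_id, "income": 42000}

class NightlyExtractCache:
    """Read-through cache: real-time requests are served from a
    pre-built extract; only misses fall through to the slow backend."""
    def __init__(self):
        self._cache = {}
        self.backend_calls = 0

    def preload(self, record_ids):
        # The "nightly batch job": bulk-load everything ahead of time.
        for rid in record_ids:
            self._cache[rid] = batch_backend_lookup(rid)
            self.backend_calls += 1

    def query(self, record_id):
        if record_id not in self._cache:  # miss: pay the batch cost
            self._cache[record_id] = batch_backend_lookup(record_id)
            self.backend_calls += 1
        return self._cache[record_id]

cache = NightlyExtractCache()
cache.preload(range(100))
for rid in range(100):       # 100 "real-time" queries
    cache.query(rid)
print(cache.backend_calls)   # still 100: no per-query backend hits
```

The trade-off, of course, is staleness: the answers are only as fresh as last night's extract, which may or may not be acceptable for eligibility checks.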
Megan McArdle has more comments, focusing on the disconnect between thinkers and doers in the Obama Administration.
Who knew that an economist could have such a general understanding of software development? For the record, I'm not being sarcastic – as someone who has a lot of experience in the field, I think your observations are spot-on. I've enjoyed your commentary.
“While the Suits talk about bugs or glitches, it looks to geeks as though the problem is design flaws.”
This leapt out at me. I've seen it a hundred times, and so has anyone who works in software. You deliver what they asked for and they say, yes, but what we really MEANT was…
Are you suggesting they are running legacy software on legacy hardware? I doubt you could keep a 1975 mainframe running this long — it’s not like they still make the parts. They don’t even use the technology that made the chips that made the memory boards for those things.
If they are running emulated or recompiled legacy code, it might not be as slow as you think. Back in 1975, a million CPU instructions was a lot. Now it’s nothing. The same code running on modern hardware would be blindingly fast compared to a modern, interpreted language, or even modern compiled code padded with lots of sophisticated libraries.
I’m guessing the only way to “fix” this quickly is to trust all the user input and not verify anything. Then they don’t have to query IRS or anyone else. They just list some insurance plans, give some subsidy estimate, and rely on the insurance companies to sort it all out when they actually sign people up. It will all be a mess, but then they’ll be able to blame evil insurance companies, not the government.
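A minimal sketch of that "accept now, verify later" pattern (names hypothetical): input is recorded at face value, and verification against outside sources is deferred to a reconciliation queue that someone has to clean up afterward.

```python
from collections import deque

verification_queue = deque()
enrollments = []

def enroll(applicant):
    """Accept the application at face value; no synchronous checks."""
    record = {"applicant": applicant, "verified": False}
    enrollments.append(record)
    verification_queue.append(record)   # reconcile later, offline
    return record

def reconcile(check_income):
    """Later batch pass: verify queued records against an external
    source (here a caller-supplied check function standing in for
    an IRS lookup)."""
    mismatches = []
    while verification_queue:
        record = verification_queue.popleft()
        if check_income(record["applicant"]):
            record["verified"] = True
        else:
            mismatches.append(record)   # someone must sort these out
    return mismatches

enroll({"name": "A", "stated_income": 30000})
enroll({"name": "B", "stated_income": -1})  # nonsense accepted anyway
bad = reconcile(lambda a: a["stated_income"] > 0)
print(len(bad))  # 1 record left for humans to untangle
```

The point of the sketch is where the mess lands: everything that fails the deferred check becomes someone else's cleanup problem, which is exactly the dynamic described above.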
The problems are going to be all management, planning, testing, etc. If they really start throwing out designs and yelling “do something!”, the project could just dissolve into a broken mess. To me, it’s just not credible to expect anything to get done in a month on a system this size. Without throwing out all their process, they couldn’t make and test even trivial changes in that time.
Looking at the front end web site (where you can just “View Page Source” in the browser), they didn’t optimize much at all. The same is true of the California exchange, where they are hitting the server 10 times harder than they have to by including lots of scripts and style sheets. They are blowing 2.2 meg and 20 hits on a five-line form! And all the UI goop they added just makes the site harder to use if you have grandma-level web skills. It’s covered with annoying popups that have to be dismissed correctly, for example.
So I’m not seeing a quality product in the areas where we can see the code. I think it was all just “contract it out and hope for the best.” I’d hate to be one of the programmers at these contract places. It must be a real death-march there now.
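The kind of page-source audit described above can be done mechanically. A rough sketch using only the standard library (the sample HTML is invented, not the actual exchange markup): count the external scripts and stylesheets a page forces the browser to fetch.

```python
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    """Count external fetches implied by <script src=...> and
    <link rel="stylesheet"> tags in a page's source."""
    def __init__(self):
        super().__init__()
        self.scripts = 0
        self.stylesheets = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.scripts += 1
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.stylesheets += 1

# A toy page standing in for a bloated exchange front end.
sample = """
<html><head>
<link rel="stylesheet" href="a.css"><link rel="stylesheet" href="b.css">
<script src="jquery.js"></script><script src="widgets.js"></script>
<script src="analytics.js"></script>
</head><body><form>five-line form</form></body></html>
"""
counter = ResourceCounter()
counter.feed(sample)
print(counter.scripts + counter.stylesheets)  # 5 extra server hits
```

Run against a real page source, each count is a round trip to the server; that is the multiplier behind the "hitting the server 10 times harder than they have to" observation.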
“I’m guessing the only way to “fix” this quickly is to trust all the user input and not verify anything. Then they don’t have to query IRS or anyone else. They just list some insurance plans, give some subsidy estimate, and rely on the insurance companies to sort it all out when they actually sign people up. It will all be a mess, but then they’ll be able to blame evil insurance companies, not the government.”
Yeah, I’ve had similar thoughts.
“I’m guessing the only way to “fix” this quickly is to trust all the user input and not verify anything. Then they don’t have to query IRS or anyone else. They just list some insurance plans, give some subsidy estimate, and rely on the insurance companies to sort it all out when they actually sign people up. It will all be a mess, but then they’ll be able to blame evil insurance companies, not the government.”
Given the political incentives at play, and the Doctors' Plot-style posturing the administration is trying out vis-à-vis cancellations and insurance companies, I think you may be on to something there. Someone like Zients doesn't end up doing what he did at OMB without a ready competency in such things. Given how Josh Barro let the mask drop the other day, there will be no shortage of smart people happy to push such a narrative because, after all, the final outcome will be good for us. The dishonesty involved (from their perspective) is no worse than the lie you tell a kid on a long car trip to placate them: "we'll be there in fifteen minutes." Give 'em ice cream when you arrive and it's all good.
“””Given how Josh Barro let the mask drop the other day”””
What is this referring to?
Ah, probably this:
http://www.businessinsider.com/your-private-health-insurance-is-really-a-government-program-2013-10
I was referring to his twitter feed. He stated that many government programs are premised on the truth that people don’t know what’s good/best for them. I don’t have the URL near to hand.
And his statement was specifically referencing ACA and its attendant controversies.
I'm inclined to suggest that the number of half-billion-dollar IT projects that have come in on time, on budget, and with the features expected at the beginning of the project, totaled across the history of the world, can be counted in the low single digits. Perhaps none at all, ever.
The nature of large IT projects is that there is a real amount of time a project takes, and effectively nothing within normal boundaries can be done from the management side to alter that. The amount of effort required to complete a project has a ±400% error margin at the beginning, usually on the plus side. IT projects cannot, by the nature of the problem, deliver the certainty necessary to do what Obamacare was trying to do on a fixed timeframe.
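The claimed ±400% margin can be made concrete with a toy simulation. This is my illustration, not the commenter's model: I assume estimation error is roughly symmetric in log space, which turns into skewed overruns in real time ("usually +").

```python
import random

random.seed(2013)

def simulated_overrun(n_trials=10000, estimate_months=12):
    """Draw actual/estimated effort ratios from a lognormal-ish model:
    symmetric uncertainty in log space becomes skewed overruns in
    real space, so most projects land over the estimate."""
    ratios = [2 ** random.gauss(0.5, 1.0) for _ in range(n_trials)]
    overruns = sum(1 for r in ratios if r > 1)
    worst_decile = sorted(ratios)[int(0.9 * n_trials)]
    return overruns / n_trials, worst_decile * estimate_months

frac_over, p90_months = simulated_overrun()
print(round(frac_over, 2))  # fraction of projects over estimate
print(round(p90_months))    # 90th-percentile actual duration, months
```

Under these (made-up) parameters roughly two-thirds of simulated projects overrun, and the bad tail of a "12-month" project stretches past three years, which is the flavor of uncertainty the comment is describing.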
“Why the end of November is plausible:”
“Why “never” is even more plausible:”
arnold does have a software development background and sold a software company, so he's not "just an economist," contrary to the surprise one commenter expressed.
still, arnold doesn't recognize the uncertainty of applying his past experiences to an atypically large project. Nor does he seem aware that he has very little actual information about the ACA's technical problems. Jeff Zients is far more successful as an entrepreneur, and, more importantly, has far more information, responsibility, and accountability than arnold. But arnold still thinks his "never gonna work" is more likely than Zients's late-November date. Perhaps hubris is essential to building a blogging audience. It's certainly worked for krugman. Regardless, it's a #bayesianfail.
(caveat: I am also a software entrepreneur. In my case, still practicing, meaning I have to admit what I don't know.)
Zients also has an incentive to spin this positively. Do you really think his word is as frank or candid as Kling’s?
Furthermore, Kling explicitly acknowledges Zients as a reason to believe a functioning website before December has a high probability.
Snarky (and therefore potentially very inaccurate) thoughts on two lines from the linked McArdle post:
“””The White House frequently weighed in on items such as the user interface of the website or various policy details, but it didn’t appear much interested in the information-technology portion.”””
Bikeshedding
“””The IT folks apparently did not do a good job of communicating their growing sense of dread to the rest of the people involved in this massive project.”””
I wonder how the other folks would have responded to bad news: “Oh, I guess our deadline needs to slip, or features need to be cut” or “You guys suck. Fix it.” The answer might affect the IT folks’ willingness to raise concerns.
I copied that second quote before I read further down the article. McArdle makes the good point that the gulf between non-technical folks’ expectations and reality was so big, it’s unlikely the technical folks just clammed up completely.
The problem, IMHO, is that on one hand he says he has a punch list of 100 items. But if they didn't do a complete test, then I can PROMISE that they haven't even tested the more complicated logic. They couldn't have, since the easy stuff didn't work. So he thinks he has a punch list of 100 items, but the real list is going to be thousands of items. And they can't even create a comprehensive list by Dec. 1.
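The gap between a 100-item punch list and "thousands of items" follows from simple combinatorics. A sketch with invented numbers (the fields and state counts are hypothetical, not the actual eligibility rules): even a handful of enrollment inputs multiplies into far more distinct cases than any hand-made list covers.

```python
# Hypothetical enrollment inputs and how many states each can take.
dimensions = {
    "citizenship": 3,      # citizen / legal resident / other
    "income_bracket": 5,
    "household_size": 6,
    "state_medicaid": 2,   # expansion vs. non-expansion state
    "employer_offer": 2,
    "tobacco_use": 2,
}

paths = 1
for n in dimensions.values():
    paths *= n  # every combination is a distinct case to exercise
print(paths)    # 720 combinations from just six inputs
```

One test per combination already dwarfs a 100-item punch list, and real eligibility logic has far more than six inputs, so the untested space grows multiplicatively with each field added.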
I think you've hit the nail on the head. Given the lack of upfront testing, what will regression testing find? I suspect the first few iterations may well uncover more bugs than they fix.
Are the ACA exchanges a “Red Sox” technology?
haha. The Red Sox are no longer a Red Sox technology. The original concept is here: http://www.ideasinactiontv.com/tcs_daily/2004/03/red-sox-technologies.html