Slate has an interesting look at what went wrong with HealthCare.gov:
Finkel described the data hub as the master switchboard for the entire sign-up and registration process. By integrating with “external information sources, such as government databases,” it would 1) verify a consumer’s data, including citizenship and identity, and 2) issue queries to these various databases as needed to “verify applicant information data [and] determine eligibility for qualified health plans.” The data hub did not have any of this information itself, nor did users use it directly. Rather, the hub acted as the intermediary between the healthcare.gov website, where consumers would input their information, and a variety of other databases containing consumer and health insurance information, coordinating between them. QSSI “developed” the data hub for CMS and was responsible for “ensuring proper system performance, including maintenance.”
The government has repeatedly claimed that various problems of healthcare.gov are due to server overload—too many people attempting to sign up. The data hub would certainly be ground zero for such load issues, but not the only one. If any of the other databases it spoke to were overloaded, the sign-up process would break anyway. The conundrum may not even be in the data hub or in healthcare.gov, but in some pre-existing citizenship database that’s never had to cope with the massive crush of queries from the hub.
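To make the failure mode concrete, here is a minimal sketch of a hub-style intermediary. This is purely illustrative: the function and database names are hypothetical stand-ins, not the real CMS systems. The point it demonstrates is the one above: the hub holds no data of its own, so if any one external source it relays to is overloaded, the entire sign-up check fails even though the hub itself is healthy.

```python
import random

# Hypothetical stand-ins for the external government databases the
# hub queried; names are illustrative, not the real CMS systems.

def query_identity_db(applicant):
    return {"identity_verified": True}

def query_citizenship_db(applicant):
    # Simulate a pre-existing legacy database buckling under load.
    if random.random() < 0.5:
        raise TimeoutError("citizenship database overloaded")
    return {"citizenship_verified": True}

def verify_applicant(applicant):
    """The hub stores no data itself: it relays queries to external
    sources, and the whole sign-up fails if any single source fails."""
    result = {}
    for check in (query_identity_db, query_citizenship_db):
        try:
            result.update(check(applicant))
        except TimeoutError as exc:
            return {"eligible": None, "error": str(exc)}
    result["eligible"] = True
    return result

random.seed(1)  # deterministic for the demo
print(verify_applicant({"name": "Jane Doe"}))
# With this seed the simulated citizenship backend "fails", so the hub
# cannot determine eligibility even though the hub itself is fine.
```

Note that the error surfaces at the hub, where users and monitoring would see it, even though the actual overload is in a backend the hub merely queries. That is why "server overload" at the sign-up site can really be load failure somewhere else entirely.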
The Slate writer quotes a federal contractor involved in creating the front end — the web pages people see when they come to the site — as blaming the procurement process:
Development Seed President Eric Gundersen oversaw the part of healthcare.gov that did survive last week: the static front-end Web pages that had nothing to do with the hub. Development Seed was only able to do the work after being hired by contractor Aquilent, who navigated the bureaucracy of government procurement. “If I were to bid on the whole project,” Gundersen told me, “I would need more lawyers and more proposal writers than actual engineers to build the project. Why would I make a company like that?”
The article wraps up with the remark that government procurement needs to be improved. That is likely true, but it slightly misses the problem. The problem is that this work was privatized to begin with. That decision almost guaranteed, by itself, that the web site would be a failure.
When a company gives work over to another firm, it loses the ability to control that work. Once a contract has been signed, the outsourcing company — the government in this case — loses control over the product or service. The company providing the outsourced services has only to live up to the contract. SLAs — service level agreements, the items on which the provider is measured — become king. The performance of the provider is measured by its ability to adhere to the SLAs. Making the situation worse, providers are often allowed to largely set the conditions of acceptance and the SLAs themselves.
Most outsourcing is done via a process known as an RFP — a request for proposal. Responses to these RFPs very often include a set of proposed SLAs and acceptance conditions. In many, many cases, those initial suggestions form the basis for all negotiations. Sometimes the proposed terms are allowed to stand, and the negotiations are mostly about the cost of the service. Any monetary penalties are based on the conditions of acceptance and the SLAs.
What this means in practice is that IT management loses control of the IT outcomes. The people providing the service are not responsible to IT management. Rather, they are responsible to their own management, which cares only about the SLAs. This is often compounded by the limited penalties for failure and the constant, ongoing negotiations between the provider and the customer's senior management about what an SLA "means." Instead of having a team dedicated to the success of your enterprise, you have a team dedicated to the success of the contract as measured by what their management says the SLAs mean. Instead of people working toward success, you have people working toward a line in a contract.

This is made even worse when there are multiple vendors responsible for different systems, or even different aspects of the same system, as was apparently the case with Healthcare.gov. Integration of systems can be a difficult process in the best of circumstances; it approaches impossible when the various systems are owned by different companies driven by contracts with very clear and defined SLAs and acceptance conditions — terms that very often have nothing to do with making the whole of the enterprise work effectively.
This is by no means a government-only problem. I have worked in both in-house and outsourced environments throughout my career and have seen these issues in every outsourcing agreement I have ever been a part of. One of the dirty secrets of outsourcing is that much of the cost savings come in the form of lower quality and lost opportunities: the contracts put a brake on doing new work, because any work not defined in the contract requires extra payments. There are many examples of outsourcing gone wrong, and there is growing recognition that the costs are much higher than generally understood.
Outsourcing is often defended not just on cost grounds, but also on the grounds that IT is not a "core competency" of the organization. This is often the justification for outsourcing such functions of government. It is almost entirely nonsense. In the modern age, any organization that depends on customer service has IT as a core competency: smooth access to information and the ability of the customer to easily complete tasks are central to any sense of satisfaction. Leaving those items in the hands of people who aren't, by design, committed to the success of your organization is asking for trouble. Allowing people to sign up for health care is the core competency of healthcare.gov. Outsourcing the tools that allow that to happen makes no sense — and it leads to the failures we have seen this week.