The complex relationship between machine learning and health insurance
The fight between Democrats and Republicans over the future of Obamacare has raised many questions about the future of healthcare. In the United States, which has no universal health care system, former president Obama's Patient Protection and Affordable Care Act prevented health insurers from discriminating against patients on the basis of their pre-existing conditions, introducing annual or lifetime expenditure ceilings, refusing to renew the policies of people who contract certain diseases, or increasing premiums, among other things.
Progress in machine learning means health insurers are increasingly able to develop systems capable of predicting the costs associated with the health care of a specific patient. If such systems are not properly controlled to prevent abuse, companies could use them to maximize their profits by refusing to take on customers with higher probabilities of requiring costly treatment.
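To make the point concrete, a minimal sketch of the kind of cost-prediction model described above, using ordinary least squares on entirely synthetic data. The features (age, smoking status, number of chronic conditions) and the dollar figures are hypothetical illustrations, not drawn from any real insurer's model, which would use far richer, and far more sensitive, inputs.

```python
import numpy as np

# Synthetic patient data -- purely illustrative, not real figures
rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 80, n)
smoker = rng.integers(0, 2, n)       # 0 = non-smoker, 1 = smoker
chronic = rng.integers(0, 4, n)      # number of chronic conditions

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), age, smoker, chronic])

# Synthetic "true" annual cost in dollars, plus noise
cost = 1000 + 200 * age + 5000 * smoker + 3000 * chronic \
       + rng.normal(0, 1500, n)

# Fit the model by ordinary least squares
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)

# Predicted annual cost for a 55-year-old smoker
# with two chronic conditions
predicted = coef @ np.array([1, 55, 1, 2])
```

Even this toy model ranks patients by expected cost; an insurer running a far more sophisticated version could quietly screen out the most expensive applicants, which is precisely the abuse the article warns about.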
At the same time, it is clear that the future of healthcare is going to be based on preventive models, as seen in the entry of companies like Apple and Amazon into the sector, the progressive availability of new devices and diagnostic tests, the development of genetic analytics, and any number of wearables and related devices initially dedicated to fitness or wellbeing but of increasing interest in the preventive health market.
Undoubtedly, we are heading toward a future in which we will not only use the health system when we notice the first signs of illness, but one in which a set of devices and practices will allow us to monitor our health on a constant basis. In such an environment, ensuring that the company in charge of my health does not use my data to deny me a policy based on my probability of contracting diseases costly to treat is fundamental: the alternative is to encourage insurers only to sign up healthy people.
The question, then, is how to impose control mechanisms on insurers that guarantee ethical behavior, while allowing them to address, for example, patient behavior likely to generate costs. This is not just about patients who smoke, but those who refuse to monitor certain aspects of their health or to undergo certain routine tests, making it more costly to treat a problem that could have been caught earlier. Designing such a system presents a major challenge. At the same time, through the data a health insurer can obtain about a customer simply from their use of medical services, it is possible to form an idea of their likely risk of disease and simply refuse them cover, all this in a sector not noted for its transparency.
If an insurance company can basically charge me what it wants, how can we avoid abuse and unethical behavior once that company has access to increasingly powerful algorithms, able to predict the level of expenditure a patient might incur? How to avoid abuses in an environment where, by definition, one of the parties will have more and better information, both aggregated and individualized, on the other?
(In Spanish, here)