
Calling Dr. Robot

As in many other economic sectors, artificial-intelligence applications in medicine seem to hold unlimited promise. But to realize AI’s full potential in diagnosis, records management, hospital operations, and other areas of medicine, innovators and regulators alike must heed the lessons of past technological revolutions that failed.

PARIS – Unintended consequences in the field of artificial intelligence (AI) tend to make for lively headlines, such as when Microsoft introduced a Twitter chatbot that quickly began spewing racist slurs. But whether it is a case of Google’s image-recognition algorithm labeling black people as “gorillas” or Tesla’s autonomous vehicles killing their drivers, AI’s controversies have yet to dampen its appeal.

As AI applications multiply, so, too, will the reported failures, leading eventually to a public and regulatory backlash. Nowhere is this truer than in health care, where investment in AI reached an all-time high in the second quarter of 2018. From alleged medical-device failures in Canada and Europe to recent concerns about the performance of IBM’s Watson Health, the risks of adopting new technologies in the health-care sector are clear.

But AI also promises to revolutionize the management of health records and patient risk, diagnosis, hospital operations, and other areas of medicine. It is little wonder, then, that the global AI health-care market is expected to surpass $34 billion by 2025. And public support for the use of AI in health care is already high across a broad range of countries. In the United States, for example, 53,000 patient-monitoring devices – each gathering data for AI-driven predictive analytics – were in use by the end of 2017. That number is expected to reach 3.1 million by 2021 – an annual growth rate of 176%.
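
The arithmetic behind that projection is straightforward to check. Here is a minimal sketch (in Python; the four-year compounding window from end-2017 to 2021 is an assumption on my part):

```python
# Check the implied annual growth rate of the patient-monitoring figures
# cited above: 53,000 devices at the end of 2017, projected to reach
# 3.1 million by 2021.
devices_2017 = 53_000
devices_2021 = 3_100_000
years = 4  # assumption: four annual compounding periods (end-2017 to 2021)

# Compound annual growth rate: (end / start)^(1 / years) - 1
cagr = (devices_2021 / devices_2017) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.0%}")
# Prints roughly 177%, consistent with the ~176% figure cited in the text.
```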

Obviously, such rapid growth in the use of AI will have far-reaching implications for the health-care market. But it will also have important social and political consequences. The more people come to rely on AI-driven solutions, the more they will demand a say in how these technologies are developed and deployed.

A Unique Innovation Environment

Technology has been at the heart of health care since antiquity. In just the past few decades, it has played a pivotal role in improving human wellbeing and extending the average lifespan. For most people, it is difficult to imagine a world without X-rays, magnetic resonance imaging, ultrasound, portable defibrillators, pacemakers, laser surgery, cochlear implants, and artificial organs, even though most of these technologies are relatively new.

Yet, despite the familiarity of many health technologies, the health-care sector is a peculiar environment for innovation. Because most of the products and services introduced in health care have a direct effect on human lives, the risks and costs of failure have always been high.

Moreover, the prospective users of new health technologies will almost always be health-care practitioners, who are constrained by their own limited knowledge as well as by ethical codes and regulations. Health care, after all, is understandably more regulated and regimented than almost any other sector. And this regulatory landscape is further complicated by the fact that rules and standards can differ from country to country, and sometimes even within countries.

The entry of new players has also added to the complexity of the health-care market. Traditional technology companies such as GE, Philips, and Siemens have long been present in the sector, but now they are competing with tech giants like Apple, Amazon, Google, and Microsoft, each of which is developing new AI applications.

From Diffusion to Disillusion

As with any product-development plan, a key indicator of success is rapid uptake by end users. To achieve that, the big tech companies are actively pursuing marketing strategies that tend to overpromise the benefits of AI products, while understating the risks of failure. Implicit in their best-case scenarios is an emphasis on originality over reliability. But while novelty may well appeal to enthusiastic early adopters of run-of-the-mill consumer products, it has less purchase in health care. Most patients are reluctant to serve as guinea pigs.

The enthusiastic early adopter is a well-known concept in innovation studies – a field pioneered by the American scholar Everett Rogers in 1962, and later popularized by the organizational theorist Geoffrey A. Moore in his best-selling book, Crossing the Chasm. According to Moore, early adopters – roughly the first 20% of people or organizations to use a new technology – play a key role in moving a product across the “chasm” from niche to mainstream, because they provide the crucial feedback necessary to improve its usability and broaden its appeal. Early adopters, Moore argues, are generally happy with a minimally viable product as long as it offers exciting new functions. But most other users require more convincing, and they expect a fully debugged product.

Though the early-adopter approach is a high-risk strategy that puts the future of a product in the hands of amateurs, technology companies entering the health-care sector have embraced it. But it is not obvious that the same rules apply here: because early and late adopters alike are licensed health-care providers, they operate at roughly the same skill level and within the same professional confines.

In fact, in 2001, the British gynecologist J.W. Scott argued that innovations in health and medicine often follow a different path, now known as “Scott’s Parabola.” According to Scott, novel medical techniques – often following the emergence of new technologies – can gain currency among medical practitioners and become the new standard treatment very quickly on the back of initially positive feedback. But with time, reports of failures or potential drawbacks begin to emerge, thus reversing the process. As negative headlines begin to pile up, the initial benefits are forgotten, and the public turns against the treatment altogether.

IBM’s “Watson for Oncology” has followed a similar pattern. After initially showing great promise, the supercomputer was found to have delivered “multiple examples of unsafe and incorrect treatment recommendations.” The other technology companies now rushing to the market with their own AI applications should heed that precedent, lest they, too, succumb to Scott’s Parabola.

Another cautionary tale comes from the medical-device industry, where eager practitioners and aggressive marketers have created an environment of irrational exuberance. For example, research shows that between 2006 and 2013, the practitioners and hospitals that were most overconfident in adopting implantable cardioverter-defibrillators (ICDs) saw higher patient mortality rates, and were thus quicker to scale back their use of the technology.

Adoption Methods

Generally, for an innovation to gain currency over the long term, prospective users need to be intimately acquainted with how it functions. Only then can they determine how it compares to existing technologies, what skills and infrastructure it requires, and whether it is compatible with professional values and codes of ethics. Making such determinations, however, requires firsthand experimentation and a close accounting of costs and benefits. So, how should AI developers and health-care regulators proceed?

There is no easy answer, because a wide range of overlapping factors can determine how, and at what pace, technological innovations are adopted. In the case of health care, such factors include financial incentives, labor-market conditions, the relative skill level of credentialed practitioners in a given setting, and the overarching regulatory regime for medical technologies.

For example, a 2011 study in Health Policy found that hospitals where providers receive a lump-sum fee for all services provided to a patient in a given diagnosis-related group (DRG) show “higher levels of technology diffusion” than hospitals without such payment schemes. On the other hand, in the absence of such constraints, providers might still adopt new technologies to stand out from the crowd, while passing the costs along to the end payer. Similarly, physicians who are reimbursed on the basis of performance might also have an incentive to embrace new efficiency- or outcome-enhancing innovations.

Complicating matters further is the fact that AI could be adopted not just by physicians and hospitals, but also by digital health-care platforms, insurance companies and payers, individual patients, and many other groups. But whether AI is used for diagnostics, treatment, coordination of care, or various business-side purposes, the paradox of Scott’s Parabola remains: fast and wide diffusion does not guarantee a product’s success, and could even work against it in the event of a backlash.

Regardless of how fast machines become, humans will always need time to understand how new AI applications work and what effect they are having on their operating environment. If an organization adopts a technology prematurely or in haste, it runs a greater risk of encountering errors and malfunctions. And if the costs appear to be outweighing the benefits, the technology may have to be abandoned altogether.

The Tortoise Strategy

Another risk AI developers face is “suboptimal lock-in.” If an eager technology company rushes a minimally viable AI tool to market, it risks becoming forever associated with a suboptimal level of performance. In that case, the first-mover strategy will have backfired by enabling latecomers with more developed products to gain the upper hand.

Given this possibility, technology companies developing health-care AI applications would do well to learn from pharmaceutical companies, which tend to take a longer, slower route to market. They should take the time to work with what economist Eric von Hippel calls “lead users.” These are users “whose present strong needs will become general in a marketplace months or years in the future.” Lead users can play the role of co-designers, guiding the development of a new technology or service from its minimally viable stage to that of a finished product.

To harness this dynamic, the government of Singapore recently introduced a “regulatory sandbox” to facilitate innovation in the delivery of health-care services over telemedicine platforms. The program allows lead users (in this case service providers) to engage with potential beneficiaries (patients) in a transparent, safe regulatory environment. Likewise, the government of Abu Dhabi has introduced guidelines, and thus legal clarity, for the deployment of AI-based products and services in the health-care sector.

Yet these examples are the exceptions. Generally speaking, governments have been slow to develop a legal and regulatory framework to shepherd new AI applications from development to deployment, despite the fact that many countries are pursuing national AI strategies, some of which even target health care as a high-priority sector.

The OECD, for example, estimates that just half of its member states have national policies to ensure that data from electronic health records is accessible to clinicians and used to monitor disease outbreaks, facilitate research, and improve patient safety. This represents a major missed opportunity. With smart regulation, governments can help build public trust in AI applications and provide clear minimum parameters for designing “whole products.” In the absence of a forward-looking regulatory framework, otherwise promising solutions may flame out early.

Reining in the Revolution

In 1989, Regina Herzlinger of Harvard Business School argued that a potential revolution in health care had failed as a result of mismanagement. At the time, “profound changes in technology, in population characteristics, and in social expectations” had led many to believe that the US health-care system could be transformed for the better.

But those leading the revolution, Herzlinger writes, “were so blinded by the vision of the dazzling new world they hoped to forge that they neglected the details of management that would breathe life into their vision.” Like some in AI today, they overestimated the rewards of new applications, while discounting the potential costs.

Fortunately, those leading the AI revolution in health care still have time to adjust course. To that end, they should adopt a demand-driven approach to innovation, so that lead users are involved directly in the design and testing of solutions under quasi-clinical conditions. This strategy would ensure that users’ needs are actually being met, while preventing the Icarus-like rise and fall of new AI applications predicted by Scott’s Parabola.

Finally, governments, too, must step up as regulators. The medical-device failures of recent years should serve as a wake-up call. Before the AI revolution gains further momentum, we need stricter standards and a system for monitoring the effects of new technologies on individual and public health. Otherwise, the opportunity to reform a sector that desperately needs it will be lost once again.
