The Big Picture
The AI Safety Debate Is All Wrong
Today's anxiety about the risks posed by artificial intelligence reflects a tendency to anthropomorphize AI and causes us to focus on the wrong issues. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to.
BOSTON – A huge industry has emerged in recent years as China, the United States, the United Kingdom, and the European Union have made the safety of artificial intelligence a top priority. Obviously, any technology – from cars and pharmaceuticals to machine tools and lawnmowers – should be designed as safely as possible (one wishes that more scrutiny had been brought to bear on social media during its early days).
But simply raising safety concerns isn’t enough. In the case of AI, the debate is focused far too much on “safety against catastrophic risks due to AGI (Artificial General Intelligence),” meaning a superintelligence that can outperform all humans in most cognitive tasks. At issue is the question of “alignment”: whether AI models produce results that match their users’ and designers’ objectives and values – a topic that leads to various sci-fi scenarios in which a superintelligent AI emerges and destroys humanity. The best-selling author Brian Christian’s The Alignment Problem is focused mostly on AGI, and the same concerns have led Anthropic, one of the main companies in the field, to build models with their own “constitutions” enshrining ethical values and principles.
But there are at least two reasons why these approaches may be misguided. First, the current safety debate not only (unhelpfully) anthropomorphizes AI; it also leads us to focus on the wrong targets. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to.
No amount of safety research would have prevented a car from being used as a weapon at the white supremacist rally in Charlottesville, Virginia in 2017. If we accept the premise that AI systems have their own personalities, we might conclude that our only option is to ensure that they have the right values and constitutions in the abstract. But the premise is false, and the proposed solution would fall far short.
To be sure, the counterargument is that if AGI were ever achieved, it really would matter whether the system was “aligned” with human objectives, because no guardrails would be left to contain the cunning of a superintelligence. But this claim brings us to the second problem with much of the AI safety discussion. Even if we are on the path to AGI (which seems highly unlikely), the most immediate danger would still be misuses of non-superintelligent AI by humans.
Suppose that there is some time (T) in the future (say 2040) when AGI will be invented, and that until this time arrives, AI systems that don’t have AGI will still be non-autonomous. (If they were to become self-acting before AGI, let that day be T.) Now consider the situation one year before T. By that point, AI systems will have become highly capable (by dint of being on the cusp of superintelligence), and the question that we would want to ask is: Who is in control right now?
The answer would of course be human agents, either individually or collectively in the form of a government, a consortium, or a corporation. To simplify the discussion, let me refer to the human agents in charge of AI at this point as Corporation X. This company (it could also be more than one company, which might be even worse, as we will see) would be able to use its AI capabilities for any purpose it wants. If it wanted to destroy democracy and enslave people, it could do so. The threat that so many commentators impute to AGI would already have arrived before AGI.
In fact, the situation would probably be worse than this description suggests, because Corporation X could bring about a similar outcome even if its intention was not to destroy democracy. If its own objectives were not fully aligned with democracy (as is inevitable), democracy could suffer as an unintended consequence (as has been the case with social media).
For example, inequality exceeding some threshold may jeopardize the proper functioning of democracy; but that fact would not stop Corporation X from doing everything it could to enrich itself or its shareholders. Any guardrails built into its AI models to prevent malicious use would not matter, because Corporation X could still use its technology however it wants.
Likewise, if there were two companies, Corporation X and Corporation Y, that controlled highly capable AI models, either one of them, or both, could still pursue aims that are damaging to social cohesion, democracy, and human freedom. (And no, the argument that they would constrain each other is not convincing. If anything, their competition could make them even more ruthless.)
Thus, even if we get what most AI safety researchers want – proper alignment and constraints on AGI – we will not be safe. The implications of this conclusion should be obvious: We need much stronger institutions for reining in the tech companies, and much stronger forms of democratic and civic action to keep governments that control AI accountable. This challenge is quite separate and distinct from addressing biases in AI models or their alignment with human objectives.
Why, then, are we so fixated on the potential behavior of anthropomorphized AI? Some of it is hype, which helps the tech industry attract more talent and investment. The more that everyone is talking about how a superintelligent AI might act, the more the public will start to think that AGI is imminent. Retail and institutional investors will pour money into the next big thing, and tech executives who grew up on sci-fi depictions of superintelligent AI will get another free pass. We should start paying more attention to the more immediate risks.
AI and National Security
It is a truism that technology moves faster than policy or diplomacy, especially when it is driven by intense market competition in the private sector. But when it comes to addressing the potential security risks associated with today's AI, policymakers need to pick up their pace.
ASPEN – Humans are a tool-making species, but can we control the tools we make? When Robert Oppenheimer and other physicists developed the first nuclear fission weapon in the 1940s, they worried that their invention might destroy humanity. Thus far, it has not, but controlling nuclear weapons has been a persistent challenge ever since.
Now, many scientists see artificial intelligence – algorithms and software that enable machines to perform tasks that typically require human intelligence – as an equally transformational tool. Like previous general-purpose technologies, AI has enormous potential for good and evil. In cancer research, it can sort through and summarize more studies in a few minutes than a human team could over the course of months. Likewise, it can reliably predict protein-folding patterns that would take human researchers many years to uncover.
But AI also lowers the costs and the barriers to entry for misfits, terrorists, and other bad actors who might wish to cause harm. As a recent RAND study warned, “the marginal cost to resurrect a dangerous virus similar to smallpox can be as little as $100,000, while developing a complex vaccine can be over $1 billion.”
Moreover, some experts worry that advanced AI will be so much smarter than humans that it will control us, rather than the other way around. Estimates of how long it will take to develop such superintelligent machines – known as artificial general intelligence – vary from a few years to a few decades. But whatever the case, the growing risks from today’s narrow AI already demand greater attention.
For 40 years, the Aspen Strategy Group (consisting of former government officials, academics, businesspeople, and journalists) has met each summer to focus on a major national-security problem. Past sessions have dealt with subjects such as nuclear weapons, cyber-attacks, and the rise of China. This year, we focused on AI’s implications for national security, examining the benefits as well as the risks.
Among the benefits are a greater ability to sort through enormous amounts of intelligence data, strengthen early-warning systems, improve complicated logistical systems, and inspect computer code to improve cybersecurity. But there are also big risks, such as advances in autonomous weapons, accidental errors in programming algorithms, and adversarial AIs that can weaken cybersecurity.
China has been making massive investments in the broader AI arms race, and it also boasts some structural advantages. The three key resources for AI are data to train the models; smart engineers to develop algorithms; and computing power to run them. China has few legal or privacy limits on access to data (though ideology constrains some datasets), and it is well supplied with bright young engineers. The area where it is most behind the United States is in the advanced microchips that produce the computing power for AI.
American export controls limit China’s access to these frontier chips, as well as to the costly Dutch lithography machines that make them. The consensus among the experts in Aspen was that China is a year or two behind the US; but the situation remains volatile. Although Presidents Joe Biden and Xi Jinping agreed to hold bilateral discussions on AI when they met last fall, there was little optimism in Aspen about the prospects for AI arms control.
Autonomous weapons pose a particularly serious threat. After more than a decade of diplomacy at the United Nations, countries have failed to agree on a ban on lethal autonomous weapons. International humanitarian law requires that militaries discriminate between armed combatants and civilians, and the Pentagon has long required that a human be in the decision-making loop before a weapon is fired. But in some contexts, such as defending against incoming missiles, there is no time for human intervention.
Since the context matters, humans must tightly define (in the code) what weapons can and cannot do. In other words, there should be a human “on the loop” rather than “in the loop.” This is not just some speculative question. In the Ukraine war, the Russians jam Ukrainian forces’ signals, compelling the Ukrainians to program their devices for autonomous final decision-making about when to fire.
One of the most frightening dangers of AI is its application to biological warfare or terrorism. When countries agreed to ban biological weapons in 1972, the common belief was that such devices were not useful, owing to the risk of “blowback” on one’s own side. But with synthetic biology, it may be possible to develop a weapon that destroys one group but not another. Or a terrorist with access to a laboratory may simply want to kill as many people as possible, as the Aum Shinrikyo doomsday cult did in Japan in 1995. (While they used sarin, which is non-transmissible, their modern equivalent could use AI to develop a contagious virus.)
In the case of nuclear technology, countries agreed, in 1968, on a non-proliferation treaty that now has 191 members. The International Atomic Energy Agency regularly inspects domestic energy programs to confirm that they are being used solely for peaceful purposes. And despite intense Cold War competition, the leading countries in nuclear technology agreed, in 1978, to practice restraint in the export of the most sensitive facilities and technical knowledge. Such a precedent suggests some paths for AI, though there are obvious differences between the two technologies.
It is a truism that technology moves faster than policy or diplomacy, especially when it is driven by intense market competition in the private sector. If there was one major conclusion of this year’s Aspen Strategy Group meeting, it was that governments need to pick up their pace.
AI Complacency Is Compromising Western Defense
The US and European technology sectors are behaving like a circular firing squad, with individual firms attempting to sell as much to China as possible so that they can gain a lead on their immediate competitors. Unless that changes, the West could fall behind its adversaries in AI-driven warfare.
SILICON VALLEY – Just as the West has been forced into confrontation with Russia and China, military conflicts have revealed major systemic weaknesses in the US and European militaries and their defense-industrial bases.
These problems stem from fundamental technology trends. In Ukraine, expensive manned systems such as tanks, combat aircraft, and warships have proven extremely vulnerable to inexpensive unmanned drones, cruise missiles, and guided missiles. Russia has already lost more than 8,000 armored vehicles, a third of its Black Sea fleet, and many combat aircraft, leading it to move its expensive manned systems farther from combat zones.
Inexpensive mass-produced drones made by China, Russia, Iran, Turkey, and now Ukraine have become both crucial offensive weapons and valuable tools for surveillance, targeting, and guidance. Often based on widely available commercial products, drones are being produced by the million at a cost of just $1,000-$50,000 apiece. Yet no such drones are made in the United States or Western Europe – a major weakness in the West’s industrial base and military posture.
While Russian, Chinese, and Iranian drones are easy to destroy using existing Western systems, the costs are prohibitive – ranging from $100,000 to $3 million per target. This unsustainable ratio is the result of decades of complacency and bureaucratic inefficiency. No legacy Western contractor produces a cost-competitive anti-drone system – though several US and Ukrainian startups are developing them now.
Worse, this situation is merely a prelude to a future of unmanned autonomous weapons. Most current drones are either controlled remotely by a human or simplistically guided by GPS or digital maps. But new AI technologies – based largely on publicly available academic research and commercial products – will soon transform warfare, and possibly terrorism, too.
AI-enabled drones can already operate in highly coordinated swarms, for example by enabling an attacker to surround a target and prevent its escape. Targeting itself is becoming extremely precise – down to the level of identifying an individual face, an item of religious clothing, or a specific vehicle license plate – and drone swarms are increasingly able to navigate through cities, forests, and buildings. One example among many is a 2022 paper published in Science Robotics by Chinese academic researchers showing drone navigation through a forest.
Commercial and military humanoid robots are next. Videos published in January by Stanford University researchers show AI-driven robots performing household tasks, including pan-frying seafood and cleaning up spilled wine. While cooking shrimp is far from operating a sniper rifle or assembling missile components, there is wide agreement that the “ChatGPT moment” in humanoid robotics has arrived.
AI-driven products, both military and commercial, depend on a complex, layered technology stack, at the base of which is semiconductor capital equipment (the high-precision machines that make the chips), followed by semiconductors (such as Nvidia’s AI processors), data centers, AI models and their training data, AI cloud services, hardware product design, manufacturing, and application and systems engineering. The US, Western Europe, Taiwan, and South Korea collectively are still ahead of China (and Russia) in most of these areas, but their lead is narrowing, and China already dominates world markets for mass-produced dual-use hardware such as drones and robots.
The Western response to this challenge has thus far been woefully inadequate. Export controls on AI-enabling technologies are limited to semiconductor capital equipment and processors, and even these have been resisted, loosened, and evaded. While exports of high-end AI processors to China have been banned, access to US cloud services using those same processors remains open, and Nvidia now provides China with AI processors nearly as powerful, but specially tailored to comply with US export controls. There are no export or licensing controls whatsoever on AI research, models, or training data.
Although some US companies, such as Google, have kept their AI models proprietary and restricted Chinese access to their technology, others have done the opposite. While OpenAI prohibits direct Chinese access to its application programming interfaces, those same APIs remain available through Microsoft. Meanwhile, Meta has embraced a fully open-source strategy for its AI efforts, and the venture capital firm Andreessen Horowitz is lobbying to prevent export controls (or indeed any regulatory restrictions) on open-source AI models.
The US and European technology sectors are thus behaving like a circular firing squad, with individual firms attempting to sell as much to China as possible. By trying to gain a lead on its immediate competitors, each firm weakens the long-run position of all the others, and ultimately even its own. If this continues, the foreseeable result is that the US and Western Europe will fall behind China – and even behind Russia, Iran, or decentralized terrorist groups – both in AI-driven warfare and in commercial AI applications.
Many technologists and managers in Silicon Valley and government organizations are aware of this risk, and are very disturbed by it. But despite some significant initiatives (such as the Defense Innovation Unit within the Pentagon), there has been relatively little change in defense-industry behavior or government policy.
This situation is particularly absurd, given the obvious opportunity for a hugely advantageous grand bargain: industry acquiescence to government-enforced export controls in return for government-supported collective bargaining with China over technology licensing, market access, and other commercial benefits. Notwithstanding a few areas of genuine tension, there is a strikingly high degree of alignment between national-security interests and the long-run collective interests of the Western technology sector.
The logical strategy is for the US government and the European Union to serve as bargaining agents on behalf of Western industry when dealing with China. That means acting in concert with industry, while also retaining the power and independence necessary to establish and enforce stringent controls (which the industry should recognize are in its own long-term interest).
Unfortunately, this is not where things are currently headed. Although policymakers and technologists are waking up to the threat, the underlying technology is now moving dramatically faster than policy debates and legislative processes – not to mention the product cycles of the Pentagon and legacy defense contractors. AI development is progressing so blindingly fast that even the US startup system is straining to keep up. That means there is no time to lose.
China Is Exporting Its AI Surveillance State
Contrary to what many Western policymakers and commentators once hoped, recent analyses have added to the evidence that trade does not always foster democracy or liberalize regimes. Instead, China’s greater integration with the developing world appears to be doing precisely the opposite.
TURIN – US President George H.W. Bush once remarked, “No nation on Earth has discovered a way to import the world’s goods and services while stopping foreign ideas at the border.” In an age when democracies dominated the technological frontier, the ideas Bush had in mind were those associated with America’s own model of political economy.
But now that China has become a leading innovator in artificial intelligence, might the same economic integration move countries in the opposite direction? This question is particularly relevant to developing countries, since many are not only institutionally fragile, but also increasingly connected to China via trade, foreign aid, loans, and investments.
While AI has been hailed as the basis for a “fourth industrial revolution,” it is also bringing many new challenges to the fore. AI technologies have the potential to drive economic growth in the coming years, but also to undermine democracies, aid autocrats’ pursuit of social control, and empower “surveillance capitalists” who manipulate our behavior and profit from the data trails we leave online.
Since China has aggressively deployed AI-powered facial recognition to support its own surveillance state, we recently set out to explore the patterns and political consequences of trade in these technologies. After constructing a database for global trade in facial-recognition AI from 2008 to 2021, we found 1,636 deals from 36 exporting countries to 136 importing countries.
From this dataset, we document three developments. First, China has a comparative advantage in facial-recognition AI. It exports the technology to substantially more countries than the United States does (83 versus 57 country links), and it has about 13% more trade deals (238 versus 211). Moreover, its comparative advantage in facial-recognition AI is larger than in other frontier-technology exports, such as radioactive materials, steam turbines, and laser and other beam processes.
While different factors may have contributed to China’s comparative advantage, we know that the Chinese government has made global dominance in AI an explicit developmental and strategic goal, and that the facial-recognition AI industry has benefited from its demand for surveillance technology, often receiving access to large government datasets.
Second, we find that autocracies and weak democracies are more likely to import facial-recognition AI from China. While the US predominantly exports the technology to mature democracies (which account for roughly two-thirds of its links and three-quarters of its deals), China’s exports are split roughly evenly between mature democracies, on the one hand, and autocracies and weak democracies, on the other.
Does China have an autocratic bias, or is it simply exporting more to autocracies and weak democracies across all products? When we compared China’s exports of facial-recognition AI to its exports of other frontier technologies, we found that facial-recognition AI is the only technology for which China displays an autocratic bias. Equally notable, we found no such bias when investigating the US.
One potential explanation for this difference is that autocracies and weak democracies might be turning specifically to China for surveillance technologies. That brings us to our third finding: autocracies and weak democracies are more likely to import facial-recognition AI from China in years when they experience domestic unrest.
The data make clear that weak democracies and autocracies tend to import surveillance AI from China – but not from the US – during years of unrest, rather than pre-emptively or after the fact. Imports of military technology follow a similar pattern. By contrast, we do not find that mature democracies import more facial-recognition AI in response to unrest.
A final question concerns broader institutional changes in these countries. Our analysis shows that imports of Chinese surveillance AI during episodes of domestic unrest are indeed associated with a country’s elections becoming less fair, less peaceful, and less credible overall. And a similar pattern appears to hold with imports of US surveillance AI, though this finding is less precisely estimated.
At the same time, we do not find any association between surveillance AI imports and institutional quality among mature democracies. So, rather than interpreting our findings as the causal impact of AI on institutions, we view imports of surveillance AI and the erosion of domestic institutions in autocracies and weak democracies as the joint outcome of a regime’s pursuit of greater political control.
Interestingly, we also find suggestive evidence that autocracies and weak democracies importing large amounts of Chinese surveillance AI during unrest are less likely to develop into mature democracies than peer countries with low imports of surveillance AI. This suggests that the tactics employed by autocracies during times of unrest – importing surveillance AI, eroding electoral institutions, and importing military technology – may be effective in entrenching non-democratic regimes.
Our research adds to the evidence that trade does not always foster democracy or liberalize regimes. Instead, China’s greater integration with the developing world may do precisely the opposite.
This suggests a need for tighter AI trade regulation, which could be modeled on the regulation of other goods that produce negative externalities. Insofar as autocratically biased AI is trained on data collected for the purpose of political repression, it is similar to goods produced from unethically sourced inputs, such as child labor. And since surveillance AI may have negative downstream externalities, such as lost civil liberties and political rights, it is not unlike pollution.
Like all dual-use technologies, facial-recognition AI has the potential to benefit consumers and firms. But regulations must be carefully designed to ensure that this frontier technology is diffused around the world without facilitating autocratization.
Andrew Kao contributed to this commentary.
We all know the trope: a machine grows so intelligent that its apparent consciousness becomes indistinguishable from our own, and then it surpasses us – and possibly even turns against us. As investment pours into efforts to make such technology – so-called artificial general intelligence (AGI) – a reality, how scared of such scenarios should we be?
According to MIT’s Daron Acemoglu, the focus on “catastrophic risks due to AGI” is excessive and misguided, because it “(unhelpfully) anthropomorphizes AI” and “leads us to focus on the wrong targets.” A more productive discussion would focus on the factors that will determine whether AI is used for good or bad: “who controls [the technology], what their objectives are, and what kind of regulations they are subjected to.”
Joseph S. Nye, Jr., agrees that, whatever might happen with AGI in the future, the “growing risks from today’s narrow AI,” such as autonomous weapons and new forms of biological warfare, “already demand greater attention.” China, he points out, is already betting big on an “AI arms race,” seeking to benefit from “structural advantages” such as the relative lack of “legal or privacy limits on access to data” for training models.
As Oscar-winning filmmaker and tech investor Charles Ferguson explains, China now “dominates world markets for mass-produced dual-use hardware such as drones and robots.” And while the US, Western Europe, Taiwan, and South Korea still lead China (and Russia) in most of the technologies comprising the “stack” that underpins AI-driven products, their “lead is narrowing.” Given the slow pace of policy debates and legislative processes – “not to mention the product cycles of the Pentagon and legacy defense contractors” – they may soon fall behind.
Another area where China is advancing fast is surveillance technology, such as facial-recognition AI. As MIT’s Martin Beraja, Harvard’s David Y. Yang, and the University of Oxford’s Noam Yuchtman found in a recent study, “autocracies and weak democracies” are lining up to buy what China is selling. Worryingly, they are particularly likely to do this in years when they experience domestic unrest, and these countries appear to be “less likely to develop into mature democracies than peer countries with low imports of surveillance AI.” As with other goods that generate negative externalities, “tighter AI trade regulation” is in order.