A.I. Won’t Transform War. It’ll Only Make Venture Capitalists Richer.

Earlier this month, the techno-optimists of the military-industrial complex convened in downtown Washington, D.C., to plot a brave new
world of war. At the AI Expo for National Competitiveness, sponsored by defense
firm Palantir (of Peter Thiel fame), developers and policymakers heralded
artificial intelligence as not only the inescapable future of international
conflict but also the savior of mankind. You see, these A.I.-powered weapons
of war, which are already
in use, will actually save us from global destruction—and anyone who
predicts otherwise is the real warmonger. “The peace activists are war
activists,” said Palantir CEO Alex Karp, according
to The Guardian.
“We are the peace activists.”

“Peace activists” is Karp’s derogatory term for A.I.
skeptics, who, as the technology has proliferated from chatbots to the weapons
industry, have become understandably worried about its potentially apocalyptic consequences.
Foreign policy experts routinely express fears of robot
wars, endless
wars, and unregulated wars where the ethics
and laws of war
don’t apply. “The unconstrained development of autonomous weapons
could lead to wars that expand beyond human control, with fewer protections for
both combatants and civilians,” defense analyst Paul Scharre wrote in Foreign Affairs earlier this year.

These dual perspectives—the faith in A.I. to liberate
humans from the battlefield and the fear of a doomsday scenario—have created a
lucrative environment for Silicon Valley venture capitalists and the defense tech firms they back, Palantir among them.
They’re winning significant defense contracts based not
only on their rosy promises of a war without human casualties but also on the
government’s anxiety about falling behind in the A.I. race among the world’s
other major military powers.

The Department of Defense wants to support a new
generation of venture-backed start-ups that will purportedly give America a
substantial edge over China in an era of “great-power
competition.” The Pentagon
believes that the ingenuity of Silicon Valley will give the United States the
technological prowess to deter Beijing from taking aggressive action in Taiwan—or
the South China Sea—for fear it will be unable to win a potential confrontation
with the U.S.

This notion was promoted most forcefully in a speech Deputy Defense Secretary Kathleen Hicks gave last year before
the arms industry’s largest trade group, the National Defense Industrial
Association. She took the occasion to announce the Replicator initiative, an
ambitious plan to produce a new generation of weapons driven by artificial
intelligence and other emerging technologies. The goal is to develop systems
that can be produced relatively cheaply and in large quantities, and
replaced in short order if large numbers are lost in battle.

There is a seductive logic to this new approach.
America’s military arsenal is composed mostly of large weapons platforms like
aircraft carriers and F-35 combat aircraft. These systems are expensive, hard
to maintain, and difficult to replace without many months, or even years, of
work. Major platforms like aircraft carriers are also increasingly vulnerable to
next-generation missile systems. Given this reality, the idea of a more cost-effective, dispersed, and replicable set of weapons systems makes sense.

But the Replicator initiative is unlikely to yield such
systems. It is just the latest example of how faith in technology can generate
false hope in the ability of the Pentagon and the arms industry to produce
systems that can actually transform the face of warfare or confer a decisive
advantage on the nation that develops new, “revolutionary” systems first.

Technological optimism is nothing new in U.S. defense
planning. From the nuclear weapons of the 1950s to the “electronic
battlefield” in Vietnam, to Ronald Reagan’s dream of an
impenetrable Star Wars missile defense shield, to the networked warfare
developed as part of the so-called “revolution
in military affairs” in the 1990s, U.S. military history is littered with
tales of miracle weapons that either didn’t perform as advertised or were
ill-suited to the wars our military was actually called upon to fight.

There are strong reasons to think that emerging military A.I.
technologies could not only fail to deliver superior capabilities but actually
make the world a more dangerous place.

On the performance front, a military system built around
complex software will be vulnerable to malfunctions or cyberattacks. As
longtime military analyst Michael Klare has noted, many
experts fear that “AI-enabled systems may fail in unpredictable ways, causing
unintended human slaughter or uncontrolled escalation.” The poor performance of
small drones built by U.S. tech start-ups in the war in Ukraine could be a
cautionary tale against setting unrealistic expectations for the next round of
purported miracle systems. An investigation by The Wall Street Journal found that “most small drones from U.S.
startups have failed to perform in combat, dashing companies’ hopes that a
badge of being battle-tested would bring the startups sales and attention.”

The second risk is that these technologies will
dramatically compress the “kill
chain,” the time from the identification of a target to its
destruction. That compression will put enormous pressure on the human operators of these new weapons
and could easily lead to the development of robotic systems that operate
without human intervention. While the Pentagon has so far ruled out such an
approach, the realities of operating these systems in combat may override that
restriction.

All of the above suggests that we should proceed with
caution before rushing to center the U.S. arsenal on A.I.-driven systems and
other emerging technologies. But there is money to be made in going full speed
ahead, and that could undermine the U.S. government’s ability to take a
deliberate approach to fielding next-generation systems.

As we lay out in a new
issue brief for the Quincy Institute for Responsible Statecraft, a handful
of leaders in the venture capital community, including Founders Fund
and Andreessen Horowitz, have led the charge to pour billions of investment
dollars into emerging tech start-ups. How large these investments are is
not entirely clear, but figures cited
have ranged from $6 billion to over $100 billion in the past few years alone.
And that’s before Saudi Arabia concludes a proposed deal with Andreessen
Horowitz to invest $40 billion in the A.I. sector, a move that should be carefully
scrutinized by Congress and executive branch regulators.

The new V.C.-funded emerging tech sector is urging the
Pentagon to develop and deploy its products rapidly, pressing for
more funding and, perhaps more important, for less rigorous oversight of
the development of military uses for A.I. and other new technologies. And,
as The New York Times has reported,
Silicon Valley defense producers and funders are adopting traditional lobbying
methods to get their way, including the hiring of large numbers of former
military officers and senior government officials to go to bat for them in
Washington. The danger is that the growing power of military-oriented V.C. firms
and the companies they back will accelerate the integration of emerging
technologies into the U.S. military without adequate safeguards, to the
detriment of our safety and security.

If we want to head off a profit-driven rush toward a
dangerous new technological arms race, it will be up to interested members of
Congress, working with the Biden administration, to craft concrete proposals
and regulations to manage the role of private money in the development of
emerging military technologies, and to ensure that the growing political
clout of these new arms profiteers doesn’t distort policy outcomes. For
starters, this should mean the revival of the Office of Technology Assessment,
which provided crucial advice to lawmakers on the budgetary and security implications
of new inventions until it was eliminated as part of the “Gingrich revolution”
of the 1990s.

More transparency about who is investing in this sector
should also be part of a new regulatory framework. And it is essential that the
revolving door between government and the arms industry be more strictly
regulated, through measures like longer “cooling-off” periods before government
officials are allowed to lobby for weapons firms, and the elimination of
loopholes that allow too many ex-government officials to avoid revolving-door
strictures by adopting misleading titles like “strategic adviser.”

The goal of these efforts should be twofold: preventing a
new wave of corruption and shoddy work enabled by a headlong rush to deploy new
systems without adequate safeguards; and carefully assessing the strategic
benefits and dangers of militarized A.I. and other emerging tech, with an eye
toward limiting or prohibiting the deployment of certain technologies if the
risks outweigh any potential gains.

There is no doubt that A.I. is a prominent
feature of modern warfare, one that poses real dangers to
global security. But it is unlikely that A.I. will transform the landscape of
war anytime soon. Terminator-style wars are not on the horizon. In the
meantime, venture capitalists will continue to seek profits from untested
technology that could have dire consequences for humanity.

Michael Brenes is a non-resident fellow at the Quincy Institute and interim director of the Brady-Johnson Program in Grand Strategy at Yale University.

William D. Hartung is a senior research fellow at the Quincy Institute.