Silicon Valley's plan to conquer the world through AI
By Peter Byrne
In 2003, Silicon Valley venture capitalist Peter Andreas Thiel launched an espionage startup with a $30 million investment. The Central Intelligence Agency’s investment arm, In-Q-Tel, contributed $2 million. Thiel named their privately owned firm “Palantir” after the Seeing Stone used by Dark Lord Sauron to locate his enemies in Lord of the Rings.
Pumped by years of secretive contracts with the CIA, Department of Homeland Security’s ICE, National Security Agency, Federal Bureau of Investigation, Air Force, Special Operations Command, and the Los Angeles Police Department, Palantir Technologies went public in 2020 with a $16 billion market valuation.
Thiel, 58, a self-described skeptic of democracy, continues to rule the globally expanding, military-intelligence-focused enterprise.
As a public company, Palantir is compelled to disclose more of its financial and operational activities than when it was held privately by Thiel and his partners. Although much of Palantir’s governmental work is classified as secret, the Seeing Stone of public records yields insight into its operations and socioeconomic weaknesses.
Despite earning billions of dollars since its inception, Palantir has regularly declared net losses on its books of account, resulting in substantial tax benefits, according to annual reports filed with the Securities and Exchange Commission. In 2024, however, Palantir posted a profit of $462 million on revenue of $2.8 billion, with 55 percent of that revenue derived from United States and foreign government contracts. The firm recorded a cash hoard of $5.2 billion. With 3,900 employees worldwide, Palantir now registers an exceptionally high market value of about $200 billion.
By comparison, military manufacturing behemoth Lockheed Martin, with 121,000 employees, is valued at $111 billion, slightly more than half the value of Palantir.
Nonetheless, despite its profitability and considerable cash savings, Palantir has paid zero dollars in federal income taxes since 2020—nada.

Screenshot from Palantir SEC Annual Report, 2024
In possible violation of United States antitrust laws, Palantir is reportedly forming a military contract bidding consortium with a Thiel-financed AI weapons firm named Anduril, or “Flame of the West,” after the sword wielded by the mortal king Aragorn in Lord of the Rings. Privately held and financed in part by Thiel, Anduril Industries also benefits from large tax subsidies and federal small business contracting privileges despite a valuation of roughly $28 billion. Other members of the Palantir-Anduril contracting consortium reportedly include OpenAI, Scale AI, SpaceX, and the Marc Andreessen-funded autonomous warship builder Saronic.
In December, Palantir announced the formation of Warp Speed, another “cohort of companies” including Anduril Industries, L3Harris, Panasonic Energy of North America, and Shield AI. Thiel and his associates are spinning a multibillion-dollar web of military AI projects centered around Palantir and Anduril, creating a consortium of businesses that might otherwise be competing for contracts. Thiel’s companies have declared an intent to present a united front when bidding for US and international military and intelligence agency contracts. The birthing process of these corporate alliances resembles the early days of the nineteenth-century railroad and oil trust monopolies that were eventually broken apart by Congress and President Theodore Roosevelt.
Economies of industrial scale and lobbying power enhanced by corporate combinations featuring Thiel’s companies are engineered to absorb temporary financial losses in order to ensure future monopolies on marketing AI weaponry systems. The members of developing military AI consortia can collectively outfinance, outcompete, and acquire smaller firms. Due to the software-centric nature of AI-enabled warfare, they can run technological circles around the five “Prime” weapons corporations that dominate the continually consolidating military-industrial complex.
The Primes—Lockheed Martin, Raytheon/RTX, Northrop Grumman, General Dynamics, and Boeing—are experiencing institutional difficulties adjusting the salability of their expensive fleets of “legacy” hardware to the political and cultural speed with which military AI machinery is being successfully hyped, by “disruptive” and “innovative” “entrepreneurs” and “journalists,” as the inescapable future of warfare.
Peering under the hood of the emergent AI-driven war machine, it appears that self-described “patriotic” Silicon Valley military investment firms are not, as they claim, attempting to inject efficiency into the historically unauditable contracting practices of the military-intelligence-corporate-academic entity waging America’s wars. Rather, the aptly named “Surveillance Valley” is moving toward replacing the Primes’ lumbering formations of battleships, bombers, fighter jets, tanks, rockets, and submarines with cheaper fleets of autonomously operating combat machines.
This primarily profit-seeking practice is bound to result in military and socioeconomic catastrophe. As AI Now researcher Meredith Whittaker concluded in “The Steep Cost of Capture”: “This is a battle of power, not simply a contest of ideas, and being right without the strategy and solidarity to defend our position will not protect us.”
Hold on to that thought.
As with the Primes, which have more or less dictated Pentagon budgets and foreign policy since World War II, the rising class of military AI venture capitalists is not interested in fostering international economic cooperation, nor nuclear and autonomous weapon disarmament treaties—quite the opposite. Like the Primes, they require hot wars to move product; they loudly agitate for war and seek enemies.
The globalized AI warfighting enterprise includes thousands of large and small companies internationally, fueled by hundreds of billions of dollars, euros, yen, and yuan in financial bets. The United States, Russia, China, United Kingdom and EU nations, India, Australia, South and North Korea, Israel, United Arab Emirates, Qatar, and Saudi Arabia are all striving to construct their own military AI capacities—using software, hardware, strategic minerals, industrial parts, and labor drawn from throughout the globe to make and program advanced chips and autonomous war machines. AI-enabled weapons systems are being battle-tested in Ukraine, Occupied Palestine, Lebanon, Syria, and throughout the world, and while they can cause great harm, they do not work as advertised. In fact, the AIs have trouble telling what time it is.

In the land of Mordor, in the fires of Mount Doom, the Dark Lord Sauron forged, in secret, a Master Ring to control all others. And into this Ring he poured his cruelty, his malice and his will to dominate all life. One Ring to rule them all. —J.R.R. Tolkien, The Lord of the Rings (1954)
Gemini AI image prompted by Peter Byrne
The Oligarchic AI Media Echo Chamber and China
In February, Palantir CEO and co-founder Alexander Caedmon Karp, 58, published The Technological Republic, hagiographically reviewed by TechBro billionaire Jeff Bezos’s Washington Post. The Post’s Bob Ivry applauded Karp’s framing of China as the eternal enemy of all that is good in the world and defended the multi-billionaire’s crusade to “refocus [Silicon Valley] engineering genius on helping America to defend Western values by developing weapons to kill our enemies before our enemies develop weapons to kill us.”
In Karp’s brave new AI world, “The atomic age is over[taken by] artificial intelligence, and its fathomless array of potential military uses.” The Post review fails to mention that Palantir was called out in 2020 by Amnesty International for lethally harassing migrants worldwide. Nor did the Post report that Palantir products put humans in the crosshairs for annihilation by explosive weapons connected to satellite-relayed, AI-guided “kill chains”; that the persons targeted by Palantir’s lethal technologies are often children, non-combatants, and those fighting for liberation from oppression; or that China’s military repositioning mirrors American aggression.
The Post also neglects to mention that Amazon Web Services, which Bezos owns, is partnered with Karp and Thiel in a multibillion-dollar contract using Amazon’s super-polluting data centers (euphemized as a “cloud”) to connect targeting coordinates logged by Palantir’s data mining technology with “kinetic” weapons that can be fired at the will of AI systems embedded in autonomous killing machines designed by Anduril.
In a self-promoting video for his book, Karp goes full Corleone with a touch of Third Reich: “The way you dominate the future is you dominate the present. I want America to be American in a thousand years. Americans are the most loving, God-fearing, fair, least discriminatory people on the planet … If you’re waking up and thinking about harming American citizens … or if you are a foreign power sending fentanyl to poison our people, something really bad is going to happen to you and your friends and your bank account and your mistress.”
Last year, Time published a glossy spread on Karp and Palantir’s role in running a lethal network of weaponized drones and counter-drone systems deployed in Ukraine against Russian forces. Breathlessly amplifying Karp’s tendentious claim to have revolutionized the “digital kill chain” with Palantir’s AI arsenal, reporter Vera Bergengruen observed, “Ukraine and its private sector allies say they are playing a longer game: creating a war lab for the future. … Says Karp: ‘There are things that we can do on the battlefield that we could not do in a domestic context.’”
Time’s adoration of Karp concludes with a “quote” supporting the testing of AI weapons in Ukraine from a spokesperson for an AI weapons industry trade and lobby organization called RAIN, which Bergengruen describes as “a research firm that specializes in defense AI.”
Time, of course, is owned by Salesforce’s Marc Benioff, who has invested fortunes in more than 150 AI-focused startups, including SandboxAQ, a military AI firm also financed by the CIA that is developing quantum weapons.
Typical of the public relations of military AI companies owned by Silicon Valley oligarchs, SandboxAQ promotes itself as “advised” by a celebrated array of “former” intelligence and military officials and academics, including former Principal Deputy Director of National Intelligence Susan Gordon and Stanford professor Fei-Fei Li, who is vocally opposed to AI regulation, owns large stakes in AI companies, and was a leader of Google’s controversial Project Maven, a massive military AI surveillance effort.
SandboxAQ, it turns out, is chaired by Google’s former CEO, Eric Schmidt, who is a Google-stock empowered billionaire urging the United States to attack and destroy China’s industrial capabilities. (Toward that end, Google, Microsoft, Amazon, and Oracle jointly hold a $9 billion contract for Joint Warfighting Cloud Capability to facilitate warring on China and other constructed enemies—more on which later.)
Yet another Silicon Valley executive crowing for war with China is Shyam Sankar, a Palantir employee who is slated to captain research and technology at the Pentagon under the Trump-Vance-Musk governmental apparatus.
Amplifying the “AI war is good for American business” mantra is the influential weapons industry-funded think tank, the Center for a New American Security, which is calling for reinstituting the military draft in order to better invade and occupy China, since, apparently, autonomous weaponry cannot do everything.
In fact, there is a lot that AI weapons cannot do.
AI JADC2 GIGO!
The twenty-first-century dream of US military and intelligence agencies is to create an AI-supervised Joint All-Domain Command and Control (JADC2) system for operating world wars. Tens of thousands of orbiting satellites owned and operated by Musk’s SpaceX and Amazon’s Project Kuiper, along with the global positioning hardware manufactured by L3Harris, are slated to perform as a planet-encircling nervous system informing a warfighting JADC2. Operating at light speed during simultaneously waged air, land, sea, and space battles worldwide, the JADC2 network is intended to be supervised by performatively sclerotic Large Language Models, i.e., the Anthropic-Palantir-Amazon Web Services combat version of Claude and ultraviolent versions of the OpenAI-Microsoft Chats.
The military AI industry envisions the JADC2 as capable of sorting and categorizing endlessly streaming exabytes of optical and infrared images, internet, telephony, and weapons systems intercepts. A natural language user interface resembling a Chat will allow human or AI commanders to order strikes on enemy installations and personnel, and to authorize autonomous weapons to make lethal decisions during combat. Human soldiers wearing virtual reality goggles inside the battle spaces will fire weapons as directed by bots.
A more realistic appraisal of the JADC2 acknowledges the intractable technical problems with implementing Silicon Valley’s vision of AI-controlled warfare. To start, the electromagnetic spectrum is easy to jam. Jamming—broadcasting radio noise that renders enemy control systems incommunicado—is far easier than making sense of the jammed electronic noise of battle, and combatants will be jamming each other, creating an overwhelming electronic cacophony.
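That asymmetry can be seen in a toy simulation, offered here as a hypothetical sketch rather than a model of any real radio system: the jammer’s entire job is to add noise, while the receiver must recover meaning through it, and the receiver’s error rate climbs as jamming power grows.

```python
import random

random.seed(0)

def transmit(bits, jam_power):
    """Send bits as +1/-1 symbols; the jammer adds zero-mean noise
    whose amplitude scales with jam_power."""
    received = []
    for b in bits:
        symbol = 1.0 if b else -1.0
        noise = random.gauss(0.0, jam_power)  # jamming is just cheap noise
        received.append(symbol + noise)
    return received

def decode(samples):
    """The receiver guesses the sign of each noisy sample."""
    return [s > 0 for s in samples]

def error_rate(jam_power, n=10_000):
    bits = [random.random() < 0.5 for _ in range(n)]
    decoded = decode(transmit(bits, jam_power))
    return sum(b != d for b, d in zip(bits, decoded)) / n

# One line of noise for the jammer; steadily corrupted meaning for the receiver.
quiet = error_rate(0.2)   # weak jamming: almost every bit decoded correctly
jammed = error_rate(2.0)  # strong jamming: roughly a third of bits are wrong
```

Nothing here resembles a battlefield network; the point is only that degrading a channel takes far less sophistication than reading through the degradation.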
The generically proposed solution to dealing with the inevitable loss of military communication channels during military campaigns is to deploy fleets of autonomous war satellites, airplanes, ships, submarines, and ground vehicles armed with directed energy weapons capable of using embedded AI decision-making faculties to locate and destroy targets without requiring either human decision makers or uninterrupted JADC2 services.
That decentralization strategy is also not realistic. The so-called “wicked problem” with unleashing swarms of autonomous battle weapons, such as those being symbiotically developed by Anduril, Palantir, and their partners, boils down to the inherently problematic nature of the AI training datasets used for programming a (data generalizing) JADC2 or a (data particularizing) weapon. Despite Silicon Valley’s mass public relations hypnosis concerning artificial intelligence, machine “learning” is not intelligent, nor is it genuinely logical, nor is it validly comparable to biological brains, as an important article in Science explained in February.
Rather, the sociologically biased and epistemologically fragile neural nets are glorified statistical search engines, reports Fordham Law Review. And, Microsoft researchers report that tests on generative AI language and image models reveal the core technology as constitutionally insecure—easy to hack and control.
Machine learning requires access to vast amounts of raw information in order to develop algorithmically predictive capacities, however uncertain. The “foundation” models created by OpenAI, Anthropic, xAI, Alibaba, Mistral AI, DeepSeek, and others are largely trained on internet pages, Wikipedia, Reddit, Common Crawl, Twitter, Facebook, Instagram, corporate news, and, increasingly, previously generated, error-strewn, AI constructions boosted into the Web.
Large language model technology can power the creation of “narrower” models, which can be trained on relatively small, discrete, particularized datasets (such as thousands of mammograms, or Target’s consumer data) for predicting possibly viable correlations. But the large-scale foundation models are not trained on medical, government, corporate, academic, military, industrial, and scientific knowledge bases to which their web-scrapers are denied entry. The models are unavoidably trained on the opinions, memes, misrepresentations, political and cultural spins, and virtual anti-realities that permeate the internet. Swimming in an ocean of digital garbage, the Chats present the results of both factual and faulty computations anthropomorphically, awing consumers as if rocks are speaking to them.
The “answers” regurgitated from training materials ingested by the Chats are not based on the logics of “good old fashioned” computation, such as “if A, then B, and not C.” Artificially intelligent computation is currently based upon unobservable, probabilistic crunching of incomplete, often error-ridden, culturally and politically biased datasets: Hence the Chats’ continual creation of spurious correlations or “hallucinations.”
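The contrast can be made concrete with a deliberately tiny sketch, using invented data: a rule-based function whose every step can be audited, next to a bigram “model” that answers with whatever happened to co-occur most often in its training text.

```python
from collections import Counter, defaultdict

# Classical, rule-based computation: explicit, inspectable logic.
def classical(a):
    # "if A, then B, and not C" -- every branch can be read and audited
    if a:
        return {"B": True, "C": False}
    return {"B": False, "C": True}

# Statistical "learning": a bigram model that only counts co-occurrences.
# The corpus is invented; note it happens to say "hostile" more often.
corpus = ("the drone is hostile . the drone is friendly . "
          "the drone is hostile .").split()
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    # Returns whatever followed `word` most often in training:
    # a frequency, not a fact, so a skewed corpus skews the answer.
    return follows[word].most_common(1)[0][0]
```

Asked what follows “is,” the model answers “hostile,” purely because that phrasing occurred twice and “friendly” only once: a correlation mistaken for a conclusion, which is the mechanism behind hallucination writ small.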
The seemingly inescapable reality of “garbage in, garbage out” or GIGO means that reliance on machine learning for combat advice or decisive action is not likely to win wars. Such machine reliance can obviously lose wars. Nevertheless, the military AI industry proceeds apace because venture capital as a modality only cares if the market believes in the viability of AI warfare long enough for investors to profit from taking a startup public—or to secure multi-billion dollar government contracts.
Military AI machine trainers are attempting to incorporate oceans of information and imagery vacuumed by devices constantly surveilling battles in Ukraine and the Middle East. But before operationally useful “answers” to commander queries can be computed from the phenomenal amount of real-time information scooped up from the wild, the incoming streams of photons and electrons must be correctly labeled by armies of machines and humans and tested and transformed into reliable data. Military neural nets can also be easily “poisoned” by the deliberate and undiscoverable insertion of tiny bits of erroneous data or a deformed pixel into their training corpora, thereby sabotaging so-called “reasoning” capacities based on machine learning.
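The poisoning problem, too, fits in a few lines. The sketch below is a toy nearest-neighbor “threat classifier” with invented features and labels, modeling no real system: flipping a single training label, which a human auditor would likely never notice, flips the model’s verdict on a query.

```python
# Toy 1-nearest-neighbour classifier: training data is a list of
# ((feature1, feature2), label) pairs. All numbers are invented.
def nearest_label(training, query):
    dist = lambda pt: (pt[0][0] - query[0]) ** 2 + (pt[0][1] - query[1]) ** 2
    return min(training, key=dist)[1]

clean = [
    ((1.0, 1.0), "civilian"),
    ((1.2, 0.9), "civilian"),
    ((8.0, 7.5), "combatant"),
    ((8.3, 8.1), "combatant"),
]
query = (1.1, 1.0)  # sits squarely among the civilian examples

# Poisoning: an attacker flips one label. The dataset still looks
# plausible at a glance, but the model's output on the query flips too.
poisoned = list(clean)
poisoned[0] = ((1.0, 1.0), "combatant")
```

With the clean data the query is labeled “civilian”; after one corrupted entry it becomes “combatant.” Scaled up to exabyte training corpora, finding the corrupted entry is the undiscoverable part.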
In short, there cannot be a reliable database containing enough factual information to encompass the near-infinite number of possibilities that emerge during warfighting, which is what it would take to train a reliable JADC2 and battalions of autonomous weapons to act non-chaotically.
Military AI researchers are also attempting to ameliorate the problems of hallucination and GIGO by eschewing the use of real-world data to train battle Chat interfaces. Instead, they are training large language and image models on simulated battlespace scenarios using the Unreal Engine software that powers many online gaming platforms. And the problem with that approach is that simulated battle scenarios are, indeed, unreal. Online gaming is rule-bound play; it is not actual combat in which rules are meant to be broken; war is not a game.
A related issue with making strategic or tactical military judgments based on searching historically-bound datasets for coherent correlations relevant to the present is that, during the urgency of battle, too much information is emitted by tens of thousands of sensors for time-pressured centralized processing. And, inevitably, some of the sensors will be captured and “spoofed” by the enemy, GIGO.
Another fundamental flaw in the concept of creating a JADC2 uber-warfighting machine is that the enemy—who, whether human or machine, can emerge unexpectedly and powerfully—also has agency. Tactical battles can chaotically escalate into strategic wars in nanoseconds as AI weapons-wielding combatants simultaneously jam each other’s signals while trying to separate meaningful intelligence from noise and making life-and-death decisions while being targeted. Hallucinations and predictive errors are potentially cataclysmic because the JADC2 is designed to interface with nuclear weapon communication, command, and control systems. In the sum of all fears, stochastically parroting machine learning devices are not ontologically formulated to make viable sense of a commander’s life and death queries when their knowledge bases are cognitively deficient, congenitally erroneous, inevitably polluted, and easily confusable, GIGO.
The “fog of war” is a real phenomenon, and it cannot be dispelled by artificial intelligence as we know it. In war, the unexpected always makes the difference between victory and defeat. Military AI cannot be programmed to predict the unexpected. Neither a JADC2 nor a drone can be trained to deal with the unknown.
Nonetheless, JADC2 programs are being funded by the Pentagon with multiple hundred-million-dollar checks made out to Palantir, Anduril, OpenAI, Microsoft, Oracle, and Amazon. The US Air Force, Army, and Navy each have their own versions of the JADC2; each major armed service and intelligence agency is contracting with various combinations of corporations to build out a series of proprietary warfighting AI systems that are not strategically designed to interface.
What could go wrong?
AI Antitrust Solutions?
During the late nineteenth century, oil and railroad oligarch John D. Rockefeller of the Standard Oil Company practically invented the practice of monopoly capital. Adumbrating the future structure of Amazon.com, Rockefeller and his fellow robber barons terminally underpriced competitors, purchased the assets of the broken competition on the cheap, and created a dollar-chomping “octopus” of multi-industrial companies while paying politicians to look the other way.
However, in the face of outrageously predatory pricing, family-killing recessions and financial panics, and the exposure of hidden business connections and foul manufacturing practices by muckraking journalists, there arose a public and legislative backlash and laws to curb unbridled corporate greed. Since the 1890s, federal and state antitrust laws have progressively criminalized businesses that conspired to fix prices, rig bids, bankrupt competitors, monopolize markets, and purchase politicians.
In October 2020, a Congressional antitrust subcommittee released a damning report on its yearlong investigation of the monopoly practices of consumer-manipulating Amazon, Apple, Facebook, and Google. The 400-pager observed, “By controlling access to markets, these giants can pick winners and losers throughout our economy. They not only wield tremendous power, but they also abuse it by charging exorbitant fees, imposing oppressive contract terms, and extracting valuable data from the people and businesses that rely on them.”
The Sherman Antitrust Act and the Clayton Act ban mergers, partnerships, and practices that restrict free trade. Case-tested laws prohibit persons or corporate entities from making business decisions at otherwise competitive enterprises that they also own, and from using market dominance in one sector to establish dominance in another. Despite periods of politically imposed regulatory inactivity, federal and state antitrust laws have often been wielded to (somewhat) weaken the market power of corporations such as Standard Oil, American Tobacco, Kodak, American Telephone and Telegraph, and more recently, JetBlue (airlines), Visa (credit cards), CVS (pharmacies), and Live Nation (concert ticket sales).
Under the recent leadership of antitrust attorney Lina Khan, the Federal Trade Commission was (and may still be) moving toward reining in consumer-facing Silicon Valley monopolies, including Apple, Meta, and Alphabet-Google. As Khan noted in her analysis of Amazon’s monopoly practices, the law urges governments and courts to stop emerging monopolies before they get too big to regulate. Palantir and Anduril are not yet the modern equivalent of the much-maligned Standard Oil Company, which was broken into (still immensely powerful) fossil fuel corporations. But the spectacle of software-based companies uniting as a contracting consortium could invite charges of violating statutory prohibitions against business combinations that harm competitors and unduly enhance market power.
Last year, ProPublica reported that Palantir’s major business partner, Microsoft, appears to be violating antitrust laws by distributing Azure cloud software to federal agencies gratis until the competition is vanquished and Microsoft can inflate prices because it has gained a monopoly. Notably, due to its interlocking business divisions, Microsoft was targeted in the late 1990s for a corporate breakup. In court, twenty state attorneys general and the Department of Justice prevailed in an antitrust lawsuit to curb Microsoft’s market monopoly. Upon appeal, Microsoft was partly disciplined but escaped fragmentation. And today, the ineluctably monopolistic Microsoft octopus is spreading AI weaponry business tentacles, intertwining with similar octopi, some of whom it fancies as edible.
Mitch Stoltz, the IP Litigation Director at the Electronic Frontier Foundation, told Military AI Watch, “Microsoft broke the law. The monitoring that resulted from the lawsuit probably stopped Microsoft from strangling Google and Facebook in the crib. Today, the genuine concern about an emergent monopoly in the AI-based weapons market will certainly benefit from early scrutiny by the federal government or state attorneys general.” And, monopolies can have competitors.
Seeing Inside Palantir
Palantir, too, is growing tentacles. The cyber espionage firm has positioned itself to gatekeep and micromanage Pentagon procurement practices through its FedStart contracting service on Microsoft’s Azure cloud. If a software company pays Palantir as much as $1 million annually, Palantir promises to accelerate that company’s passage through military procurement. Palantir sells clients its professed ability to bypass years-long waits to obtain contractor “accreditation” and security clearances. It promises to expedite contract awards and payments. The implication is that a small firm will be trapped in DOD procurement limbo—unironically called the “Valley of Death” by military AI contractors—unless it hires Palantir to be the Godfather.
Meanwhile, the owners of common stock in Palantir Technologies do not receive dividends, despite the corporation’s profitability, and they have no say in management. Through an intricately structured share system, the profits and control of Palantir are primarily vested in Thiel, Karp, and cofounder Stephen Andrew Cohen, who appear to treat the public company as their private fiefdom.
Palantir paid $7.5 million last year for operating the self-styled “progressive philosopher” Karp’s executive airplane. And while the loquacious CEO is Palantir’s public face, it is Thiel who calls the shots. And he is not interested in paying dividends to public shareholders or taxes to governments that provide the bulk of Palantir’s revenue.
To reiterate, Palantir’s 2024 annual report reveals that the firm has transformed decades of declared operating losses into substantial tax benefits. Palantir has paid no federal taxes since it went public, even though it recorded a historic profit in 2024. The company is banking half a billion dollars in future tax deductions. And it currently holds $5.2 billion in “net positive cash flow” savings, which its managers are grabbing.
Concurrent with the election of Donald Trump and Thiel’s apprentice, James David Vance, the price of Palantir’s stock tripled. Instead of paying dividends to its ordinary shareholders, Palantir’s board approved using $1 billion in cash to repurchase a special class of shares reserved for its founders, top executives, and board members. And, in February, they started cashing in as the pace of Congressionally-funded AI weaponry contracts for Palantir and Anduril increased after the Republican ascendancy.
At the same time, prominent members of House of Representatives committees overseeing armed forces and border control management bought substantial amounts of Palantir stock, according to Capitol Trades. Substantial Palantir buys were made by Julie Johnson (D-TX), who sits on Homeland Security’s Border Security and Enforcement Subcommittee; Marjorie Taylor Greene (R-GA) who sits on Homeland Security’s Terrorism and Intelligence Subcommittee; Gil Cisneros (D-CA) who sits on the Armed Services Intelligence and Special Operations Subcommittee; and Ro Khanna (D-CA), the ranking member of the Armed Services Committee. Supposedly “progressive” politically, Khanna, who represents the Silicon Valley region, has long held a massive investment portfolio of AI weapons-related corporations, including Microsoft, Amazon, and Nvidia. When it comes to supporting military AI and wars, there is little or no political gap between Republican and Democratic officials.
Along Came Anduril
The AI-enabled robotic weapons company Anduril was founded in 2017 by Palmer Luckey, 33, who, alongside Thiel, was one of the first Silicon Valley billionaires to fund Trump’s political campaigns. Led by Thiel’s Founders Fund, Anduril has raised $3.8 billion in private placements from AI weapons-focused investors, including Andreessen and Thrive Capital, which is operated by Joshua Kushner, the brother of Trump’s son-in-law Jared Kushner.
Private investors value Anduril at about $28 billion. Vance, who cut his venture capitalist teeth at Thiel’s Mithril Capital (named after a fictional white metal mined in Tolkien’s Middle Earth), has disclosed a stake in Anduril. The US vice president operates a Thiel-seeded venture capital firm with yet another Lord of the Rings branding, Narya Capital. In Tolkien talk, “Narya” means “Ring of Fire.”
Founders Fund proudly advertises its militarist worldview and multibillion-dollar investments in military contractors, including SpaceX, Palantir, and Anduril. Founders Fund partner Trae Stephens, a former intelligence agency and Palantir employee, chairs the board of Anduril, which, as a private firm, does not publicly release most of its financial data, although the Pentagon mandatorily discloses synopses of its military contracts.
Anduril was jump-started with a US Immigration and Customs Enforcement (ICE) contract to implant hundreds of autonomous surveillance towers along the US–Mexico border. Operated by Anduril’s trademark Lattice software, the devices alert border patrols to arrest migrating families and incarcerate them indefinitely in privately owned prisons without due process and under horrible conditions. ICE is also the foundation of Palantir’s surveillance empire, which shares technology with Anduril.
In January, the Department of Defense inflated Anduril’s capital investment pool with a $14.3 million gift or “grant,” adding to the array of multi-billion dollar, open-ended weapon systems contracts that the Department has awarded Anduril, Palantir, Microsoft, Oracle, Amazon Web Services, and their various partnerships, often without the budgetary benefit of competitive bidding. Remarkably, the Pentagon frequently classifies Anduril as a small business, which allows it to take advantage of sole sourcing and other contracting benefits, as we shall explain subsequently.
Like Palantir, Anduril’s bottom line is pumped by taxpayer subsidies. The state of Ohio has awarded Anduril $450 million in tax breaks for building its Arsenal-1 autonomous weapons manufacturing plant adjacent to a Columbus, Ohio, airport. In late February, a crowd led by Veterans for Peace and Ohio Nuclear Free Networks demonstrated at the Arsenal-1 factory site, protesting the use of state land and tax subsidies to finance the AI war machine. If, as it appears, Anduril is following Palantir’s model of deficit spending financed by ever-higher investor funding rounds, it also escapes paying federal income taxes.
Devilish Consortium Details
Patents contain useful information that often does not make it into mainstream news stories. The US Patent Office records nearly 3,000 (mostly software) patents for Palantir and ninety-seven for Anduril. Many of these patents concern AI-enabled weapons and surveillance devices. According to an Anduril patent filed with the European Patent Office in November 2024 to facilitate “multi-target detection,” Anduril’s and Palantir’s “newly formed consortium is set to tackle two fundamental barriers to AI adoption in national security.”
The first “barrier” is the unmet need to capture exabytes of live surveillance streams from global sensor networks in order to facilitate “future AI training and algorithm development.” Anduril is working to ensure that its Lattice software and Menace hardware products can link with the global communications and machine-learning training systems envisioned by the Joint All-Domain Command and Control (JADC2) paradigm, however fundamentally unworkable the imagined task of globalized battle management may be.
“The second challenge involves processing and scaling the retained data into actionable AI capabilities,” the patent document notes. Toward achieving that quixotic goal, Anduril intends to securely interface with Palantir’s cloud-based AI Platform to create “end-to-end infrastructures to transform raw data into operational AI capabilities.” In other words, the Anduril and Palantir war technologies are operationally intertwined, as are their investor and Pentagon funding structures and technological hype.
Regarding the inclusion of the increasingly militaristic so-called non-profit OpenAI into the AI weapons contracting apparatus, the patent approval document explains, “the Anduril and OpenAI strategic partnership will focus on improving counter-unmanned aircraft systems [as] leading edge AI modes [such as ChatGPT] can be leveraged to rapidly synthesize time-sensitive data, reduce the burden of human operators, and improve situation awareness.” To alleviate the cognitive “burden” on human commanders, the Chats will be trained to make decisions based upon Anduril’s “library of data on [drone] threats and operations.” Thiel’s interlocking companies are capitalizing on operational data from the wars their machines are currently fighting in Ukraine and the Middle East so they can sell more war machines that can function without having to include humans in making decisions.
In 2019, Anduril was awarded a Pentagon contract to take over the development of Project Maven, the infamous image parsing project originally run by Google, intended then and now to automate combat. In 2018, thousands of Google employees went public with ethical concerns about supporting a murderous weapon such as Project Maven, and Google dropped out of the project temporarily. (Since then, Google has abandoned its previous public pretense of not participating, full blast, in military and intelligence operations.) Anduril’s acquisition of the ever-growing Project Maven enterprise positioned the company and its partners to play an increasingly dominant role in the development of the global battle management system into which the ethically and technologically compromised Project Maven is incorporated.
The newest iteration of the joint Anduril-Palantir operated “Maven Smart System” is hypothetically engineered to store real-time combat data for training generative artificial intelligence machines capable of removing humans from combat decision matrices, which has been an ethical red line drawn by civil society foes of killer robots, as if the Pentagon cared. The Department of Defense pays lip service to placing ethical constraints on the use of AI in warfare, such as keeping humans in the increasingly automated kill chain loops that mesh Maven’s Palantir-assisted databases and Anduril’s armory of autonomous weapons. But the practice of war simply cannot overlap with the perspectival universe of moralities or ethics.
In 2023, Air Force Secretary Frank Kendall told a panel of AI weapons firms, “If the human being is in the loop, you will lose. You can have human supervision, you can watch over what the AI is doing. If you try to intervene, you are going to lose.”
And in February, a “senior Defense official” told Defense One, “We’re not going to be investing in ‘artificial intelligence’ because I don’t know what that means. We’re going to invest in autonomous killer robots.”
How Palantir Makes Money
Palantir’s SEC filings reveal that more than half of its annual revenue comes from governments, but because the firm does a lot of classified military and intelligence work, it does not disclose many of its business practices. As of late March, federal procurement databases record Palantir as holding master contracts with the federal government valued at no less than $1.8 billion, based on hundreds of military and intelligence agency contracts. These open-ended, difficult-to-audit master contracts are called “Indefinite Delivery/Indefinite Quantity” contracts and can be shared by multiple firms: government agencies order individual products and services from various vendors under master contracts that span years.
Palantir’s Gotham, Foundry, Apollo, and Artificial Intelligence Platform products are not primarily designed to collect information or to directly engage in combat (even though they are being sold as proficient in that domain). The programs analytically parse large databases of information already collected by governments and weapons corporations, such as Anduril. And the programs are prone to generating false positives and other types of errors, Palantir admits.
Humanitarian groups have credibly accused Palantir of abetting human rights violations on the US border. And, in 2019, Vice republished Palantir’s secret manual for police intelligence gathering, revealing the breathtaking scope of the information vacuumed up by Palantir’s domestic spying networks, mostly created to supposedly predict future criminal activities in socioeconomically marginalized populations.
In 2019, Facebook, Microsoft, Palantir, Mastercard, and Apple conspired to track as many as 1.1 billion people without government identification who are or may become climate refugees. That global digital surveillance operation is particularly aimed at surveilling indigenous populations persecuted by governments, such as the Rohingya in Myanmar. Whether or not Palantir’s products work as ethically as it claims, the public tends to believe that Thiel’s data-crunching programs are all-knowing.
Palantir is not unaware of the public’s distaste for its operations. In 2012, the stealth-oriented corporation established a Council of Advisors on Privacy and Civil Liberties. The current council is a study in privacy washing. Executive director Bryan Cunningham is an attorney for Palantir and a former CIA officer who staffed the National Security Council under Condoleezza Rice, who is also a Palantir advisor. Member Stephanie Pell was an author of the post-9/11 PATRIOT Act, which authorized unprecedented government surveillance of citizens. Tim Sparapani is a former director of public policy at Facebook and an officer of the Application Developers Alliance, a trade association. Other council members have business ties to law firms, Silicon Valley lobby groups, and cybersecurity consulting services. The “council” advises the Danish National Police about using Palantir products, and it consults with law enforcement agencies on the deployment of Palantir’s Automated License Plate Reader.
In a similar vein of bringing in government celebrities to provide a cloak of social respectability, in 2022, Palantir created a Federal Advisory Board composed of former military officials, including Admiral William McRaven, General Carter Ham, General Gustave Perna, Vice Admiral Peter Neffenger, and a panel of former federal officials, including Trump’s notoriously inept Coronavirus advisor, Deborah Birx. The “board” is tasked with advising Palantir on how to get more federal contracts.
It is clear that Thiel, assisted by Karp and Cohen, has institutionalized his command of Palantir. The shareholding structure of Palantir requires that the seven-member board include a majority of “independent” members, which the owners Thiel, Karp, and Cohen are not. “Independent” board member Alexander Moore is a former director of operations for Palantir who owns significant Palantir stock, according to SEC insider trading filings. Alexandra Schiff is a former Wall Street Journal reporter who trades millions of dollars in Palantir stock, as does Lauren Friedman Stat, who was appointed to the board in 2012 after leaving Accenture, which has multiple Defense Department contracts. Eric Woersching is a Silicon Valley venture capitalist with substantial Palantir holdings.
While these board members may technically qualify as “independent,” they are not likely to thwart Thiel and Karp if they wish to keep cashing millions of dollars of Palantir stock in the company’s self-dealing program of enrichment that is not accessible to ordinary shareholders, who are also denied dividends.
Palantir did not respond to Military AI Watch’s requests for comment.
Is Anduril a Small Business?
According to Defense Department procurement databases, Anduril Industries participates in eleven indefinite contracts with the federal government valued at about $4 billion. Five of the contracts, valued at about $2.3 billion, are listed as benefiting from special considerations awarded to small businesses.
Affirmative action programs created to benefit “disadvantaged” small businesses include “sole sourcing” and “set-asides,” i.e., a certain number of contracts reserved for companies owned by “socially and economically disadvantaged individuals,” a category that includes women and disabled veterans. Small business-certified firms do not have to compete for federal contracts. According to Small Business Administration guidelines, companies selling “Military and Aerospace Equipment and Military Weapons” qualify as “small” if their annual revenue is less than $47 million.
Anduril is estimated to have earned about $1 billion in border control and defense department contracts in 2024.
* In November 2019, Anduril was awarded a $12 million Small Business Innovation Research contract for the Advanced Battle Management System Sensing Network to produce “a prototype to process vast quantities of data from thousands of sources.” The contract was sole sourced, i.e., written specifically for Anduril, with no other bidders allowed.
* In December 2023, under the small business program, Anduril received a sole-sourced $31 million delivery order for the Wide Area Infrared System for Persistent Surveillance SkyFence.
* In October 2024, the Pentagon awarded a $400 million small business contract to Anduril and Invariant Corporation for developing a counter-unmanned aircraft system.
* In February 2025, Anduril was awarded a $99 million sole-sourced contract under the small business program for weapons systems using its Lattice software.
When queried by Military AI Watch, the Department of Defense declined to explain why it treats Anduril as a small business, i.e., as a beneficiary of affirmative action normally reserved for “minority”-owned concerns. Anduril did not respond to press queries.
Contracts issued under top-secret classification guidelines do not necessarily appear in publicly available datasets. A sample of publicly disclosed Pentagon contracts with Anduril reveals that it is not a small, disadvantaged business.
* September 2020: $950 million Anduril joint venture with Amazon Web Services, General Atomics Aeronautical Systems, and an array of smaller firms, such as EdgyBees, a $19 million military-oriented startup in Palo Alto. The contract finances “leveraging modern software and algorithm development to enable Joint All Domain Command and Control … as a unified force across all domains (air, land, sea, space, cyber and electromagnetic spectrum) in an open architecture family of systems.”
* January 2022: $967 million to Anduril for counter-unmanned systems deployed by the US Special Operations Command.
* July 2023: $900 million consortium with about 50 large and small firms “for yielding cost-effective warfighting capabilities.”
* April 2024: $249 million to an Anduril-led consortium with AeroVironment Inc. and Teledyne FLIR Detection Inc. for an autonomous war drone called Organic Precision Fires-Light System.
* June 2024: $982 million “to support current and future unmanned surface vehicle family of systems with 49 industry partners,” including Anduril.
* September 2024: $25 million to Anduril for “latticed mesh network and integration to additional space surveillance network sites.”
* September 2024: $9 million to develop the Altius loitering drone.
* December 2024: $100 million to Anduril under an Other Transaction Agreement (i.e., expedited and exempt from normal procurement oversight practices) to “scale its edge data integration services in the National Capital Region.”
* March 2025: US Navy awards Anduril a ten-year, $642.2 million contract to build a counter-drone system.
Anduril also has multimillion-dollar military contracts with foreign nations, including the United Arab Emirates, Australia, and India.
Pentagon Cluster Bucks
Military AI firms, generative AI companies, and the large Cloud providers are making clearly non-competitive contracting alliances as they jockey for market power at the Department of Defense. For example, in November 2024, Palantir and Anthropic announced a partnership with Microsoft’s competitor, Amazon Web Services, to deploy artificial intelligence products for national security missions. A month later, Anduril and Anthropic competitor, OpenAI, announced a joint national security mission.
Anthropic and OpenAI are the targets of federal antitrust investigations due to heavy investments in each company by Microsoft, Amazon, and Alphabet-Google. (And both AI Chat companies had, until recently, prohibited the use of their products in warfare.)
When Anduril announced its AI weapons bidding consortium with Palantir in December, it revealed that Palantir’s Artificial Intelligence Platform powers the Maven Smart System that Anduril has been developing since 2019. The evolving Maven Smart System is designed to be incorporated into the JADC2-type battle management systems developed under master contracts with potentially hundreds of subawards to Microsoft, Oracle, Amazon Web Services, Palantir, and Anduril.
In February, Defense Scoop reported that Microsoft intends to transfer its $21 billion contract with the Army for developing virtual reality battle goggles to Anduril. The Thiel-funded firm will develop the headset weapon for use with Microsoft’s Azure warfighting cloud, and apparently not for compatibility with competitive military cloud capabilities variously offered by Google, Amazon Web Services, and Oracle. (If your head is spinning with possible antitrust connections, dear Reader, please consider that a lot of these contracting activities are not public, and the corporate inter-connections in the black budget are probably more complicated and inscrutable.)
The next head of the Joint Chiefs of Staff is slated to be a former Air Force general and CIA associate director, Dan Caine, a venture capitalist specializing in military investments. As of February, Caine is a partner at Shield Capital, which invests in AI weapons systems and is managed by Raj Shah, who formerly operated the Defense Innovation Unit (DIU). The Pentagon operates the DIU to catalyze investments in Silicon Valley AI weapons startups with federal funds.
Thiel Capital’s Michael Kratsios, a former Department of Defense executive who is the managing director of Thiel-funded Scale AI—a major AI weapons systems corporation that is loudly calling for smashing China—is nominated to head the White House Office of Science and Technology Policy.
While exiting the White House, Biden’s national security advisor, Jake Sullivan, firmly endorsed the creation of AI warfare bidding consortia. Sadly, it was up to MAGA conspiracy theorist Laura Loomer to express a relative voice of reason. On Steve Bannon’s War Room podcast, Loomer observed, “The guys that are going in to advise on tech are openly tweeting about their investment portfolios in Palantir while they’re appointing all of their Palantir associates to serve in the Trump administration. These are inappropriate conflicts that we would be criticizing the Democrats for.”
Not that Loomer made the critique when she had the chance to act on it. Biden’s Director of National Intelligence, Avril Haines, was a highly paid Palantir consultant before taking over as chief of US espionage operations, just as the Pentagon amped up its contracting with Palantir, which was hired to provide the genocidal Israeli military with the AI software it uses to target Gazans who fall afoul of the algorithms, and the children and medical workers who rush to their aid.
Capital Matters
At the turn of the twentieth century, Americans raised regulatory barriers to unbridled greed, which required political will for enforcement.
Elite-class political support for the increasingly combinatory practices of Palantir, Anduril, OpenAI, Microsoft, Oracle, SpaceX, and Amazon Web Services will allow for the same type of regulatory failure that birthed the predatory monopolies of Standard Oil, Bell Telephone, Microsoft, and Amazon.com, as detailed by Lina Khan.
Public support, however, is not a given, especially as the AI weapons sausage-making is revealed. And for regulators looking to construct a prophylactic antitrust case: Consider that Palantir and Anduril are largely funded by the same venture capitalists; those prominent investors are also company managers; the intertwining firms sell similar military software products, often sharing applications and contracts. Importantly, both companies can and do operate at losses to maximize market share and thwart possible competitors. Combining their corporate resources can reduce research and operational costs, unfairly increase bidding power, reduce regulatory oversight, and hasten procurement processes to the detriment of competitive firms and the taxpayers.
In sum, the Palantir-Anduril consortium’s increasing ability to dominate the AI weaponry markets can harm not only competitive startups and smaller firms but also, ironically, the equally monopolistic Primes, such as Lockheed Martin, Northrop Grumman, and Boeing, which are losing bids for AI weapons to companies wielding rings of power in the so-called Thielverse.
In his pathbreaking 1991 book, War in the Age of Intelligent Machines, philosopher Manuel DeLanda remarks, “Military institutions have mutated out of all proportion in this century … It is impossible to tell where the civilian sector begins and where the military ends.” According to DeLanda, AI-driven warfare is a “self-assembling” phenomenon—humans and machines tend to “self-organize” into a profit-seeking system of organized violence that follows nationalistic agendas. Military venture capital, too, is a complex adaptive system, collectively following xenophobic paths of least resistance. And, when these nation-anchored, financially over-concentrated systems collide globally, the result is threats, sanctions, bombs, chaos, and suffering beyond measure as so-called “democratic” and authoritarian governments, and their social media-addled populaces, collectively hallucinate toward a real nuclear war.
The problem is not the fact of AI technology. It is the persistence of a socioeconomic system that allows the likes of Thiel, Karp, Andreessen, Luckey, Musk, Zuckerberg, Bezos, the Kushners, Vance, and Trump to wield AI rings of power in service to weaponized capital and their necropolitical agenda.
In Middle-earth, the Ring was destroyed by popular mandate.
This is the first installment of a ten-part series exploring military AI. Next month, The Stargate Fiasco.