There is an Israeli army technique known as the “fog procedure”. First used during the second intifada, it is an unofficial rule that requires soldiers guarding military posts in conditions of low visibility to fire bursts of gunfire into the darkness, on the assumption that an unseen threat might be lurking.
It is violence licensed by blindness. Shoot into the darkness and call it deterrence. With the dawn of AI warfare, that same logic of chosen blindness has been refined, systematized and handed off to a machine.
Israel’s current war in Gaza has been described as the first major “AI war” – the first conflict in which AI systems have played a central role in producing Israel’s list of purported Hamas and Islamic Jihad militants to target. Systems that processed billions of data points to rank the likelihood that any given person in the territory was a combatant.
The darkness in the watchtower was a condition of the terrain. The darkness inside the algorithm is a condition of the design. In both cases, the blindness was chosen. It was chosen because blindness is useful: it creates deniability, it makes the violence feel inevitable, it moves the question of who decided from a person to a procedure. The fog did not lift. It was given a probability score and called intelligence.
It may have been chosen blindness that led, at the start of the US-Israeli war with Iran, to the strike on the Shajareh Tayyebeh elementary school in Minab, in southern Iran. At least 168 people were killed, most of them children, girls aged seven to 12.
The weapons were precise. Munitions specialists described the targeting as “highly accurate”, each building individually struck, nothing missed. The problem was not the execution. The problem was the intelligence. The school had been separated from an adjacent Revolutionary Guard base by a fence and repurposed for civilian use nearly a decade ago. Somewhere in the targeting cycle, it seems, that fact was never updated.
The precise role of AI in the strike on Minab has not been officially confirmed. What is known is that the targeting infrastructure in which these systems operate has no reliable mechanism for flagging when the underlying intelligence is a decade out of date.
Whether or not an algorithm selected this school, it was selected by a system that algorithmic targeting built. To strike 1,000 targets in the first 24 hours of the campaign in Iran, the US military relied on AI systems to generate, prioritize and rank the target list at a speed no human team could replicate.
Gaza was the laboratory. Minab is the market. The result is a world in which the most consequential targeting decisions in modern warfare are made by systems that cannot explain themselves, supplied by companies that answer to no one, in conflicts that generate no accountability and no reckoning. That is not a failure of the system. That is the system.
Who is to blame when AI kills?
We should resist the temptation to blame only the algorithm for the logic that turns children into acceptable error rates. In July 2014, four boys from the Bakr family – Ismail, Zakariya, Ahed and Mohammad, aged nine to 11 – were killed on a beach in Gaza. No AI was involved. The site had been pre-classified as a Hamas naval compound. The boys were flagged as suspicious because they ran, then walked – behavior that matched a targeting template for fighters trying not to draw attention. When the first missile hit, the surviving children fled. The drone followed them and fired again. An officer later testified that from a vertical aerial view it is very hard to identify children. The strike was logged as a targeting error.
A classified Israeli military database, reviewed by the Guardian, +972 Magazine and Local Call, indicated that of more than 53,000 deaths recorded in Gaza, named Hamas and Islamic Jihad fighters accounted for roughly 17%. That means the rest, 83%, were civilians. These are not the statistics of a war fought with precision; this is a war where imprecision is the point. (The IDF disputed figures presented in the Guardian article, though it did not identify which figures.)
So AI targeting systems did not invent this logic. They inherited it, encoded it across millions of data points, and automated it beyond any meaningful human check. When a school in Minab is classified in a database as a military compound, that is not a malfunction. It is the fog procedure, the same logic that chased four boys down a beach in Gaza – running exactly as designed, at a different scale, in a different country, with a different weapon. The darkness just has better hardware now.
Many of these AI systems inherently defy international humanitarian law, which does not merely demand correct outcomes from military operations; it requires a careful process before they are carried out. A commander must make every reasonable effort to verify that a target is a legitimate military objective. The law also requires that everything feasible be done to protect civilians from the effects of attack, not as an afterthought but as a parallel and equal obligation.
That obligation cannot be delegated to a system whose reasoning is opaque and whose outputs cannot be interrogated in real time. In Gaza, an algorithm processed data on every person in the strip – phone records, movement patterns, social connections, behavioral indicators – and produced a ranked list of names, each assigned a probability score indicating the likelihood they were a combatant. This is not the same as a human analyst identifying a known militant and programming a weapon to hit them. The AI was not confirming identities. It was inferring them, statistically, across an entire population, generating targets that no human had individually assessed before they appeared on the list.
Verification, in this system, meant a human operator reviewed each name for an average of about 20 seconds, long enough to confirm the target was male. Then they signed off. One system alone produced more than 37,000 targets in the first weeks of the war. Another was capable of generating 100 potential bombing sites per day. The humans in the loop were not exercising judgment. They were managing a queue.
In Iran, the picture is, for now, less fully documented. But the scale tells its own story. Two sources confirmed to NBC News that Palantir’s AI systems, which draw in part on large language model technology, were used to identify targets. (Palantir’s CEO, Alex Karp, said he “can’t go into specifics” when asked about this on CNBC, but said that Claude was still integrated into Palantir’s systems used in the Iran war.) Brad Cooper, head of US Central Command, has boasted that the military is using AI in Iran to “sift through vast amounts of data in seconds” in order to “make smarter decisions faster than the enemy can react”. Whether or not every strike was AI-assisted, the tempo of the campaign was only possible because targeting had been substantially automated.
When reported verification times for AI-assisted targets are measured in seconds, we are no longer talking about human judgment with algorithmic assistance. We are talking about rubber-stamping a machine’s output. And when that machine’s data is a decade out of date, the consequences are written in rows of small coffins.
The companies implicated in this are not obscure defense startups. Palantir, founded with early CIA funding and now one of the leading AI infrastructure suppliers to the US military, supplied systems used in the Iran campaign. Those systems draw in part on Anthropic’s Claude, a large language model whose parent company tried to resist Pentagon pressure to remove ethical constraints on its use for targeting. The Pentagon responded by threatening to cut ties and turning to OpenAI and others instead. The market for killing at scale does not lack for suppliers.
The episode is instructive: the one company that tried to draw a line was sidelined, and the killing continued without interruption. Google, despite significant internal employee protest, signed Project Nimbus, a cloud-computing and AI contract with the Israeli government and military worth more than $1bn.
Amazon is a co-signatory to Project Nimbus alongside Google. Microsoft had deep integration with Israeli military systems before partially withdrawing under pressure in 2024, at which point the data migrated to Amazon Web Services within days.
Anduril, founded by Palmer Luckey and staffed heavily with former US defense officials, builds autonomous weapons systems explicitly designed for lethal targeting. OpenAI, which until recently prohibited military use in its terms of service, quietly removed that restriction in early 2024 and has since pursued Pentagon contracts. These are among the most valuable companies in the world, with consumer products used by hundreds of millions of people, university research partnerships, and significant political influence in Washington, Brussels and beyond.
Of course, private companies have supplied militaries for centuries – with radios, trucks, satellite navigation, microwave technology and complex weapons systems. This is not new or inherently corrupt. The “dual-use” problem is as old as industrialization: almost any powerful technology can be put to military ends.
But AI targeting is not merely a component that militaries incorporate into their operations. It is the decision architecture itself – the thing that determines who gets killed and why. When a single system can generate tens of thousands of targets in the time it would have taken a human intelligence team to verify 10, the question is not whether private companies should supply militaries. It is whether any legal framework can survive contact with it.
In international law we talk about accountability frameworks: the chain of answerability that runs from a decision to use lethal force back to the person who authorized it. An accountability framework requires that someone be identifiable as the decision-maker, that their reasoning be reconstructable after the fact, and that the process obligations the law demands – proportionality assessment, verification, precaution – can be shown to have been followed.
AI targeting systematically destroys each of these conditions. Attribution dissolves across a chain of engineers, commanders, operators and corporate suppliers, each of whom can point to another. Reasoning disappears into a probability score that no lawyer can audit and no court can cross-examine. Process collapses into a 20-second approval of a machine suggestion. And the companies that built and sold the system sit entirely outside the legal framework, because international humanitarian law was designed for states and their agents, and Palantir is not a signatory to the Geneva conventions.
The accountability framework has not merely been strained or tested by AI warfare. It has been made structurally irrelevant.
Lifting the fog of war
We should stop calling these technology companies and start calling them what they are: defense contractors.
The largest AI firms are not neutral infrastructure providers who happened to find a military customer. They are being integrated into the targeting architecture of modern warfare. Their systems sit inside the kill chain, their engineers hold security clearances, their executives rotate through the same revolving door that has always connected Silicon Valley to the Pentagon.
These AI suppliers are at the cutting edge of the military-industrial complex, and should be regulated as such. A clear accountability chain applies to firms such as Raytheon and Lockheed Martin – entailing export controls, congressional oversight, liability frameworks and procurement conditions – while the weak rules that apply to the companies writing the algorithms that select military targets have never been applied, tested or enforced.
That is not an oversight. It is a choice, actively maintained by lobbying, by the deliberate blurring of “commercial” and “defense” products, and by a regulatory culture that still treats AI as a consumer technology that happened to find its way to the battlefield. Palantir spent close to $6m lobbying Washington in 2024, and in one quarter of 2023 outspent Northrop Grumman. It launched a dedicated foundation to shape the policy environment it operates in. The consortium of Palantir, Anduril, OpenAI, SpaceX and Scale AI was described by its own members as a mission to supply a new generation of defense contractors to the US government. The venture capital firms backing these companies, Andreessen Horowitz and Founders Fund, have cultivated influence through proximity to power: former senior officials on their advisory boards, partners rotating through government roles and direct access to the policymakers who determine how much the Pentagon spends, and on what.
The EU AI Act, the most ambitious attempt yet to regulate artificial intelligence, explicitly exempts military and national security applications, with the stated justification that international humanitarian law is the more appropriate framework. It is a remarkable act of circularity: the one body of law being systematically destroyed by these systems is designated as their regulator, while the regulators who might actually constrain them look away.
In the United States, the AI provisions of the 2025 National Defense Authorization Act do not regulate military AI. They direct agencies to adopt more of it. Pete Hegseth’s AI strategy, issued in January 2026, frames the question entirely as a race, directing the Pentagon to move at wartime speed, with AI as the main proving ground. The regulatory culture has not failed to catch up with the technology. It has decided, deliberately, not to try.
So far, the only serious government intervention in AI military capability we have seen came not from a state demanding restraint or accountability, but from the US demanding the systems be made more lethal. That is the horizon of ambition we have accepted.
Banning these systems outright is impossible when so many of the actors involved care little about international law. But pressure points remain, and they are real. Any future government in Washington that wants to use AI military capability without producing an unending series of Minabs will need a regulatory framework – not as a concession to critics but as a basic requirement for not becoming a rogue actor. The same is true in Europe, where Britain has committed over £1bn to a new AI-integrated targeting system connecting sensors and strike capabilities across all domains, where France’s leading AI company has partnered with a German defense startup to build autonomous weapons platforms, and where Germany is deploying AI-guided attack drones in Ukraine.
There is an opening to regulate these systems. The EU has the most obvious tools, not through the AI Act, which deliberately exempts military applications, but through export controls and procurement conditions on the dual-use systems that move between commercial and defense markets. International courts are beginning to open doors too: the ICJ advisory opinion on Palestinian rights has created a framework in which companies supplying systems used in unlawful strikes face potential liability exposure in jurisdictions that take international law seriously. And AI firms need governments, not just as customers but as the providers of the computing power, the energy and the physical infrastructure that frontier AI requires and that no company can sustain from commercial revenues alone. That dependency gives states that are willing to use it real leverage over companies that would prefer not to be regulated. The question is whether any government with the tools to act will decide, before the next Minab, that the cost of inaction has become too high.
What regulation should look like is relatively simple, even if it is hard to implement. AI systems used in targeting must be explainable – not via a probability score but through reasoning that a lawyer can audit. The cumulative civilian cost of AI-assisted campaigns must be assessed as a whole. And the liability that stops at the operator must extend up the supply chain to the companies that knowingly built and sold opaque systems for use in armed conflict. These are not novel demands. They are the minimum conditions for the laws of war to mean anything in the age of algorithmic targeting.
In the meantime, the fog procedure is operational and coming to define the future of war. But the soldiers who fired into the darkness were at least present in it. The companies that built what replaced them are doing it from Palo Alto, at no personal risk, with no legal exposure, and with every incentive to do it again.
-
Avner Gvaryahu is a DPhil researcher at the Blavatnik School of Government, University of Oxford. He is a former executive director of Breaking the Silence, an Israeli human rights group of former soldiers

