If you strip away the flags and the press briefings, genocide looks a lot like product-market fit. Not morally—but operationally. The incentives are aligned. The demand is high. The friction is low. The exits, when they happen, are slow and symbolic.
Let’s be blunt: mass displacement, militarized occupation, and state-led ethnic cleansing don’t just happen in failed states. They’re systematized through Western-backed infrastructure. Logistics platforms. Telecoms. AI. Insurance. Payment rails. Resource extraction.
Every one of these sectors has a monetization surface in conflict zones—and that monetization isn’t new. It’s legacy. From the railroads of the American frontier to King Leopold’s Congo to IBM supplying punch-card systems for the Holocaust, the playbook hasn’t changed. Only the UX has. The latest iteration? A tightly integrated stack of Western companies enabling Israel’s assault on Gaza—and doing it while pitching quarterly growth to their boards.
The logic isn’t subtle. Surveillance vendors tout “border-grade AI.” Bulldozers that level homes are supplied under “civil infrastructure” contracts. Cloud platforms store biometric data under the pretense of “security services.” Arms manufacturers call their products “defensive.” Francesca Albanese’s UN report names more than 60 Western firms operating in the occupied Palestinian territories. Their roles span software, logistics, finance, and physical infrastructure. But the issue isn’t just who got named. It’s how normalized the flywheel has become.
Companies now routinely onboard for war logistics the same way they’d onboard for retail logistics. Procurement, contract, compliance, delivery, renew. If it sounds like AWS, that’s because it is—this is war-as-a-service.
The real innovation here is abstraction. These aren’t frontline actors. They’re back-office operators: the payment processor handling settler real estate, the telecom firm routing signals, the CRM system helping an occupying force track its “user base.” And once abstracted, participation no longer feels like complicity. It feels like ops.
The people who leave aren’t the ones with leverage. It’s always the low-equity, values-aligned participants. A junior engineer who refuses to work on a surveillance module. A small activist pension fund dumping holdings. A few minor shareholders filing resolutions that don’t pass. What never leaves? The capital. The institutional funding. The defense stack. The pipeline partners.
There’s a reason for that: the risk is manageable and the rewards are stable. Military budgets don’t shrink in war. Sanctions don’t touch U.S.-backed allies. And media blowback rarely hits upstream vendors. It’s safer to sell database infrastructure for apartheid enforcement than for TikTok.
This is what happens when ethical exposure gets priced in as a reputational risk—but not a contractual one. The contracts are sticky. The margins are healthy. The consequences are someone else’s problem. That’s not product-market misalignment. That’s the product working exactly as designed—for the buyer, not the victim.
This isn’t unique to Israel or Gaza. The same logic powers Xinjiang’s surveillance economy. It shows up in Indian-administered Kashmir, where telecom shutdowns are normalized and military contracts are routed through civil infrastructure budgets. It appears in U.S. border operations, where Palantir, Anduril, and dozens of vendors support ICE deportation infrastructure with cloud-backed AI.
In every case, the product pitch is the same: scale governance through efficiency. That’s code for: “make it faster, cheaper, and more permanent.”
Once these systems are in place, they metastasize. The predictive policing model used in Ferguson, Missouri, finds its cousin in Hebron. The emotion-recognition AI tested in Chinese schools shows up in West Bank checkpoints. Surveillance tech behaves like a SaaS product: built once, then sold repeatedly with only slight localization.
And if you’re building product without specifying limits, you’re building for every regime that can pay.
Most founders think ethical exposure is for boards and PR teams to worry about. They’re wrong. The platforms that power repression today are mostly founder-built: fast-scaling startups that landed defense or government contracts early, took follow-on capital, and now can’t unwind the dependency without tanking their runway. What starts as “mission alignment” ends up as margin lock-in.
And it’s not just defense tech. Look at fintechs underwriting settler mortgage loans. HR platforms used to vet West Bank laborers. Insurance carriers backing infrastructure in occupied zones. Even e-commerce APIs used in militarized supply chains. Founders love to talk about user personas. Here’s a test: if your largest customer profile is a state entity accused of war crimes, that’s not a niche. That’s a warning.
Growth-stage VCs should pay attention, too. The exit risk for these companies isn’t just public scrutiny—it’s regime volatility. What happens when the occupying power loses legitimacy or collapses? Who absorbs the data, the liability, the audit trail? If your platform only works when someone’s rights are being violated, it’s not a platform. It’s an accomplice.
Norway’s sovereign wealth fund has long served as a barometer for ethical capital posture. It divests from companies that breach environmental, human rights, or corruption thresholds, regardless of performance. Its logic is simple: long-term fiduciary health depends on risk-adjusted governance, not quarterly earnings.
It’s not perfect, but it creates pressure. And it sends a signal that passive capital does not have to be neutral. Contrast that with most U.S. asset managers—especially BlackRock, Vanguard, and State Street. They treat weapons contracts and human rights allegations as secondary filters. If the ETF performs, the holdings stay. There’s no escape clause for genocide.
Meanwhile, in China, state-aligned capital is using this moment to backfill Western retrenchment in MENA and Southeast Asia. That includes infrastructure, telecoms, and yes—surveillance tech. But here’s the kicker: they’ve learned from the West’s opacity and branded their models with nationalist logic. It’s not more ethical. It’s just more strategic.
Here’s the real tension: the same logic that makes your platform scale also makes it complicit. If you don’t define where the product stops, the market will decide for you. And it won’t care about your values deck.
This is the real growth trap: you build a system that works so well at scale that you can’t say no to who’s using it. That’s not just moral decay. That’s governance failure at the product layer.
What founders should build instead: systems with friction, auditability, and exit clauses. If your product could be used in a genocide, it needs a shut-off switch. If your infra is helping bulldoze homes, stop pretending it’s neutral.
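What does that look like in practice? Here’s a minimal sketch in Python, with every name invented for illustration (PolicyGate, Customer, allow; none of this is a real library). The point it makes: the refusal and the audit trail live at the fulfillment layer, next to revenue, not in a values deck.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: a gate every fulfillment request must pass
# through before the platform does any work for a customer.

@dataclass
class Customer:
    name: str
    jurisdiction: str        # where the deployment actually runs
    flagged: bool = False    # e.g. named in a sanctions list or UN report

@dataclass
class PolicyGate:
    blocked_jurisdictions: set[str]
    kill_switch: bool = False  # global shut-off, owned by governance, not sales
    audit_log: list[dict] = field(default_factory=list)  # append-only trail

    def allow(self, customer: Customer, use_case: str) -> bool:
        decision = not (
            self.kill_switch
            or customer.flagged
            or customer.jurisdiction in self.blocked_jurisdictions
        )
        # Record who asked, for what, and what was decided: auditability.
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "customer": customer.name,
            "use_case": use_case,
            "allowed": decision,
        })
        return decision

gate = PolicyGate(blocked_jurisdictions={"occupied-territory-x"})
buyer = Customer(name="ministry-of-interior", jurisdiction="occupied-territory-x")

print(gate.allow(buyer, "biometric-checkpoint-sync"))  # False: blocked at the product layer
print(gate.audit_log[-1])                              # the refusal is on the record
```

The thirty lines aren’t the point; where they sit is. The kill switch and the deny list are inputs to fulfillment, wired upstream of revenue, so saying no doesn’t depend on anyone’s courage in a board meeting.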
Because here’s what the market won’t tell you:
You don’t get to disown the use case once you’ve onboarded it.