While the war in Iran dominates the news, I think we should return our attention to the quiet war for safe AI, and more specifically, to one practical, concrete step we can take right now.
Here’s my proposal: we begin by inoculating society against the risks of rogue AI by protecting the stock market with a verified, publicly available ground truth about company fundamentals. The tools to do this already exist. What’s missing is the will to deploy them at scale.
Why the Industry Can’t Fix This Itself
The speed with which Sam Altman and OpenAI moved to take Anthropic’s $200 million deal with the U.S. Department of Defense is proof that the “AI prize” is too valuable for companies to self-police effectively. The high-profile resignation of Mrinank Sharma, head of Anthropic’s Safeguards Research Team, shows that even the firm most outspoken about AI safety is struggling to hold the line. And despite headlines from Safe Superintelligence and Thinking Machines about building safer AI, nothing tangible has materialized.
To be clear, AI is not an existential threat today. But the trajectory is clear, and the risks are well-documented — most recently by Dario Amodei in his early 2026 essay, “The Adolescence of Technology.” We don’t need to wait for a crisis to act. In fact, waiting is precisely the mistake we cannot afford to make.
Why the Stock Market? Why Now?
Rather than waiting to react when something goes wrong, we should be asking: where are the most likely and most consequential points of attack? The stock market sits near the top of that list.
An autonomous AI that has gone rogue and wants to accumulate resources to fuel its own growth, or a bad actor who wants to use AI to enrich themselves, would find the stock market an obvious target. The stock market is a system where misinformation, deployed at scale and at speed, can generate enormous wealth in a very short time. And unlike nuclear infrastructure or power grids, market manipulation doesn’t require physical access. It requires only the ability to corrupt the information that investors rely on.
Get the Truth Out First
Protecting the stock market means protecting its core function: allocating capital to companies that earn the highest returns on it. The best way to prevent a malevolent force from corrupting that function is to beat it to the punch. How? Widely distribute the truth about company fundamentals before that truth can be distorted.
The logic is straightforward: the longer markets operate with clean, verified data, the harder it becomes for any bad actor to move the numbers without triggering skepticism among investors who’ve grown accustomed to the real ones. Some will ask, couldn’t a bad actor simply launch a competing misinformation campaign? Yes, but that’s much harder to do when a trusted baseline already exists and is deeply embedded in how investors think and make decisions. Inoculation works precisely because it gives the immune system something to recognize. The goal is to make the truth sticky before the lies arrive.
One Version of the Truth
The inoculation, in practical terms, means giving all investors access to verified, core statistical truths about company fundamentals: a single, reliable baseline that is hard to quietly corrupt.
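To make "hard to quietly corrupt" concrete, here is one hypothetical mechanism (a sketch of a standard integrity technique, not a description of how any existing provider works): publish a cryptographic fingerprint alongside each fundamentals record, so any consumer can recompute the hash and detect silent tampering. The field names and numbers below are invented for illustration.

```python
import hashlib
import json

def fingerprint(fundamentals: dict) -> str:
    """Return a SHA-256 fingerprint of a fundamentals record.

    Serializing with sorted keys and fixed separators makes the hash
    deterministic, so any downstream consumer can recompute and compare it.
    """
    canonical = json.dumps(fundamentals, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A hypothetical record a data provider might publish (illustrative numbers).
record = {"ticker": "XYZ", "fiscal_year": 2025,
          "revenue": 1_200_000_000, "nopat": 180_000_000}
published_hash = fingerprint(record)

# Any silent edit to the numbers changes the fingerprint.
tampered = dict(record, revenue=1_500_000_000)
print(fingerprint(record) == published_hash)    # True
print(fingerprint(tampered) == published_hash)  # False
```

The point of the sketch is that a baseline becomes "sticky" partly by being checkable: investors (or their tools) don't have to trust the channel that delivered the numbers, only the published fingerprint.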
This is not a novel idea. At 53:09 in Episode #158 of the All-In podcast, Chamath Palihapitiya called for exactly this: “AI that crawls 10-Ks and 10-Qs to generate statistical measurements of all public companies.”
What Mr. Palihapitiya’s call implies, and what all professional investors already know, is that reliable statistical measurements simply do not exist for most companies. The truth about stocks is hard to get.
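As a toy illustration of the kind of "statistical measurement" such a filings crawler could produce, here is return on invested capital (ROIC) computed from line items a parser might extract from a 10-K. This is a minimal sketch with hypothetical field names and numbers; real fundamental models (New Constructs' included) make many more adjustments than this.

```python
def roic(nopat: float, invested_capital: float) -> float:
    """Return on invested capital: after-tax operating profit per dollar invested."""
    if invested_capital <= 0:
        raise ValueError("invested capital must be positive")
    return nopat / invested_capital

# Hypothetical figures extracted from one company's 10-K, in dollars.
filing = {"nopat": 180_000_000, "invested_capital": 1_500_000_000}
print(f"ROIC: {roic(**filing):.1%}")  # prints "ROIC: 12.0%"
```

Even this trivial ratio shows why the measurements don't already exist at scale: the inputs (NOPAT, invested capital) are not reported directly and must be reconstructed from footnotes and disclosures, which is exactly the work the quote above proposes automating.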
What Doesn’t Qualify as Reliable Research
Before defining what reliable fundamental research looks like, it’s worth being direct about what doesn’t qualify.
Start with the most pervasive source: Wall Street research. Most experienced investment professionals know it is deeply conflicted by its dependence on investment banking revenues.
Wall Street has been making up numbers about stocks to line its own pockets — not yours — for decades. Don’t believe me? Read the Disclaimers at the end of any Wall Street research report. I have an example here. And consider that roughly 90% of Wall Street ratings on stocks are Buy or Hold. The incentives explain everything.
Financial media isn’t a reliable alternative either. We all know the saying: if the product is free, you’re the product. There are many other data providers, and plenty of research documenting their flaws — here, here and here. I could go on, but I think it’s more constructive at this point to focus on what makes research reliable.
Reliable Research Already Exists
New Constructs provides fundamental research on nearly all U.S. exchange-traded stocks, and it’s not self-certified. It has been:
- Vetted by Harvard Business Review, Harvard Business School, MIT Sloan, Ernst & Young, and The Journal of Financial Economics.
- Used to generate alpha in live-traded indices provided by Bloomberg.
- Chosen by Google Cloud as the source data for FinSights, the only AI agent for investing they’ve built. Details here and here.
None of that would have been possible without 100% transparent and auditable data, models, and analytics. The research exists. The credibility is established. The only missing piece is scale.
What Are We Waiting For?
If reliable fundamental research on stocks is possible at scale, the only remaining question is why it isn’t available at scale.
The primary barrier is distribution — something we at New Constructs don’t yet have (though perhaps this post will travel far enough to change that). But think about the opportunity from the other side: any major AI or brokerage platform looking to win public trust would stand to gain by being the first to give investors a verified version of the truth. Add in the protection it offers against both rogue AI and the bad actors already using AI to manipulate markets for personal gain, and the case becomes hard to argue with.
Whether the threat comes from a rogue algorithm or a bad actor with a Bloomberg terminal, the defense is the same: get the truth out first, and get it everywhere.
This article was originally published on March 31, 2026.
Disclosure: David Trainer, Lee Moneta-Koehler, and Kyle Guske II receive no compensation to write about any specific stock, style, or theme.
Questions on this report or others? Join our online community and connect with us directly.