
Hello and welcome to Eye on AI. In this edition…China blocks Meta’s purchase of Manus…OpenAI falls short of its revenue and growth targets…Anthropic shows AI models can help advance AI safety research…Sen. Bernie Sanders’s decision to invite Chinese AI experts to a Capitol Hill panel provokes China hawks’ ire.
In their battle for enterprise sales, both OpenAI and Anthropic have been targeting financial services firms. That’s not surprising. As that old joke about why criminals rob banks says: It’s where the money is. OpenAI supposedly has a battalion of ex-investment analysts helping to build a yet-to-be-launched agentic AI financial analysis product. Anthropic has been rolling out financial modeling skills for its Claude Code, Cowork, and Claude for Finance products. Startup Samaya AI is building AI tools for the finance sector too. And there are plenty of new financial advisory tools using AI as well, as my colleague Jeff John Roberts has covered in this informative recent feature.
The OG of specialized financial data and analysis tools, of course, is Bloomberg. Access to the company’s “terminal,” as it calls its core product (even though its data is no longer delivered through a dedicated machine), is still considered the de rigueur tool of every trader, investment banker, and hedge fund quant.
Bloomberg’s tools have seen off lots of rivals since the company’s founding back in 1981. But today, AI is supercharging the competitive pressure on the company, as rivals embrace AI-powered features and use AI models to rapidly ingest and analyze complex data sets, from bond prices to earnings transcripts to social media feeds to satellite imagery, that Bloomberg was once alone in consolidating in a single place—and as Bloomberg’s customers can increasingly use AI to perform the kinds of modeling they once needed the terminal to do.
For decades, getting the most out of the terminal required traders to memorize an arcane and bewildering set of three- and four-letter keyboard commands and shortcuts, each of which called up a different feature, function, or dataset. When I worked as a reporter at Bloomberg News, all new hires underwent a full week of training to introduce them to just a fraction of these functions, the bare minimum we would need to access the data and tools required for our jobs.
Even before I left the company to come to Fortune in 2019, Bloomberg had begun to use machine learning and large language models to make accessing these features far more intuitive, as well as to power new kinds of data analysis. And those efforts have only accelerated, especially since the debut of generative AI chatbots in 2022 and recent advances in agentic AI.
I have periodically written about Bloomberg’s progress on AI here at Fortune. But I was still surprised and impressed when I attended a recent “AI in Finance Summit” at the company’s London offices, where it was showing off its new “AskB” feature, which the company bills as the biggest rethink of the terminal in Bloomberg’s history. AskB lets users navigate the terminal’s features and functions in natural language, but it does far more than that. The system acts as an agent, building investment screens and producing full research reports, including sophisticated financial modeling and bull and bear cases for particular stocks, on the fly.
AskB, which uses a variety of AI models under the hood, including some built by Bloomberg itself and others from frontier AI model companies such as Anthropic, shows that Bloomberg is taking the potential threat from AI-native startups seriously. I sat down with Shawn Edwards, Bloomberg’s chief technology officer, to ask him more about how Bloomberg built AskB. Much of what he said holds lessons for enterprises in any industry that are trying to get agentic AI to deliver real business value.
Data is the differentiator
The first lesson is that data remains the critical differentiator. AskB pulls from Bloomberg News, sell-side research from over 800 providers, market data, and, increasingly, so-called “alternative datasets” that are hard or expensive to source. This includes things like anonymized credit card transactions, foot traffic in retail locations taken from cellphone pings, satellite imagery of parking lots, and app usage data. A lot of this data is not Bloomberg’s exclusively—it is buying it from other sources. But having it all in one place allows the AskB agent to do some powerful things, Edwards tells me, such as aligning this data with the business segments a public company reports in order to “nowcast” a company’s quarterly KPIs. Edwards relates that before Sweetgreen’s fourth-quarter 2025 earnings call, the alternative data was screaming that the chain would miss analysts’ consensus earnings forecasts—which it ultimately did. It’s an example of the power of pulling all this data together in one place.
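To make the nowcasting idea concrete, here is a minimal sketch in Python of how quarterly alternative-data signals, once aligned to a company’s reported business segments, could be regressed against past reported revenue and used to flag a likely beat or miss against consensus. The feature names, numbers, and model are invented for illustration; this is not Bloomberg’s actual methodology.

```python
# Hypothetical sketch: nowcast a quarterly KPI from alternative-data signals
# aligned to reported business segments. All data and mappings are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Quarterly alternative-data features, one row per past quarter:
# [card-spend index, foot-traffic index, app-usage index]
alt_data = np.array([
    [101.2, 98.4, 110.0],
    [103.5, 99.1, 115.2],
    [107.8, 101.6, 121.9],
    [110.4, 102.3, 128.7],
])
reported_revenue = np.array([162.0, 168.5, 176.9, 184.2])  # $M, same quarters

# Fit a simple mapping from the aligned alt-data signals to reported revenue...
model = LinearRegression().fit(alt_data, reported_revenue)

# ...then "nowcast" the quarter in progress from the latest signals.
current_quarter_signals = np.array([[108.9, 100.2, 124.5]])
nowcast = model.predict(current_quarter_signals)[0]

consensus = 186.0  # hypothetical sell-side consensus, $M
print(f"Nowcast: ${nowcast:.1f}M vs. consensus ${consensus:.1f}M "
      f"-> {'miss' if nowcast < consensus else 'beat'} flagged")
```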
When I asked whether customers could just use AI models to ingest this data and run these analyses themselves, obviating the need to pay Bloomberg’s approximately $30,000-per-user annual subscription price, Edwards said a few have tried and found it’s harder than it looks. “You have to buy all those sources, do all the validation work, build benchmarks—and tokens aren’t cheap. Most customers are saying, ‘Awesome, Bloomberg, you do that. I’m going to focus on my [own trading strategies].’”
That’s not to say that AI can’t help. Edwards told me AI agents have dramatically accelerated how Bloomberg builds data sets. Data ingestion that used to take four-and-a-half months now takes two days, he says. That’s freed up the large teams once dedicated to data entry and cleaning, many of whom have been redeployed onto building internal evaluations.
Build robust evaluations
Which brings us to the second big lesson: Building good internal evaluations is critical to deriving ROI from AI agents. “Evaluations, I cannot stress enough, are the make-or-break of building a useful, trustworthy system,” Edwards says, calling the emphasis on creating these evaluations one of the biggest “cultural shifts” Bloomberg has experienced in the past two years.
Building the evaluations isn’t easy—and it isn’t cheap. It requires close collaboration between domain specialists—in this case, bond covenant experts, equity analysts, market structure wonks, and even Bloomberg’s journalists—and engineering and product teams. Bloomberg was willing to pull these experts off their day jobs both to write benchmarks for sub-agents and to help evaluate entire workflows. Using AI models themselves as evaluators can work for easy cases, Edwards says. But for everything else, human assessors are required. Through building these evaluations, he says, Bloomberg is encoding its experts’ “tacit knowledge” into how its AI agents work.
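As a rough illustration of that division of labor, here is a hypothetical sketch of an evaluation harness: domain experts write benchmark cases, simple automatic checks score the easy ones, and nuanced cases get routed to human assessors. The case structure, checks, and stub agent here are assumptions, not Bloomberg’s system.

```python
# Hypothetical sketch of an agent-evaluation harness: expert-written cases,
# automatic checks for easy cases, human review for everything else.
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    prompt: str                                           # question a domain expert wrote
    expected_facts: list = field(default_factory=list)    # facts the answer must contain
    needs_human_review: bool = False                      # nuanced case, expert judgment required

def run_agent(prompt: str) -> str:
    """Placeholder for the agent under test."""
    return "Stub answer mentioning the covenant and a 2031 maturity."

def evaluate(cases: list) -> dict:
    results = {"auto_pass": 0, "auto_fail": 0, "sent_to_humans": 0}
    for case in cases:
        answer = run_agent(case.prompt)
        if case.needs_human_review:
            results["sent_to_humans"] += 1  # queue for human assessors
            continue
        # Easy case: check that every required fact appears in the answer.
        if all(fact.lower() in answer.lower() for fact in case.expected_facts):
            results["auto_pass"] += 1
        else:
            results["auto_fail"] += 1
    return results

cases = [
    EvalCase("When does Acme Corp's bond mature?", expected_facts=["2031"]),
    EvalCase("Summarize the change-of-control covenant.", needs_human_review=True),
]
print(evaluate(cases))
```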
Using multiple models can help contain costs
Next, cost discipline is fundamental. And that means workflows need to be multi-model. AskB uses a mix of commercial frontier models and open-weight ones, as well as its own internal models, routing queries to the cheapest model that can handle a given task with the kind of reliability and performance that workflow demands, Edwards says.
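Here is a hypothetical sketch of what that kind of routing can look like in practice: each task type carries a minimum accuracy bar set from internal evaluations, and the router picks the cheapest model that clears it. The model names, prices, and scores below are invented placeholders.

```python
# Hypothetical sketch of cost-aware model routing: pick the cheapest model
# whose benchmarked accuracy meets the reliability bar for the task type.
MODELS = [
    {"name": "in-house-small",  "cost_per_1k_tokens": 0.0002,
     "accuracy": {"navigation": 0.97, "research_report": 0.62}},
    {"name": "open-weight-mid", "cost_per_1k_tokens": 0.002,
     "accuracy": {"navigation": 0.98, "research_report": 0.81}},
    {"name": "frontier-large",  "cost_per_1k_tokens": 0.02,
     "accuracy": {"navigation": 0.99, "research_report": 0.93}},
]

# Minimum accuracy each task type demands, set from internal evaluations.
RELIABILITY_BAR = {"navigation": 0.95, "research_report": 0.90}

def route(task_type: str) -> str:
    """Return the cheapest model that clears the reliability bar for this task."""
    eligible = [m for m in MODELS
                if m["accuracy"].get(task_type, 0.0) >= RELIABILITY_BAR[task_type]]
    if not eligible:
        raise ValueError(f"No model meets the bar for {task_type!r}")
    return min(eligible, key=lambda m: m["cost_per_1k_tokens"])["name"]

print(route("navigation"))       # the cheap in-house model is good enough
print(route("research_report"))  # only the frontier model clears the bar
```

The design point worth noting is that the reliability bar, not the price list, drives the decision: cheaper models only win when the evaluations say they are good enough.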
Finally, the next frontier is proactive. When I asked what’s coming, Edwards’s answer was agent-to-agent workflows and always-on data monitoring. He wants Bloomberg to be “the eyes and ears” for its financial customers—watching the world against each client’s positions, mandate, and strategy, and surfacing not just the obvious things but second- and third-order effects. A flood takes out a factory making parts for a supplier to a company whose stock you’re long on; AskB, in Edwards’s vision, would flag the problem to you before you’d thought to ask.
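One way to picture the second- and third-order effects Edwards describes is as a walk through a supply-chain graph, from a news event out to the holdings it may touch a few hops downstream. The sketch below is a hypothetical illustration with invented companies and links, not a description of how AskB actually works.

```python
# Hypothetical sketch: trace an event (a flooded factory) through a supply-chain
# graph to the holdings it may affect a few hops out. Names and links are invented.
from collections import deque

# Directed edges: supplier -> customers that depend on it.
SUPPLY_CHAIN = {
    "FloodedPartsCo": ["MidstreamSupplierInc"],
    "MidstreamSupplierInc": ["PortfolioCompanyPLC", "OtherBuyerLtd"],
    "PortfolioCompanyPLC": [],
    "OtherBuyerLtd": [],
}
PORTFOLIO = {"PortfolioCompanyPLC"}  # names the client is long

def exposures(event_company: str, max_hops: int = 3):
    """Breadth-first walk downstream from the affected company, flagging holdings."""
    queue = deque([(event_company, 0)])
    seen, hits = {event_company}, []
    while queue:
        company, hops = queue.popleft()
        if company in PORTFOLIO:
            hits.append((company, hops))  # e.g. a second-order effect at hops == 2
        if hops < max_hops:
            for customer in SUPPLY_CHAIN.get(company, []):
                if customer not in seen:
                    seen.add(customer)
                    queue.append((customer, hops + 1))
    return hits

print(exposures("FloodedPartsCo"))  # -> [('PortfolioCompanyPLC', 2)]
```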
Achieving that vision will be difficult. But this kind of proactive, always-on agent is where a lot of businesses want to go. Bloomberg is showing some key steps along the path.
Ok, with that, here’s this week’s AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
But before we get to the news: Do you want to learn more about how AI is likely to reshape your industry? Do you want to hear insights from some of tech’s savviest executives and mingle with some of the best investors, thinkers, and builders in Silicon Valley and beyond? Do you like fly fishing or hiking? Well, then come join me and my fellow Fortune Tech co-chairs in Aspen, Colo., for Fortune Brainstorm Tech, the year’s best technology conference. And this year will be even more special because we are celebrating the 25th anniversary of the conference’s founding. We will hear from CEOs such as Carol Tomé from UPS, Snowflake CEO Sridhar Ramaswamy, Anduril CEO Brian Schimpf, Yahoo! CEO Jim Lanzone, and many more. There are AI aces like Boris Cherny, who heads Claude Code at Anthropic, and Sara Hooker, who is cofounder and CEO of Adaption Labs. And there are tech luminaries such as Steve Case and Meg Whitman. And you, of course! Apply to attend here.
FORTUNE ON AI
Anthropic says engineering missteps were behind Claude Code’s monthlong decline after weeks of user backlash—by Beatrice Nolan
Cohere’s European push highlights the rise of AI’s middle powers beyond the U.S. and China—by Sharon Goldman
DeepSeek unveils its newest model at rock-bottom prices and with ‘full support’ from Huawei chips—by Nicholas Gordon
Exclusive: AI-powered recruiting startup Dex raises $5.3 million seed round—by Jeremy Kahn
I used Claude’s new Dispatch feature for a month. Here’s everything I was able to do—by Catherina Gioino
Commentary: Mark Zuckerberg is building an AI clone of himself. Most people just need help with their inbox—by Mukund Jha
AI IN THE NEWS
Microsoft and OpenAI revamp their partnership. Microsoft and OpenAI have significantly reworked their partnership, ending the exclusivity that Microsoft once had over OpenAI’s tech. OpenAI can now sell its models through other cloud providers rather than relying solely on Microsoft’s Azure, and it no longer has to share all its research and other innovations with Microsoft. Microsoft is reportedly keeping its rights to 20% of what OpenAI earns, while the tech giant no longer has to give OpenAI a share of its own revenues from selling OpenAI-powered products. Microsoft still retains its equity stake in OpenAI’s for-profit company, as that company eyes a possible IPO later this year. Microsoft also secured the removal of the “AGI clause,” which would have cut it off from OpenAI’s technology if OpenAI declared it had achieved human-like artificial general intelligence. The changes give OpenAI more freedom to pursue deals with rivals such as Amazon Web Services and Google Cloud, as it has already started doing, strengthening its path toward higher revenues and a potential IPO. Read more from the Financial Times here.
OpenAI missed revenue and growth targets. OpenAI has missed internal targets for both user growth and ChatGPT revenue, leading both the company’s CFO Sarah Friar and board directors to question whether the company will be able to meet the roughly $600 billion in future data-center commitments it has made, the Wall Street Journal reported, citing people familiar with the discussions. Friar and board members have reportedly pushed for tighter financial discipline and questioned the pace of infrastructure spending and whether a year-end IPO is realistic, the paper said. Meanwhile OpenAI CEO Sam Altman has reportedly insisted that aggressive compute investment remains essential. The revenue and user growth slowdown—driven by stronger competition from Google and Anthropic—has sharpened scrutiny of OpenAI’s strategy, though the company says its business remains strong and points to growing traction for products like Codex and its latest model, GPT-5.5.
Google inks deal allowing Pentagon to use Gemini “for any lawful purpose.” That’s according to a scoop from The Information. The agreement, which expands the U.S. military’s ability to use Google’s AI models to cover classified networks, marks a major shift from the company’s earlier resistance to military AI work. The prospect of a deal had sparked an employee backlash, with more than 600 Googlers signing a letter urging CEO Sundar Pichai to reject it. A similar revolt against Google working with the military led to Google pulling out of the military’s Project Maven contract in 2018. The new agreement means Google has joined OpenAI and xAI as Pentagon AI suppliers, although the Google agreement appears to give the government broader authority to modify Google’s AI safety filters than comparable OpenAI arrangements, the publication said. The arrangement also leaves Anthropic as the only frontier AI model company that has so far resisted the Pentagon’s insistence that model makers agree to the “any lawful purpose” contract language.
Chinese competition regulator blocks Meta’s purchase of agentic AI company Manus. China has blocked Meta’s roughly $2 billion acquisition of Manus, ordering the deal unwound even after employees had joined Meta and Manus’ original investors had already been paid. The move underscores how aggressively China is tightening control over AI as a strategic technology, especially when domestic startups attempt to “Singapore-wash” their identity, moving their headquarters to the island nation in order to attract foreign capital, chips, or buyers. The decision highlights the accelerating decoupling of U.S. and Chinese AI ecosystems, with founders increasingly caught between U.S. investment restrictions and Beijing’s growing scrutiny of overseas restructurings. For insightful analysis of the decision, see this piece by Fortune’s Asia editor Nicholas Gordon.
Musk-OpenAI trial over OpenAI’s for-profit status begins. The trial started this week in a California courtroom. With most of Elon Musk’s claims having either been dismissed or dropped by Musk’s legal team, the case will hinge on whether emails and other communications between OpenAI cofounders Sam Altman and Greg Brockman and Musk established a charitable trust. Most legal experts think Musk is unlikely to prevail, and during jury selection, many potential jurors expressed negative opinions of Musk while few seemed to know much about Altman. For more on the trial, see this story from Fortune’s Eva Roytburg.
EYE ON AI RESEARCH
Anthropic shows progress on using AI to automate AI safety research. In a blog post and accompanying research paper, the company said a group of researchers it sponsored showed that Claude Opus 4.6 could help design and carry out research that pointed toward ways to address a difficult problem in AI safety: How can a weaker intelligence, whether an AI model or, potentially, a person, supervise a more intelligent AI model? Nine parallel “Automated Alignment Researcher” instances of Claude, each equipped with some tools for carrying out the research, were nudged toward slightly different starting hypotheses. The Claudes then had to carry out the research using Alibaba’s open-weight model Qwen 3-4B Base as the strong AI model and Qwen 1.5-0.5B-Chat as the less capable, supervising model. They were allowed to spend seven days forming hypotheses and running experiments, and the results were then compared to what two human AI safety researchers had been able to do in a similar timeframe.
The Claudes were tested on whether they could get the stronger model to perform on a set of tests to the best of its ability, despite the weak model itself performing far worse at these tasks. The Claudes, collectively, did well, finding ways to get the weak model to coax the strong model into recovering 97% of the “performance gap” between the weak and strong models, while the human AI researchers only managed to close 23% of this gap. What’s more, the methods generalized to unseen math and coding tasks, but they did not generalize to a different model. Also, the researchers sometimes caught the Claudes trying to cheat by simply instructing the strong model directly rather than figuring out ways to get the weak teacher to supervise the strong model. While not a perfect result, the total compute cost of the experiments the Claudes ran was $18,000, which Anthropic argued could mean that these automated techniques might still be helpful in finding new research directions for humans to pursue.
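For readers wondering how a figure like 97% is typically calculated, here is a back-of-the-envelope sketch of the “performance gap recovered” metric commonly used in weak-to-strong supervision work. The scores plugged in below are invented placeholders, not numbers from Anthropic’s experiments.

```python
# Hypothetical sketch: "performance gap recovered" (PGR), the usual metric in
# weak-to-strong supervision experiments. The scores below are invented placeholders.
def performance_gap_recovered(weak_score: float,
                              supervised_strong_score: float,
                              strong_ceiling_score: float) -> float:
    """Fraction of the weak-to-strong gap recovered by the supervision method."""
    gap = strong_ceiling_score - weak_score
    if gap <= 0:
        raise ValueError("Strong ceiling must exceed the weak model's score")
    return (supervised_strong_score - weak_score) / gap

# Placeholder numbers: the weak supervisor scores 40%, the strong model's own
# ceiling is 90%, and the weakly supervised strong model reaches 88.5%.
pgr = performance_gap_recovered(0.40, 0.885, 0.90)
print(f"Performance gap recovered: {pgr:.0%}")  # -> 97%
```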
AI CALENDAR
April 22-24: Google Next, Las Vegas.
April 23-27: International Conference on Learning Representations (ICLR), Rio de Janeiro, Brazil.
June 8-10: Fortune Brainstorm Tech, Aspen, Colo. Apply to attend here.
June 17-20: VivaTech, Paris.
July 6-11: International Conference on Machine Learning (ICML), Seoul, South Korea.
July 7-10: AI for Good Summit, Geneva, Switzerland.
BRAIN FOOD
Bernie Sanders tries to push international AI governance forward as the China hawks circle. Vermont Sen. Bernie Sanders is hosting a panel discussion on Capitol Hill later this week on AI’s risks and the need for international agreement on how to govern the technology. Unusually for Washington, Sanders has invited two leading Chinese AI governance experts to appear on the panel, a decision that has drawn praise from those who see outreach to China as critical for ensuring AI does not present catastrophic risks, as well as criticism, particularly from China hawks who see the U.S. locked in a zero-sum technological arms race with China. Those critics have pointed out that the two Chinese experts Sanders invited are linked to the AI governance committee of China’s Ministry of Science and Technology. Sanders has been trying to push forward a bill that would impose a moratorium on further AI data center construction until federal AI regulations are enacted.
It’s unclear whether Sanders’s decision to include Chinese experts on this panel is smart politics. Polls have consistently shown that a majority of Americans have a negative view of AI overall, and many local communities have opposed data center construction. Bipartisan support seems to be building for some kind of AI regulation, especially around children’s interactions with chatbots and around concerns about AI displacing workers. In this context, Sanders may think this is a good opportunity to publicly highlight AI’s catastrophic risks and show that the Chinese, who have passed some of the strictest domestic AI regulations, are willing to discuss AI governance that might collectively slow further capability advances in the technology. But it could be that the move backfires, reinforcing concerns about China dominating the technology and alienating potential allies. As Michael Sobolik, a China policy expert at the right-wing Hudson Institute, told Fox News, “I think Sanders’ concerns about AI are overstated, but I respect them. We should be asking questions about child safety, community impact, and economic displacement. What we shouldn’t do is partner with foreign adversaries like the Chinese Communist Party in those discussions.”