I round up the most relevant AI-in-finance news - the deals being done, who's rolling out what, and what's actually working on the front lines.

Apollo capped investor redemptions from its flagship private credit fund this week…

…$1.6 billion in withdrawal requests. Only half honoured. The SaaSpocalypse is no longer a stock market story. It's a liquidity story.

Anthropic published one of the largest surveys of AI users ever conducted, interviewing 80,000 Claude users across 159 countries. The finding that caught my eye: hallucinations, not job loss, are what people fear most. BlackRock's Larry Fink warned in his annual letter that AI risks widening the wealth divide. HSBC appointed its first ever chief AI officer. And more.

But first, my take on a concept that I think will define the next 12 months of AI in finance. It's called a context graph. And most people haven't heard of it yet.

In This Week’s Issue:

From The Trenches:
  • The context graph era

News Digest:
  • Apollo caps redemptions as private credit exodus deepens

  • Hallucinations haunt AI users more than job losses

Other Interesting Things I’ve Read or Seen this Week:
  • Khosla proposes AI tax overhaul, Fink warns AI widens wealth divide, HSBC's first chief AI officer, Sanders and AOC target data centres, Dallas Fed on who AI actually displaces

From The Trenches

The Context Graph Era

Two calls this week, different teams, surfaced the same issue. Both had connected their systems to Claude via MCP. Both found that the results were lacklustre.

Not because the model was bad. Because every question required so much manual context that they might as well have done it themselves.

But wait. Didn't I write just a few weeks ago that APIs and MCPs were the future? They are. But there are important nuances as to why some connectors work well and some don't. It all boils down to whether the underlying systems actually understand your data. In both these cases, the answer was no. No related context. No awareness of how one number or tag or entry related to another. And hence the substandard results.

New Terminology is Coming

You're going to start hearing "context graph" a lot over the next 12 months. See also "knowledge graph." Also "ontology." Three terms, same core idea.

Think about how deal information actually works. You have a CIM. That CIM describes a company. That company has financials across multiple periods. Those financials sit in Excel workbooks. You've written IC memos about the opportunity. You've looked at comps in the same sector. You've evaluated similar businesses before.

All of that information is connected. But right now, it lives in completely separate systems. Your CRM knows about the deal. Box has the files. Email has the context from management calls. Your associates' heads contain the relationships between all of it. None of those systems talk to each other. And none of them understand how a revenue figure in one workbook relates to a projection in another.

A context graph is what connects all of that. It's a structured layer that maps every data point to every other related data point. Every CIM linked to the company it describes. Every set of financials linked to the deal. Every IC memo connected to the sector, the comps, the previous opportunities you've evaluated. It doesn't exist today because the software we use was never built to think this way. CRMs were built to track relationships. File storage was built to store files. Email was built to send messages. Each system was designed for one job, with no awareness of the others.
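To make the idea concrete, here's a toy sketch of what that structured layer might look like: typed nodes (companies, documents, workbooks) joined by explicit, labelled relationships. All of the names and the schema here are illustrative, not any real product's design.

```python
# Toy context graph: typed nodes joined by labelled edges, so an agent
# can ask "what's connected to Company X?" instead of scanning files.
# All node names and relation labels are illustrative.
from collections import defaultdict

class ContextGraph:
    def __init__(self):
        self.nodes = {}                 # id -> {"type": ..., "name": ...}
        self.edges = defaultdict(list)  # id -> [(relation, other_id)]

    def add_node(self, node_id, node_type, name):
        self.nodes[node_id] = {"type": node_type, "name": name}

    def link(self, src, relation, dst):
        # Store both directions so either side of the edge can be traversed.
        self.edges[src].append((relation, dst))
        self.edges[dst].append((f"inverse:{relation}", src))

    def related(self, node_id, node_type=None):
        # Everything connected to node_id, optionally filtered by type.
        out = [dst for _, dst in self.edges[node_id]]
        if node_type:
            out = [n for n in out if self.nodes[n]["type"] == node_type]
        return out

g = ContextGraph()
g.add_node("co1", "company", "Company X")
g.add_node("cim1", "document", "Company X CIM")
g.add_node("fin1", "workbook", "FY24 financials")
g.add_node("memo1", "document", "IC memo")
g.link("cim1", "describes", "co1")
g.link("fin1", "belongs_to", "co1")
g.link("memo1", "about", "co1")

# An agent asking about Company X gets only the linked artefacts:
print(g.related("co1", node_type="document"))  # ['cim1', 'memo1']
```

The point of the sketch is the edges, not the storage: once the CIM, the financials, and the memo are explicitly linked to the company, an agent can follow those links instead of guessing which files in a folder happen to be relevant.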

Beyond the Wow Factor

Anthropic published the results of an 80,000-person survey of Claude users on March 21. The biggest concern wasn't job loss. It was hallucinations. 27% of respondents said AI mistakes were their primary worry. Not being replaced. Being misled.

That makes sense to me. The initial excitement of dropping a document into Claude and getting a polished summary or output will start to fade and the gaps will become more visible. Chat gives you a confident answer with no way to trace where it came from. Every conversation starts from zero. No (true) memory. No compounding knowledge. No institutional context.

One fund I spoke with this week described their data situation in two words: it's nowhere. Four different systems. Their CRM, SharePoint, email, internal repositories. Every time they want to answer a basic question about a portfolio company, someone picks up the phone.

And the honest question I keep hearing: has anything actually changed? Firms get Claude. The PowerPoints look great. Individual productivity ticks up. But at the organisational level? Same number of deals. Same pace. Same workflows. The tool arrived. The transformation didn't.

At some point, we're going to move beyond the wow factor of chat and single-tenant conversations. At some point, this needs to ascend to the firm level. That's where context graphs come in.

What a Context Graph Actually Does

Put simply, a context graph builds a layer of understanding between your raw files and the AI. Instead of an agent reading entire documents and hoping it finds what's relevant, it queries an index. It pulls in only the specific data it needs, already tagged with what it is, where it came from, and how it relates to everything else.

In practice, you ask: "What was the average fleet age for Company X last year?" Without a context graph, the agent has to trawl through workbooks, guess which sheet has the right table, and hope it interprets the layout correctly. With one, it knows immediately where that data lives. It pulls one table. Done.
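In code terms, the fleet-age question is the difference between a scan and a keyed lookup. A hypothetical sketch, where the index structure and every field name are invented for illustration:

```python
# Without a graph: scan every file and hope to spot the right table.
# With one: a pre-built index maps (company, metric, period) straight
# to the single table that holds the answer. All names are illustrative.

index = {
    ("company_x", "fleet_age", "FY24"): {
        "file": "companyx_ops_model.xlsx",
        "sheet": "Fleet",
        "table": "fleet_summary",
    },
}

def locate(company, metric, period):
    """Resolve a question to the one table that answers it, if indexed."""
    return index.get((company, metric, period))

hit = locate("company_x", "fleet_age", "FY24")
print(hit["sheet"])  # Fleet
```

The agent's job shrinks from "read everything and interpret it" to "fetch one pre-tagged table", which is why the same model produces much better answers on top of an index than on top of raw files.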

That might be fine for a one-off answer. But what happens when you need to do this across a whole host of files? Suddenly the context window, the amount of data the AI can reason through at once, fills up. And without structure, more data just means more noise.

Why More Tokens Don't Fix This

The instinct is to think bigger context windows will solve the problem. They won't. For two reasons.

First, files are much bigger than people realise. A single Excel workbook with a few tabs can easily consume over a million tokens. That's the entire context window of the largest models available today. One file. For one company. Start asking a portfolio-level question across five deals and the agent runs out of room before it's even started.
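The arithmetic behind that claim is easy to sanity-check. Using the common rule of thumb of roughly four characters per token, and some illustrative (not measured) workbook dimensions:

```python
# Back-of-envelope token estimate for a model-heavy workbook.
# Assumptions (illustrative, not measured): 5 tabs of 2,000 rows x 40
# columns, ~12 characters per serialised cell (value plus separators),
# and the common rule of thumb of ~4 characters per token.
tabs, rows, cols = 5, 2000, 40
chars_per_cell = 12
chars_per_token = 4

cells = tabs * rows * cols
tokens = cells * chars_per_cell // chars_per_token
print(f"{cells:,} cells ≈ {tokens:,} tokens")  # 400,000 cells ≈ 1,200,000 tokens
```

Even with conservative assumptions, one serialised workbook lands in the seven-figure token range, which is why stuffing raw files into the prompt stops scaling almost immediately.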

Second, even if you could fit everything in, the agent still doesn't understand what any of it means relative to everything else. You're just stacking more disconnected fragments on top of each other. More hay, same needle.


Popular general-purpose enterprise AI search tools today hit the same wall. Fine for simple lookups. Finding a Slack message, digging up an email. But the moment a question requires combining data across documents, relating one source to another, doing actual analytical work, they fall apart. The agent brings in whatever it finds with no way to manage what's relevant. No relationships. No schema. No structure.

What Does This All Mean?

The challenge right now is that everyone's hooked up to their different systems, but those systems just aren't set up to work in an agent-native way. The data architecture was designed for humans clicking through screens, not for agents reasoning across structured information.

Here's the analogy I keep coming back to. The early web translated everything into print metaphors. Newspapers became websites that looked like newspapers. Catalogues became websites that looked like catalogues. It took years before anyone built something that could only exist on the internet.

We're doing the same thing with AI. We've become so accustomed to having our information scattered across different places, because that's just how old software worked. Your CRM. Your Box. Your email. But if you think about it, this should all be in one central place. Organised, connected, and linked. That kind of coordination currently sits in people's heads. It's what makes your best associate valuable. They remember the deal from two years ago that's relevant to the one you're looking at now. They know which version of the model has the right assumptions.

That institutional knowledge can now be codified. That's what a context graph does. And that's why it matters more than which model you're running.

I don't think we're heading towards a world where everything is just an agent platform with no human interface. Software will be designed primarily for agent interactions, yes. But there will still be persistent dashboards and UIs for the humans who are directing, reviewing, and approving. Think of it as an AI execution layer doing the work, with a human monitoring console on top. Exception handling. Supervision. Override when needed.

The firms that build this layer now will compound every piece of knowledge they capture. Every deal, every set of financials, every IC note becomes part of a growing institutional memory that any agent can draw on.

The firms that wait will keep starting from zero. Oh, and yes, this is basically what we're building at DealSage. So if you're interested or starting to run into these issues, let us know.

News Digest

Apollo Caps Redemptions as Private Credit Exodus Deepens

Apollo capped investor withdrawals from its flagship private credit vehicle this week, becoming the latest major fund manager to gate redemptions as AI-driven disruption fears cascade through private credit portfolios.

Investors sought to pull roughly $1.6 billion from Apollo Debt Solutions, its $25 billion business development company. That's 11.2% of net assets, more than double the quarterly redemption cap. Apollo honoured just under half.

The details:

  • Apollo Debt Solutions: $1.6bn in Q1 redemption requests against a 5% quarterly cap. ~$730mn returned

  • Ares Strategic Income Fund ($10.7bn): 11.6% redemption requests the same week, also gated at 5%

  • Blackstone Private Credit Fund: first monthly loss in 3+ years in February. Stock hit 52-week low

  • Total private credit market now exceeds $2 trillion. US default rate at 5.8% per Fitch, highest in years

  • SaaS/software exposure across private credit portfolios estimated at 20-30%

Why it matters: Multiple major platforms are gating simultaneously. What started as a public equity selloff has migrated into private credit, and now it's becoming a liquidity event.

My take: Macquarie's CEO put it plainly this week: redemption requests are being driven by the "SaaSpocalypse." She was quick to add that there isn't a credit problem. I'm not sure that distinction holds for long. When default rates are at multi-year highs and investors are queuing to exit, the line between sentiment and fundamentals gets very thin. Apollo positioned itself early, slashing software exposure. The question is whether the firms that didn't are sitting on markdowns they haven't yet taken.

Hallucinations Haunt AI Users More Than Job Losses

Anthropic published one of the largest surveys of AI users ever conducted on March 21, interviewing over 80,000 Claude users across 159 countries in 70 languages. The biggest finding: 27% said AI mistakes were their primary concern. Only 22% cited job displacement.

The conversations were conducted using Claude itself as the interviewer, allowing Anthropic to run qualitative research at a scale that would be impossible with human researchers. Deep Ganguli, who leads Anthropic's societal impacts team, said the goal was to "collect this rich human experience using Claude, so it could really inform our research agenda."

The regional differences were stark. South America, Africa, and south-east Asia viewed AI with significantly more optimism than Europe, the US, or east Asia. The divide was clean: wealthier countries were more negative, lower-income countries more hopeful. The explanation may be simple: if AI hasn't visibly entered your daily work yet, displacement feels abstract.

The details:

  • 80,000+ respondents across 159 countries, 70 languages

  • 27% most concerned about hallucinations. 22% about job displacement. 16% about impact on critical thinking

  • 32% said AI had made them more productive at work

  • 19% said AI had fallen short of expectations

  • Existential risk from AI ranked bottom of the list of concerns

  • Regional split: lower-income countries more optimistic, wealthier countries more sceptical

Why it matters: The hallucination finding validates exactly what I wrote about above. People aren't worried about being replaced. They're worried about being confidently misled. That's a trust problem, not a capability problem.

My take: Two things jumped out beyond the headline number. First, existential risk from AI ranked dead last among concerns. So that's just me then? Second, 16% cited impact on critical thinking, which is higher than I expected. Cognitive atrophy is a real risk. As someone who uses AI heavily, it's something I try to be very conscious of. The more you outsource your thinking, the harder it becomes to spot when the machine gets it wrong. And as the hallucination data shows, it gets it wrong more than people are comfortable with.

Other Interesting Things I’ve Read or Seen This Week:

Anthropic considers IPO as soon as October (Mar 27) - Could raise more than $60 billion. Goldman, JPMorgan, and Morgan Stanley jockeying for lead roles. Racing OpenAI to list first. When both frontier labs are IPO-ready in the same year, the AI market is no longer early stage. It's institutional.

Blackstone and Apollo brush off private credit fears (Mar 26) - Blackstone co-CIO Kenneth Caplan at the Melbourne symposium: "There is a big disconnect between the headlines and what we see in the portfolio." Very low defaults, he says. Meanwhile, Blackstone's flagship credit fund just posted its first monthly loss in three years. One of those things is a headline.

OpenAI investor Vinod Khosla proposes eliminating income tax for Americans earning under $100K (Mar 28) - Fund it by taxing capital gains at the same rate as income. His logic: AI is accelerating the shift of wealth from labour to capital. Job displacement will be "the single biggest issue" in the 2028 election. When one of OpenAI's earliest investors is proposing wealth redistribution, the Overton window has moved.

BlackRock's Fink warns AI boom could widen wealth divide (Mar 23) - Annual letter from the CEO of the world's largest asset manager ($14 trillion AUM): "AI threatens to repeat that pattern at an even larger scale." Both Fink and Khosla landing on the same theme in the same week is not a coincidence. The political economy of AI is becoming impossible to ignore.

HSBC appoints first chief AI officer (Mar 23) - David Rice, previously COO of Corporate and Institutional Banking, starts April 1. Follows the 20,000 potential job cuts reported two weeks ago. HSBC isn't just talking about AI transformation. They're building an org chart around it.

Sanders and AOC unveil data centre moratorium bill (Mar 25) - Legislation to pause all new AI data centre construction nationwide until federal safeguards are in place. With hyperscalers planning $660 billion in capex this year and communities pushing back on power consumption, this was inevitable. Whether it passes is one question. Whether it introduces planning risk into every data centre deal is another.

Acquisition Intelligence is a weekly newsletter on AI in M&A for finance professionals, private equity investors, investment bankers, corp dev teams, and deal-makers.

For questions, feedback, or to share what you're seeing in the market, reply to this email.

P.S. I'm Harry, co-founder of DealSage. We're building an AI-native deal intelligence platform to help professionals turn their institutional knowledge into better decisions. If you're curious what we're up to, check out dealsage.io or just reply here.
