Writing

Notes on everything and anything.

AI pricing — struggling for PMF or reaching maturity?

The number of AI pricing vendors that raised a Series A in 2023–2024 on the promise of "dynamic, intelligent pricing" is striking. What's equally striking is how few of them have broken through to become category-defining products two years later.

Two competing explanations are in play.

The first is that these tools are still searching for product-market fit. The core promise — real-time price optimisation across large SKU catalogs, grounded in demand signals and competitive data — is compelling on paper. But the implementation reality is brutal. Pricing decisions touch finance, sales, legal, and sometimes the board. The sales cycle is long, the integration is heavy, and the change management required to shift a company from cost-plus to AI-driven pricing is often underestimated by a factor of three.

The second explanation is more interesting: that these tools have found PMF, but in a narrower segment than originally pitched. The wins are concentrated. Large e-commerce players, airlines, and a handful of SaaS companies with clean data infrastructure are getting genuine value. Everyone else is still in pilot purgatory.

My read: it's maturity within a narrow band, not category-wide PMF. The vendors that survive the next 18 months will be those who stopped trying to sell "AI pricing" to CFOs and started selling specific, measurable outcomes to specific buyers — revenue per seat, yield on tail SKUs, quote-to-close rate. The category is real. The packaging is still wrong for most buyers.

growth

AI for sales productivity — is it real or for the reel?

Every sales kickoff deck in 2025 had a slide about AI. Call summarisation, next-best-action prompts, automated outreach sequences, pipeline health scoring. The tools are real, the demos are polished, and the case studies on vendor websites are compelling.

But when I ask revenue leaders what they've actually seen move — not in the deck, in the CRM — the answers get quieter.

The honest picture is more nuanced than either the hype or the backlash.

What's genuinely working: Admin compression. Reps spending 30–40% of their time on non-selling activities — logging calls, updating fields, writing follow-up emails — can claw back a meaningful portion of that. That's real. A rep who books six more selling hours a week is a different rep.

What's not working yet: The harder part of selling. Knowing which accounts to prioritise, how to reframe value for a CFO who's just cut budget, how to handle the objection that's never in the playbook. AI systems trained on historical win/loss data encode past patterns. They are less useful at the inflection points where markets or buyer psychology shift — exactly when a rep most needs help.

The reel problem: Vendors optimise their case studies for the best-case account. The rep who was already top quartile, with clean data, in a product category where AI prompts translate well. The median rep, with fragmented notes and a complex sale, looks different.

My view: AI for sales productivity is real, but the productivity gain is concentrated in the bottom half of the rep population on administrative tasks. The promise of AI-driven revenue lift at the top of the funnel still runs ahead of the evidence. Worth investing in — with clear-eyed expectations about where the value actually lands.

growth

2026 — the year of churn?

A pattern is showing up across the software portfolios I've been close to this year: renewal conversations that would have been routine in 2023 are now genuinely contested.

The mechanism is straightforward. Companies that signed 3-year enterprise contracts during the 2021–2022 expansion are now hitting renewal. CFOs who weren't in the room for the original purchase are now in the room. Procurement teams that didn't exist two years ago are now running RFPs. And AI alternatives — or the credible threat of them — are giving buyers negotiating leverage they didn't have before.

Three dynamics are converging:

Budget rationalisation: The median enterprise software stack has more redundancy than it did three years ago. Two tools that do roughly the same thing, bought by different teams at different times. Renewal is the forcing function to consolidate.

AI substitution risk: It doesn't matter whether the AI alternative is actually better yet. If a CFO believes it might be good enough in 12 months, they have a reason to push for a shorter contract term or a lower price today. Perceived substitutability is doing the negotiating work even before any actual substitution happens.

Outcome scrutiny: "We're getting value from this" was sufficient in the growth era. It isn't now. Revenue leaders are being asked to show what changed because of the tool — in pipeline, in close rate, in revenue per rep. Vendors who sold on potential are being asked to produce proof.

The companies best positioned for 2026 renewals are those who spent the last year building outcome measurement into their customer success motion. The ones who didn't are about to find out what procurement teams do when they smell blood.

growth

On Meetings

The most productive meetings I've had this year had no slides and no agenda. Just two people with enough context to skip the setup and go straight to the question that actually matters.

I've started treating a well-formed question as the real deliverable. If I can't write it down in one sentence before the meeting, the meeting isn't ready.

reflections

AI confidence theater

I've noticed a pattern in how people talk about their AI tools in professional settings. There's a particular register — confident, slightly evangelical, with just enough technical vocabulary to signal fluency — that has become almost obligatory.

Nobody says "I've been experimenting with this and I'm not sure if it's actually better." They say "we've integrated AI into our workflow" or "we're leveraging AI to drive efficiency." The framing is always one of mastery, never of confusion.

This is AI confidence theater. And I think it's doing real damage.

The tools are genuinely powerful and genuinely confusing at the same time. They hallucinate. They're inconsistent across sessions. They're excellent at some things and quietly terrible at others in ways that aren't always obvious until something goes wrong. The honest user experience is one of managed uncertainty — learning when to trust the output and when to check it, calibrating instincts that don't transfer cleanly from earlier software.

But that honest experience is hard to perform in public. Saying "I don't fully understand when to trust this" reads as a skills gap. So people perform confidence instead, and the actual learning — the slow, empirical work of figuring out where these tools are reliable — happens privately, if at all.

The cost is that organisations adopt AI with a false picture of how it's actually being used. Leaders think adoption is deeper than it is. Sceptics think confidence reflects genuine mastery. And the people who are genuinely experimenting and learning quietly don't get the credit or the audience.

The most useful thing a senior person can do right now is be honest about what they don't know. There's more permission in the room for uncertainty than most people realise.

reflections

The AI Adoption Plateau

Every AI adoption curve I've seen in industrial businesses follows the same shape: excitement, then a six-month plateau where nothing works as expected, then a quiet inflection where one team figures out a narrow use case that actually fits, and then slow expansion from there.

The plateau is where most companies give up. The companies that don't are the ones that narrow the problem rather than broaden it. The winning first use case is always smaller and more specific than the initial ambition.

tech & ai

On Patience

Something I keep coming back to: the decisions that have compounded the most for me were ones where I was patient with the right thing and impatient with the wrong thing.

Patient with relationships. Impatient with ambiguity in deals. Patient with the time it takes to understand an industry. Impatient with processes that aren't moving toward a real answer.

Getting those two mixed up is expensive.

reflections, investing