I’ve sat through a lot of deal reviews over the years. Most of them follow the same pattern. Someone walks through a deal, gives their perspective on where things stand, and outlines what they think the next step is. On the surface, that sounds fine, but underneath, you’re usually getting a filtered version of reality.
Not intentionally. It’s just human nature. We all tend to tell the story we want to believe about a deal, especially when we’ve invested time in it. What’s usually missing is a clean, objective look at what’s actually happening. That’s what pushed me to start experimenting with AI in my own deal reviews.
I didn’t go into this thinking I was going to reinvent anything. I just wanted a way to step outside the narrative and look at deals more clearly without spending a ton of extra time doing it. What I found pretty quickly is that AI is useful in a very specific way. It doesn’t replace judgment, and it doesn’t close deals, but it’s very good at forcing structure and asking better questions than most people naturally do in the moment. That alone changes the quality of the conversation.
The way I use it is pretty simple. I take whatever I have on a deal: notes, emails, sometimes call transcripts, sometimes just a written summary. Nothing polished, just the raw material. From there, I’m not asking AI to tell me whether the deal is good or bad. I’m using it to break the deal down in a way that’s harder to do in your head when you’re moving fast.
The first thing I care about is who’s actually involved. In a lot of deals, especially enterprise, there’s a difference between who you’re talking to and who’s actually making the decision. That gap is where things fall apart. AI is surprisingly good at surfacing that. It forces you to look at whether you really have coverage or if you’re relying too heavily on one person.
From there, I’m looking at risk. Not in a vague way, but in a very direct way. What’s missing? What assumptions are being made? Where could this stall? When you run a deal through that lens, patterns start to show up. Weak access to decision makers. A business case that isn’t fully formed. Stakeholders who aren’t aligned. Things that are easy to gloss over when you’re just trying to keep momentum.
The part that’s been the most useful for me is using it to pressure test thinking. It’s easy to get locked into your own view of a deal, especially if it feels like it’s moving in the right direction. Having something push back on that, even imperfectly, is valuable. It’s not that AI is always right. It’s that it doesn’t get attached to the deal the way we do. It will call out things that a rep might not bring up or that a manager might miss if they’re trying to move quickly.
Where this really shows up is in the next step. A lot of deals don’t stall because of some major issue. They stall because the next step isn’t clear enough or strong enough to move the deal forward. “Follow up” isn’t a next step, and neither is “check in.” What I’m looking for is something specific: who is doing what, with whom, and what needs to happen for that step to be considered successful. Running that through AI forces a level of clarity that’s easy to skip otherwise.
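If you want a concrete starting point, the lenses above can be turned into a small, reusable prompt so you ask the same questions of every deal. This is just a minimal sketch of that idea; the wording of the questions and the function name are my own, not a prescribed template, and the resulting prompt can go to whichever model you happen to use:

```python
# Illustrative sketch: assemble raw deal material into a structured
# review prompt. The question list mirrors the lenses described above.

REVIEW_QUESTIONS = [
    "Who is actually involved, and who is really making the decision? "
    "Is coverage resting too heavily on one person?",
    "What is missing, what assumptions are being made, and where could "
    "this deal stall?",
    "Is the business case fully formed, and are the stakeholders aligned?",
    "What is the specific next step: who is doing what, with whom, and "
    "what needs to happen for that step to be considered successful?",
]

def build_review_prompt(raw_notes: str) -> str:
    """Turn raw deal material (notes, emails, call transcripts) into a
    structured deal-review prompt for a language model."""
    questions = "\n".join(
        f"{i}. {q}" for i, q in enumerate(REVIEW_QUESTIONS, start=1)
    )
    return (
        "You are reviewing a sales deal. Do not judge whether the deal is "
        "good or bad. Answer each question using only the material below, "
        "and say explicitly when the material does not answer it.\n\n"
        f"Questions:\n{questions}\n\n"
        f"Deal material:\n{raw_notes}"
    )
```

The point isn’t the code; it’s that the same questions get asked every time, regardless of how the deal feels.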
The net effect is pretty straightforward. The deals get cleaner. The conversations get more honest. And you spend less time circling around the same issues week after week.
I don’t think AI is going to replace salespeople. If anything, it’s going to make the gap between good and average a lot more obvious. The people who already think in a structured way will get faster and more effective. The ones who rely on instinct without structure will start to get exposed. For me, this is just a tool to make sure I’m staying on the right side of that.
If you’re experimenting with this kind of thing, I’d be interested to hear how you’re approaching it. Everyone seems to be figuring it out in their own way right now.