Jul 3, 2025

How a Private Equity Firm Avoided a $2M AI Mistake: The Due Diligence Playbook That Actually Works (Melania Calinescu)


Why "autonomous" AI often isn't—and how to uncover what's really happening behind the flashy demos

"We have autonomous coding that reads medical codes without human supervision."

That's what the healthcare technology startup told the private equity firm. The demo looked impressive. The market opportunity was huge. Everything seemed perfect for a significant investment.

Then Melania Calinescu, PhD mathematician and founder of AI4ALL Solutions, started asking questions.

Within two weeks, she'd uncovered that the "autonomous" system actually required human review for every complex case—and one engineer determined what qualified as "complex." If he went on vacation, the system essentially stopped working.

The result? The PE firm still invested, but at a significantly reduced valuation after understanding the real technical capabilities and hidden risks.

"Show me the proof," Melania says. "Very often they get to see the flashy demo: something that works in a very well-defined environment. When you take that and put it into real-life, production-ready scale, it very often falls apart."

Here's her complete due diligence playbook for seeing past AI hype and uncovering what's actually happening.

The Problem: AI Marketing vs. AI Reality

Melania has seen this pattern countless times. Companies claim revolutionary AI capabilities, but when you dig deeper, you find:

  • "Autonomous" systems that require constant human intervention
  • "AI-driven" revenue that's actually generated by traditional processes
  • Critical dependencies on individual employees that create massive operational risk
  • Demos that work perfectly in controlled environments but fail in real-world complexity

"There's so much hype," Melania explains. "Executives and investors are implementing AI tools right now, or throwing money at everything that is AI-first, looking at flashy demos left and right and not really understanding what the technology is."

The healthcare case perfectly illustrates this disconnect. The company genuinely had AI technology, but the gap between their marketing claims and operational reality was enormous.

Melania's 4-Step AI Due Diligence Framework

Step 1: The Purpose Reality Check 🧭

Start with the fundamental question: What specific customer or business problem does this AI actually solve?

This sounds obvious, but Melania finds it's often overlooked. "Bringing AI into an organization and building it into a product or service should start from the question: what business problem are we solving with AI, not vice versa."

Key questions to ask:

  • What customer problem does this solve that couldn't be solved before?
  • How do customers actually use this AI feature?
  • What percentage of customers use the AI functionality vs. other features?

Red flags:

  • Vague answers about "efficiency" or "automation"
  • AI features that seem bolted onto existing products
  • Claims that don't connect to specific customer pain points

Step 2: The Revenue Reality Check 💰

Critical insight: Determine how much revenue genuinely comes from AI-driven automation vs. manual human processes.

This is where Melania uncovered the healthcare company's real situation. They had steady revenue, but most of it came from traditional services, not the "autonomous" AI they were promoting.

Melania's approach: "Of all the revenue the company was bringing in, how much of that was created directly by AI, versus just the overall product?"

Questions to investigate:

  • What percentage of revenue is directly attributable to AI features?
  • How would revenue change if the AI component were removed?
  • Are clients paying specifically for AI capabilities or overall service?

In the healthcare case: The company had reliable clients using their system, but the AI component wasn't driving the business value they claimed. The real moat was their proprietary dataset, not their AI algorithms.
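The revenue questions above reduce to simple arithmetic once you can get revenue broken out by line. A minimal sketch of that calculation (all figures and line items are hypothetical, not from the healthcare case):

```python
# Hypothetical revenue lines for a product under diligence.
# The "ai_driven" flag marks revenue the company attributes
# directly to AI features rather than traditional services.
revenue_lines = [
    {"source": "traditional coding services", "amount": 4_200_000, "ai_driven": False},
    {"source": "platform subscriptions",      "amount": 1_500_000, "ai_driven": False},
    {"source": "autonomous coding feature",   "amount":   300_000, "ai_driven": True},
]

def ai_revenue_share(lines):
    """Fraction of total revenue directly attributable to AI features."""
    total = sum(line["amount"] for line in lines)
    ai = sum(line["amount"] for line in lines if line["ai_driven"])
    return ai / total

share = ai_revenue_share(revenue_lines)
print(f"AI-attributable revenue: {share:.1%}")  # AI-attributable revenue: 5.0%
```

A number like 5% against a pitch deck claiming "AI-first revenue" is exactly the kind of gap this step is designed to surface.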

Step 3: Proof Over Promises 🔍

Melania's rule: Go beyond slick presentations. Require real-world demonstrations of automation at scale.

"Without that intention, it's just sort of like a crystal ball," she warns. "You're like, well, I think... I hope the benefits will come."

How to get proof:

  • Request demonstrations using your actual data or use cases
  • Ask for references from current customers using the AI features
  • Require metrics from production environments, not controlled demos

The healthcare investigation: Initially, the company claimed their feedback loop was "randomly done, like regular quality control." But when Melania spoke directly to their AI engineer, she discovered every complex case required human review—and the definition of "complex" was determined by one person.

Questions that reveal truth:

  • Can you show me this working with messy, real-world data?
  • What happens when your AI encounters edge cases it hasn't seen?
  • How often do humans need to intervene in the "automated" process?

Step 4: Single-Point Risk Assessment ⚠️

Critical discovery: Check for hidden dependencies that could severely disrupt operations.

This is where Melania found the healthcare company's biggest vulnerability: one engineer controlled the algorithm that determined which cases needed human review.

"If he went on holiday, the system didn't necessarily work, which was a huge risk, because a one-person dependency and a claimed fully autonomous system are not equal."

Dependencies to investigate:

  • Key personnel whose absence would disrupt AI operations
  • Critical data sources that could be lost or restricted
  • Third-party APIs or services the AI relies on
  • Custom code or models that only certain people understand

Questions to ask:

  • What happens if your lead AI engineer leaves?
  • How many people can modify or troubleshoot the AI system?
  • What external dependencies could break your AI functionality?
  • Do you have documented processes for AI system maintenance?
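The personnel questions above amount to a bus-factor check: which components can only one person maintain? A minimal sketch, using an entirely hypothetical component-to-owner mapping:

```python
# Hypothetical mapping from AI system components to the people who can
# maintain them. Component names and owners are illustrative only.
component_owners = {
    "complexity-routing algorithm": ["engineer_a"],
    "coding model training":        ["engineer_a", "engineer_b"],
    "data pipeline":                ["engineer_b", "engineer_c"],
}

def single_point_risks(owners):
    """Return components only one person can maintain (bus factor of 1)."""
    return [comp for comp, people in owners.items() if len(people) == 1]

print(single_point_risks(component_owners))  # ['complexity-routing algorithm']
```

In the healthcare case, the equivalent of the "complexity-routing algorithm" line is what Melania found: the one piece that decided whether a case needed human review was owned by a single engineer.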

Real-World Application: The Healthcare Case Study

Here's exactly how Melania's framework uncovered the truth:

The Claim: "Autonomous coding" that automatically interprets medical codes without human supervision.

The Investigation:

  1. Purpose Check: ✅ Clear customer problem (medical coding errors are expensive and common)
  2. Revenue Check: ⚠️ Revenue came from overall platform, not specifically AI features
  3. Proof Check: ❌ Complex cases still required full human intervention
  4. Risk Check: ❌ One engineer controlled the entire automation decision algorithm

The Reality: The AI handled simple, straightforward cases automatically. But complex procedures—which would benefit most from automation—still required human coding. The system was partially automated, not autonomous.

The Outcome: Instead of walking away completely, the PE firm negotiated a significantly lower valuation that reflected the actual technical capabilities rather than the marketing promises.

Why This Approach Works for Any AI Investment

Melania's framework isn't just for private equity firms. It works for any organization evaluating AI solutions because it focuses on operational reality rather than marketing claims.

For SMBs considering AI tools:

  • Use the same questions to evaluate vendor claims
  • Demand proof with your specific use cases
  • Understand exactly what parts require human intervention

For enterprises doing M&A:

  • Apply this framework to technical due diligence
  • Focus on dependencies that could disrupt operations
  • Separate AI-driven value from traditional business value

For anyone investing in AI projects:

  • Map claimed benefits to actual business outcomes
  • Identify single points of failure before they become problems
  • Require measurable proof, not just impressive demos

The Most Important Question to Ask

If you remember nothing else from Melania's playbook, remember this: "Show me the proof."

"Being able to get to that level of 'let's do a trial, let's see how this works for my particular case,'" Melania emphasizes. "Because another thing a lot of AI tools are doing today is selling one-size-fits-all tools, which are hard to make work well because the data is very different."

Don't accept generic demos. Don't trust controlled environments. Demand proof that the AI solution works with your specific data, your specific complexity, and your specific constraints.

The Bottom Line

AI due diligence isn't about being skeptical of AI—it's about being realistic about capabilities. The healthcare company Melania investigated actually had valuable technology and a solid business. But understanding the gap between claims and reality allowed for appropriate valuation and realistic expectations.

As Melania puts it: "Fundamentally, it's just a collection of very sophisticated mathematical models." The magic isn't in the AI itself—it's in applying it appropriately to solve real business problems.

Stop accepting flashy demos at face value. Start demanding proof that connects AI capabilities to actual business outcomes. Your investment decisions—and your business results—will be dramatically better.

Want more practical AI strategy insights? This is part of our series on real-world AI implementation lessons from enterprise leaders.

Stuart Willson is the founder of Just Curious, a platform dedicated to helping SMB leaders practically adopt AI to enhance growth, margins, and efficiency.
