How Investors Are Rethinking AI Startup Diligence

The AI revolution is moving far faster than previous technological shifts. While the mobile internet took nearly a decade to reach 90 percent household adoption, ChatGPT achieved the same user penetration in just two years. This accelerated cycle is creating companies that reach incredible scale in record time, but it’s also rewriting the venture capital playbook. The traditional rules of SaaS investing are being challenged, and the moats we once relied on are becoming less defensible. Based on recent discussions my Eight Roads Ventures colleague, Michael Treskow, and I have had with our team, here are ten ways investors are rethinking AI startup diligence today.
1. Agents Are the Future — Not Just Co-pilots
The first wave of AI applications was dominated by “co-pilots” — tools that assist humans. The next, more powerful wave is characterized by “agents” — autonomous systems that complete tasks from beginning to end. These agents are transforming traditional “systems of record” into “systems of action.” As an investor, the key question goes beyond the earlier paradigm of “does this make a workflow more efficient?” Now, investors must ask, “can this automate the workflow entirely?” How (and to what degree) humans are involved will depend on the AI-use-case fit, enterprise risk appetite, and the existing workflow. As an example, Roo Code has multiple modes, from code mode to architect mode, based on customers’ specific needs. Early breakouts are already emerging in specialized fields like cybersecurity (penetration testing agents), DevOps (debugging agents), and financial services (memo generation agents), showing the power of vertical agents.
2. Traditional SaaS Moats Are Diminishing
The three defensive moats that defined the SaaS era are eroding:
Implementation Friction: In the past, the high cost and complexity of implementing enterprise software, especially in regulated industries, created stickiness. Today, AI agents can write code and automate implementation, drastically lowering switching costs.
Workflow Stickiness: SaaS used to be the system of record, deeply embedded in the enterprise workflow. But now that agents can perform the workflow end-to-end, the friction of migrating away is reduced.
Data Gravity: The effort of migrating data from one system to another created a powerful lock-in. Now, AI models can automatically ingest and structure data from various sources (including emails, calendars, and documents), making it far easier to populate a new system and thereby reducing the stickiness of the incumbent.
3. Enterprise Knowledge, Trust, and Observability Are the New Defensibility
With the underlying models increasingly turning into an API-accessible commodity, differentiation is shifting up the stack to the application layer. The most defensible companies are building new moats around enterprise knowledge, trust, and observability.
When considering workflow integration, investors must assess how deeply the product is embedded in a customer’s core business processes or, where forward-deployed engineering is involved, how well its agents have internalized the enterprise’s knowledge. Just like a service provider, the more an agent has absorbed the enterprise’s organizational and operational intricacies and preferences, the harder it is to replace. The second moat, centered on becoming a trusted, default partner, echoes an older sales and marketing principle: in a confusing market, enterprises are looking for a trusted guide to shape their AI strategy. The first vendor to gain a customer’s trust and become their “default” AI partner gains an immense advantage, with the ability to expand across the organization.
4. Product-Market Fit Is More Transient Than Ever
The low barrier to entry means that for any given problem, a dozen well-funded players can emerge almost overnight. This has made product-market fit (PMF) a potentially transient advantage. A company might find a temporary fit and grow to a few million in ARR, only to be outflanked by a competitor with a new feature or a slight improvement in the model. As an investor, you must constantly ask: is this PMF durable?
5. The “Incumbents Are Slow” Argument Is No Longer a Given
Two assumptions that once formed foundational pillars of venture investing have been turned on their heads: that incumbents will be slow to act, and that customers building in-house solutions will fail.
Incumbents now have access to the same powerful APIs as startups. And while cultural inertia at enterprises remains a challenge, the technical barrier to entry has been lowered, and the proprietary data they have accumulated over the years will give them a head start. Similarly, with modern orchestration tools like Thread, Onyx, or n8n, it’s becoming more feasible for customers to build their own bespoke AI agents in-house. A startup’s competition is no longer just other startups, but also its own customers and the very incumbents it aims to disrupt.
6. TAM May Increase, but Advantages Become Less Obvious Once Pricing Normalizes
A critical shift in the AI era is the expansion of the total addressable market (TAM) beyond traditional software budgets. AI companies can now tap into two distinct enterprise spending pools. “Co-pilot” models, which assist human users, are typically sold on a per-seat basis and compete for existing software budgets. Autonomous “agent” models complete workflows end-to-end, are sold on a per-outcome basis, and hold more transformative potential.
AI agents are positioned to capture a share of the much larger services budget, effectively replacing costs previously allocated to human labor or outsourced services. However, while the opportunity to capture the services budget is immense, it is not a blank check. As some founders have noted, many are generating eight-figure savings while charging customers six-figure prices. As agent-based solutions become more common, the price for automated labor will inevitably face downward pressure and normalize, meaning the initial advantage of charging rates comparable to human labor may not be sustainable long-term.
7. Team Composition Looks Different at AI Companies
AI-native companies are operating with unprecedented efficiency. While a company like Cursor can ride a strong PLG motion to $100M ARR with around 30 employees, most enterprise AI companies still need to build a GTM team to reach scale. In a confusing market with intense competition, where perceived product differentiation is limited, GTM makes all the difference. On the tech side, CTOs with an ML background are more essential at the foundation model and middleware layers than at the application layer. A Head of AI who stays on top of the latest feature releases and skates to the right opportunities is a valuable complement as the CTO scales the technical organization and infrastructure.
8. SaaS Metrics Still Matter, But in a Different Way
LTV/CAC is still relevant, but velocity matters more. The “Triple, Triple, Double, Double, Double” (T2D3) growth model for top-tier SaaS is being replaced by an even more aggressive trajectory. Some have suggested the new top-quartile benchmark is “Quintuple, Quadruple, Triple”: a company would grow from $1M to $5M, then $5M to $20M, then $20M to $60M over three years. While this velocity is exciting, it can also be misleading. Rapid adoption in a hot market doesn’t guarantee a large TAM or durable revenue. While there is no public churn benchmark for AI companies yet, we know some enterprise AI companies’ net revenue retention (NRR) at month 12 is well above 100 percent, compensating for logo churn: Glean at 120 percent, Writer at 160 percent, and Jasper for enterprise at 163 percent.
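To make the interplay between NRR and logo churn concrete, here is a minimal sketch using a purely hypothetical ten-customer cohort; the figures are illustrative and not drawn from any of the companies above.

```python
# Hypothetical 10-logo cohort: ARR per customer ($K) at month 0 and month 12.
start_arr = [100] * 10        # 10 logos at $100K each -> $1.0M starting ARR
end_arr = [0, 0] + [160] * 8  # 2 logos churn entirely, the other 8 expand by 60%

nrr = sum(end_arr) / sum(start_arr)                                      # 1280 / 1000 = 1.28
logo_retention = sum(1 for arr in end_arr if arr > 0) / len(start_arr)   # 8 / 10 = 0.8

print(f"NRR at month 12: {nrr:.0%}")            # 128%
print(f"Logo retention: {logo_retention:.0%}")  # 80%
```

The point of the sketch: expansion among retained accounts can push NRR well above 100 percent even while a meaningful share of logos disappears, which is why both numbers need to be read together.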
9. Scrutinize Gross Margins and Unit Economics
AI companies often have high compute and model inference costs. While we see margins improve over time, investors must be vigilant about how those costs are reported. As others have noted, companies may claim impressive gross margins even though a closer look at their P&L reveals millions in API calls and compute costs categorized under R&D. When those costs are re-categorized correctly, the gross margin can turn out to be negative. Investors must always dig into the P&L to understand the true cost of goods sold.
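As a minimal sketch of why this re-categorization matters, the simplified P&L below uses purely hypothetical figures and only illustrates the mechanics of moving inference spend from R&D into COGS.

```python
# Hypothetical, simplified annual P&L (in $M) for an AI application company.
revenue = 10.0           # annual revenue
reported_cogs = 3.0      # hosting and support booked as cost of goods sold
inference_in_rnd = 8.0   # API calls and compute booked under R&D instead of COGS

reported_margin = (revenue - reported_cogs) / revenue                     # 0.70
restated_margin = (revenue - reported_cogs - inference_in_rnd) / revenue  # -0.10

print(f"Reported gross margin: {reported_margin:.0%}")   # 70%
print(f"Restated gross margin: {restated_margin:.0%}")   # -10%
```

The same business can look like a healthy 70 percent gross margin software company or a negative-margin services business depending solely on where inference costs sit in the P&L.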
10. Customer Love Doesn’t Guarantee Retention. Product Usage Is the True PMF
In a normal market, a high net promoter score (NPS) is a strong signal of future retention, but not necessarily in the current AI landscape. Customers may unanimously love a product today, but the market is evolving so quickly that a better alternative may appear in six months. Many enterprises are intentionally building flexibility into their tech stacks to easily swap vendors, so founders and investors alike should beware of “vibe revenue.” Therefore, look beyond NPS to metrics like product usage, which is a leading indicator of retention. Beware of “stealth churn,” where customers keep paying but use the product less frequently, or for a shrinking share of their overall workflow.