The RevOps Knowledge Map
I went looking for a serious RevOps study guide — something built for senior operators, not people googling "what is RevOps" for the first time. I found certification courses that teach you to click buttons in HubSpot. I found thought leadership that explains frameworks without telling you what to do with them. I found career guides aimed at people considering the function.
What I did not find was a single document that says: here is every domain a RevOps leader should be able to speak to, honestly scored by what actually matters, with clear pass/fail gates so you know when you are done.
So I built it.
This is not RevOps 101. It assumes you have been doing the work — building pipelines, deploying systems, fighting with CRM data, explaining to a CRO why the forecast is wrong. The gap it closes is not knowledge. It is the gap between doing the work and being able to articulate it under pressure — in an interview, in a board meeting, in the first 30 days at a new company.
Most RevOps operators have done more than they can describe at the right altitude. They lead with the tools they used instead of the problems they solved. They say "we" when they mean "I." They can build the health scoring framework but stumble when asked to explain the Magic Number.
This guide fixes that.
How to Use This Document
Priority Scoring
Every block is scored P1 or P2. P1 blocks are core and decide the outcome; P2 blocks add supporting depth once the core is covered.
The Altitude Rule
The knowledge is the same at every level. What changes is the register. A manager explains how they calculated LTV:CAC. A VP explains what they did about it when the number was wrong. A Director describes the forecast process they built. A VP describes why they chose that approach over alternatives and what business outcome it drove. This guide teaches VP-level framing throughout. Lead with the business problem and the decision, not the build and the methodology.
CORE Blocks
Blocks marked P1 are the ones that decide whether you belong in the room. If you have limited time, start there.
Block Overview
| # | Block | Category | Priority | Hrs |
|---|---|---|---|---|
| 1 | SaaS Unit Economics | RevOps | P1 | 3–4 |
| 2 | Revenue Architecture & GTM | RevOps | P1 | 4–5 |
| 3 | Forecasting & Pipeline | RevOps | P1 | 3–4 |
| 4 | Customer Revenue Ops | RevOps | P1 | 3–4 |
| 5 | Revenue Infrastructure | RevOps | P2 | 3–4 |
| 6 | Demand Gen & Marketing Ops | RevOps | P2 | 3–4 |
| 7 | Sales Comp & Org Design | RevOps | P2 | 2–3 |
| 8 | Deal Desk & Quote-to-Cash | RevOps | P2 | 2–3 |
| 9 | Stakeholder Navigation | RevOps | P1 | 2–3 |
| 10 | AI-Powered Revenue Systems | AI | P1 | 6–8 |
| 11 | Story Bank | Performance | P1 | 3–4 |
| 12 | Opinion & Question Bank | Performance | P1 | 2–3 |
| 13 | Mock Interviews | Performance | P1 | 4 |
Total: ~42–51 hours. Priority 1 blocks alone: ~30–36 hours.
The 80/20 — If You Have 20 Hours, Not 45
Six blocks cover roughly 80% of what a CRO will probe. If time is short, these decide whether you get an offer:
| Block | Why It Makes the Cut | Hrs |
|---|---|---|
| 1 — Unit Economics | Table stakes. Fail here and the interview is over. | 3-4 |
| 2 — Revenue Architecture | Shared language with the CRO. Without this you cannot have the conversation. | 4-5 |
| 3 — Forecasting & Pipeline | The operational heartbeat. Every interview asks this. | 3-4 |
| 4 — Customer Revenue Ops | The differentiator. Most candidates cannot tell this story well. | 2-3 |
| 9 — Stakeholder Navigation | The VP vs. Director separator. | 2 |
| 10 — AI Systems | The edge. Most candidates have read about it but not shipped it. | 4-5 |
~18-23 hours for the knowledge that matters most.
If time opens up beyond that, triage the remaining blocks:
- Block 5 (Revenue Infrastructure) — Most operators already live this. One framing pass at VP altitude.
- Block 6 (Demand Gen) — Real gap for many. Learn enough to not go blank.
- Block 7 (Sales Comp) — Know the key numbers and have an honest answer. Thirty minutes.
- Block 8 (Deal Desk) — Appears on JDs. Know the framework and have a POV on where it belongs organizationally.
Performance blocks (11-13) are non-negotiable regardless of time.
SaaS Unit Economics
Highest embarrassment risk. VP-level interviewers assume this is baseline. If you hedge or guess on LTV:CAC or payback period, you signal you have not operated at this level. It does not matter how good your Salesforce architecture is if you cannot talk about the economics of the business it serves.
Resources
| Resource | Format | Time |
|---|---|---|
| David Skok — For Entrepreneurs blog (forentrepreneurs.com) | Blog — deep read | 2-3 hrs |
| Focus: SaaS Metrics 2.0, CAC, LTV, Payback Period, Churn | Reading | — |
| David Skok — Magic Number & Rule of 40 posts | Blog | 45 min |
| SaaStr Podcast — unit economics episodes (3-4 targeted) | Audio | 1-2 hrs |
Core Concepts
- LTV:CAC Ratio — Customer Lifetime Value divided by Customer Acquisition Cost. The fundamental efficiency metric. Healthy: 3:1 or higher. Below 1:1 means you are paying more to acquire customers than they are worth. Varies by stage — Series A tolerates lower ratios if growth is strong because the assumption is efficiency improves with scale. The core formulas in this block are worked in the sketch after this list.
- CAC Payback Period — How many months of gross margin it takes to recover the cost of acquiring a customer. Healthy: under 18 months for SMB, under 24 for mid-market. Longer payback = more cash required to grow. This is the metric CFOs care about most because it directly determines cash flow requirements.
- Magic Number — Net new ARR in a quarter divided by prior quarter sales and marketing spend. Measures GTM efficiency at the investment level. Above 0.75 = invest more in growth. Below 0.5 = fix efficiency before scaling. Between 0.5 and 0.75 = optimize while maintaining investment.
- Rule of 40 — Revenue growth rate + profit margin should exceed 40%. The benchmark for healthy SaaS at scale. A company growing at 50% with -15% margins scores 35 (below threshold). A company growing at 25% with 20% margins scores 45 (above threshold). Useful for comparing companies at different stages.
- Logo Churn vs. Revenue Churn — Logo churn counts customers lost. Revenue churn counts dollars lost. A company can lose 10 small logos but gain expansion revenue from retained accounts and show positive net revenue retention. Both matter — logo churn signals product-market fit issues, revenue churn signals economic risk. You need to track and explain both.
- GRR (Gross Revenue Retention) — Revenue retained from existing customers before expansion. Measures the health of the base without the flattering effect of upsells. Below 85% is a serious problem that no amount of new bookings can outrun.
- NRR (Net Revenue Retention) — Revenue retained including expansion. NRR above 100% means the installed base is growing without new logos. NRR above 120% is elite and changes the entire business model — the company grows even if new sales stopped tomorrow.
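The formulas above are simple enough to sanity-check in a few lines. A minimal sketch with hypothetical inputs of how the core ratios are typically computed; exact definitions (gross-margin-adjusted LTV, fully loaded CAC) vary by company, so treat this as one reasonable version rather than the canonical one.

```python
# Hypothetical inputs -- replace with your own numbers.
arpa = 12_000            # average annual revenue per account ($)
gross_margin = 0.80      # blended gross margin
annual_churn = 0.10      # annual revenue churn rate
cac = 15_000             # fully loaded cost to acquire one customer ($)

# LTV using the simple gross-margin / churn approximation.
ltv = arpa * gross_margin / annual_churn
ltv_to_cac = ltv / cac                                   # healthy: ~3:1 or better

# CAC payback: months of gross margin needed to recover CAC.
payback_months = cac / (arpa * gross_margin / 12)        # healthy: under 18-24 months

# Magic Number: net new ARR this quarter / prior-quarter S&M spend.
net_new_arr_q = 900_000
prior_q_sm_spend = 1_000_000
magic_number = net_new_arr_q / prior_q_sm_spend          # >0.75 invest, <0.5 fix first

# Rule of 40: growth rate + profit margin, in percentage points.
growth_pct, margin_pct = 50, -15
rule_of_40 = growth_pct + margin_pct                     # 35 -> below the 40 threshold

print(f"LTV:CAC {ltv_to_cac:.1f}, payback {payback_months:.0f} mo, "
      f"magic number {magic_number:.2f}, Rule of 40 {rule_of_40}")
```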
Must Be Able To
- Calculate and interpret LTV:CAC — state what a healthy ratio looks like at Series A vs. Series B
- Explain payback period in plain English to a CFO who is skeptical of your number
- Use Magic Number correctly — what it measures, what a healthy score looks like, why it matters for investment decisions
- Apply Rule of 40 to a real company scenario without hesitating
- Connect unit economics to RevOps decisions — why CAC efficiency affects territory design, headcount modeling, and pipeline targets
- Explain the difference between logo churn and revenue churn and why both matter
- Define GRR and NRR and explain why NRR above 100% is a fundamentally different business
Revenue Architecture & GTM Design
The shared vocabulary CROs use to talk about full-funnel revenue motion. Most have read Jacco van der Kooij or been through Winning by Design training. If you cannot speak this language, you cannot have the strategic conversation that gets you hired.
Resources
| Resource | Format | Time |
|---|---|---|
| Winning by Design — YouTube channel | Video | 2 hrs |
| Watch: ARR Waterfall, Bowtie Model, NRR framework, SPICED framework | Video | — |
| Blueprints for a SaaS Sales Organization — Jacco van der Kooij | Book — skim | 2-3 hrs |
| Read: Revenue architecture, coverage model, and segmentation sections only | Reading | — |
| Winning by Design blog — revenue architecture posts | Blog | 30 min |
Core Concepts
- The Bowtie Model — The full revenue lifecycle from first touch through expansion. Left side: awareness, education, selection (acquisition). Right side: onboarding, impact, growth (retention and expansion). RevOps owns the data, process, and measurement infrastructure across the entire bowtie, not just the left side. Most companies over-invest in acquisition infrastructure and under-invest in the right side — that is where revenue leaks.
- ARR Waterfall — The components of annual recurring revenue change: new ARR + expansion ARR - contraction ARR - churned ARR = net new ARR. Every board deck includes this. Know it cold. A company can book strong new ARR but show flat net new ARR if churn and contraction eat the gains. The arithmetic is worked in the sketch after this list.
- Coverage Model — How you assign accounts to reps. Named accounts (high-touch, expensive, best for enterprise ACV), territory-based (geographic or firmographic, scales better), round-robin (high velocity, works for SMB), hybrid (most common at mid-stage). The right model depends on ACV, sales cycle length, and GTM stage. Getting this wrong creates comp inequity and missed accounts.
- SPICED / MEDDIC — Deal qualification frameworks. SPICED: Situation, Pain, Impact, Critical event, Decision (Winning by Design). MEDDIC: Metrics, Economic Buyer, Decision criteria, Decision process, Identify pain, Champion. Both connect deal qualification rigor to pipeline integrity and forecast accuracy. Know at least one cold and explain how it enforces pipeline discipline.
- PLG vs. Sales-Led vs. Hybrid — Product-led growth (user acquires themselves, conversion happens in-product), sales-led (rep-driven from first touch), hybrid (PLG for initial adoption, sales for expansion/enterprise). The GTM motion fundamentally changes the RevOps data model, metrics, handoffs, and priorities. In PLG, RevOps tracks product usage signals and PQLs rather than MQLs. In hybrid, you need both motions running with different stage definitions. Early-stage companies are often figuring out which motion they are — expect this question.
- Market Segmentation — Standard B2B SaaS model: SMB (1-100 employees, high velocity, automation-first), Mid-Market (100-1000, multi-stakeholder, stage gates and forecast rigor), Enterprise (1000+, complex, long cycle, deal desk and legal workflow). Each segment requires different sales motions, coverage models, and RevOps infrastructure.
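The waterfall is just arithmetic, but being able to rattle it off matters. A minimal sketch with hypothetical quarterly numbers, which also shows how the same components drive GRR and NRR:

```python
# Hypothetical quarterly ARR movements ($).
new_arr         = 1_200_000   # bookings from new logos
expansion_arr   =   400_000   # upsell / cross-sell on the installed base
contraction_arr =   150_000   # downgrades
churned_arr     =   500_000   # lost customers

net_new_arr = new_arr + expansion_arr - contraction_arr - churned_arr
print(f"Net new ARR: ${net_new_arr:,.0f}")   # $950,000

# The same components, measured against the installed base, give GRR and NRR.
starting_arr = 10_000_000
grr = (starting_arr - contraction_arr - churned_arr) / starting_arr                   # 93.5%
nrr = (starting_arr + expansion_arr - contraction_arr - churned_arr) / starting_arr   # 97.5%
print(f"GRR {grr:.1%}, NRR {nrr:.1%}")
```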
Must Be Able To
- Draw or describe the bowtie model and explain where RevOps owns accountability at each stage
- Define and distinguish GRR vs. NRR and explain why NRR above 100% is a different business
- Use ARR waterfall vocabulary fluently: new ARR, expansion, contraction, churn, net new ARR
- Explain coverage model tradeoffs — when to use named accounts vs. territory vs. hybrid — at different ARR stages
- Describe SPICED or MEDDIC and explain how a deal qualification framework connects to pipeline integrity
- Articulate how revenue architecture decisions change from Series A to Series C
- Explain how a PLG motion changes RevOps priorities, data model, and metrics versus sales-led
Forecasting & Pipeline Operations
The operational heartbeat of RevOps. You will be asked about this in every interview. If you have built forecast processes and pipeline tracking, the work here is vocabulary and framework articulation. If you have not, this is deep study.
Core Concepts
- Forecast Categories — Commit (>90% confidence, rep stakes reputation), Best Case (60-80%, needs things to go right), Upside (30-50%, possible but not probable), Pipeline (early stage, not yet qualified). Every CRO uses some version of this. Know how you enforce the definitions — with observable evidence, not rep optimism.
- Weighted vs. Unweighted Pipeline — Weighted applies probability by stage (e.g., 40% at demo, 70% at proposal). Unweighted shows raw totals. Weighted is theoretically better but only as good as your stage definitions. Most early-stage companies have garbage stage data, making weighted forecasts misleading. Know when to use each and why.
- Pipeline Coverage Ratio — Total qualified pipeline divided by quota target. 3x is the common benchmark, but the right ratio depends on win rate, ACV, and sales cycle. If your win rate is 15%, you need more than 3x. If it is 40%, you need less. The point is knowing how to calibrate, not memorizing 3x. Coverage and weighted pipeline are worked in the sketch after this list.
- Stage Gates and Exit Criteria — The specific, verifiable conditions that must be met before a deal moves to the next stage. Not rep judgment — observable evidence. Example: "Discovery complete" means the SPICED fields are populated and validated, not that the rep feels good about the call. Enforceable gates are the difference between a pipeline that means something and one that does not.
- Pipeline Movement Tracking — Monitoring how deals move, stall, and regress across stages over time. More diagnostic than a snapshot. Reveals where deals die, how long they sit in each stage, and whether pipeline is actually progressing or just aging. Built from CRM edit history — every stage change timestamped.
- Forecast Accuracy — Measured as variance between called forecast and actual bookings. Track over time to build credibility. Common target: within 10% variance consistently. The goal is not to be right every quarter — it is to get more accurate over time and to know why you were wrong when you were.
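Weighted pipeline and coverage are worth being able to compute on a napkin. A minimal sketch with hypothetical deals; the stage weights are illustrative, not a recommendation, and are only as trustworthy as the stage definitions behind them.

```python
# Hypothetical stage probabilities -- only as good as your stage definitions.
STAGE_WEIGHTS = {"discovery": 0.10, "demo": 0.40, "proposal": 0.70, "negotiation": 0.90}

# (deal name, stage, amount $)
pipeline = [
    ("Acme",     "demo",         80_000),
    ("Globex",   "proposal",    120_000),
    ("Initech",  "discovery",    60_000),
    ("Umbrella", "negotiation",  90_000),
]

unweighted = sum(amount for _, _, amount in pipeline)
weighted   = sum(amount * STAGE_WEIGHTS[stage] for _, stage, amount in pipeline)

# Coverage ratio against a quota target; calibrate to win rate, not to "3x".
quota_target = 250_000
coverage = unweighted / quota_target

print(f"Unweighted ${unweighted:,.0f}, weighted ${weighted:,.0f}, coverage {coverage:.1f}x")
```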
Pipeline Review Philosophy
Most pipeline reviews are theater — reps narrate deal status and managers nod. A useful pipeline review is a diagnostic session that surfaces risk, validates next steps, and updates the forecast with evidence, not optimism.
- Focus on deals that moved, stalled, or regressed — not the full pipeline
- Ask for evidence of progress, not rep confidence
- Use data (last activity date, stakeholder count, stage duration) to challenge narratives
- End every review with updated commit and risk flags, not just a status report
Must Be Able To
- Walk through your forecast methodology and explain why you chose it
- Explain how you build a forecast process when CRM data is unreliable
- Describe your pipeline stage gates and what makes them enforceable vs. aspirational
- Articulate what makes a pipeline review useful vs. performative
- Explain pipeline coverage ratio and why the standard 3x benchmark is context-dependent
- Describe how pipeline movement tracking works and what it reveals that snapshots miss
The Politics of Forecast Ownership
Forecast accuracy is as much a trust and power problem as a process problem. The methodology is the easy part. What actually determines whether your forecast means anything is whether people tell you the truth — and that depends on what happens when they do.
- Who owns the number — In most orgs, the CRO calls the forecast and RevOps supports it. In healthier orgs, RevOps produces an independent data-driven view that the CRO can accept, adjust, or override — with the variance tracked over time. The independence matters. A RevOps forecast that always matches the CRO's call is not a forecast; it is a ratification. Your credibility is built by being willing to show a number the CRO does not like, with the data to back it up.
- When your data contradicts the rep's narrative — This happens constantly. The rep says a deal is closing this quarter. Your pipeline movement data shows it has been in the same stage for 60 days with no activity. The temptation is to defer to the rep — they are closer to the deal. The discipline is to flag the risk with evidence and let the CRO decide. You are not calling the deal; you are surfacing what the data says and making the risk visible. That is different, and the distinction matters in how you communicate it.
- Building credibility over time — Forecast credibility is earned by being consistently directionally right and transparently wrong when you miss. Track your called forecast against actuals every quarter. When you are off, write down why — bad stage data, a deal that surprised, a rep who sandbagged. The audit trail builds trust faster than accuracy does, because it shows you understand what happened and are fixing the underlying cause.
- The sandbagging negotiation — Experienced reps manage their forecast call strategically. They commit low and beat it to look like heroes. The countermove is not to call them out — it is to build stage progression data over time so you can show the CRO the pattern: this rep's commit lands at 130% of their call every quarter, therefore their current commit of $400K is really $520K. You are not accusing anyone; you are calibrating the signal.
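The calibration move in the last point is simple enough to show. A minimal sketch, assuming you have a few quarters of commit-versus-actual history per rep; the reps and numbers are hypothetical.

```python
# Hypothetical history: (commit at quarter start, actual bookings) in $.
history = {
    "rep_a": [(300_000, 390_000), (350_000, 455_000), (400_000, 520_000)],
    "rep_b": [(500_000, 450_000), (450_000, 430_000), (480_000, 460_000)],
}

def calibration_factor(commit_actuals):
    """Average ratio of actual bookings to committed forecast."""
    return sum(actual / commit for commit, actual in commit_actuals) / len(commit_actuals)

current_commits = {"rep_a": 400_000, "rep_b": 500_000}

for rep, commit in current_commits.items():
    factor = calibration_factor(history[rep])
    print(f"{rep}: commit ${commit:,.0f} x {factor:.2f} -> calibrated ${commit * factor:,.0f}")
```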
Customer Revenue Operations
The post-sale revenue architecture. Most RevOps leaders can talk about pipeline. Very few have built a customer-side signal architecture — health scoring, renewal operations, expansion triggers, and the VOC loop that connects them. This is where differentiation lives.
Core Concepts
- Customer Health Scoring — A composite score combining usage data, engagement signals, support ticket patterns, NPS/CSAT, and stakeholder engagement to predict renewal likelihood. The signal architecture matters more than the score itself — what inputs feed it, how often it updates, what actions it triggers. A health score that nobody acts on is a vanity metric. A minimal composite-score sketch follows this list.
- Renewal Process Design — The operational workflow from renewal identification through close. Includes: trigger timing (how far before renewal does the process start), ownership (CS, sales, or hybrid), data requirements (usage, health, contract terms), and escalation paths for at-risk renewals. Most renewal processes break at the CS-to-sales handoff.
- Expansion Motion Architecture — How you identify and execute upsell and cross-sell opportunities within the installed base. Includes: signal detection (usage thresholds, feature adoption, stakeholder growth), routing (CS-led vs. sales-led), and measurement (expansion ARR, expansion rate by segment). The companies with the best NRR have a deliberate expansion motion, not opportunistic upselling.
- CS Coverage Models — How you assign CSMs to accounts. Segmented by ARR, health score, lifecycle stage, or hybrid. The coverage model determines touch frequency, escalation thresholds, and the ratio of proactive vs. reactive engagement. Getting this wrong means your best CSMs are spending time on accounts that do not need them.
- NRR Mechanics — Net Revenue Retention is the output metric. The inputs are: GRR (base retention), expansion rate, contraction rate, and churn rate. RevOps owns the measurement infrastructure and the signal architecture that enables CS and sales to influence each component. Improving NRR requires knowing which input is the problem.
- VOC Intelligence Loop — Voice of Customer data collected systematically across all GTM touchpoints — call recordings, support tickets, NPS surveys, QBR notes — analyzed for patterns and routed to product, CS, and leadership. The loop closes when insights drive action, not just reporting. Most companies collect VOC data. Very few close the loop.
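A minimal sketch of the composite health score described above, with hypothetical signals, weights, and threshold. The point of the example is the shape: normalized inputs, explicit weights, and an action the score triggers.

```python
# Hypothetical weights -- tune against actual renewal outcomes, not intuition.
WEIGHTS = {"usage": 0.35, "engagement": 0.25, "support": 0.20, "nps": 0.20}

def health_score(signals: dict) -> float:
    """Weighted composite of normalized (0-1) signals; returns 0-100."""
    return 100 * sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

account = {
    "usage": 0.40,       # weekly active seats / licensed seats
    "engagement": 0.60,  # exec touchpoints in last 90 days, normalized
    "support": 0.30,     # inverse of escalated-ticket volume, normalized
    "nps": 0.50,         # latest NPS mapped to 0-1
}

score = health_score(account)
print(f"Health score: {score:.0f}")
if score < 50:   # a score nobody acts on is a vanity metric
    print("Trigger: alert the account owner and flag renewal risk")
```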
Must Be Able To
- Walk through your approach to customer health scoring — what signals you use, how you weight them, what actions they trigger
- Describe a renewal process design end to end, including where most processes break
- Explain the difference between CS-led and sales-led expansion motions and when each is appropriate
- Articulate how you built (or would build) a customer signal architecture from scratch
- Connect health scoring, renewal operations, and expansion motion into a coherent post-sale revenue system
- Explain NRR at the component level — what levers RevOps can pull to improve each input
Revenue Infrastructure
The operating system underneath everything else. CRM architecture, data governance, reporting design, tech stack decisions. Most experienced operators live this daily — the gap is articulation at VP altitude, not knowledge.
Core Concepts
- CRM Architecture Decisions — Object model design, custom vs. standard objects, data model for pipeline vs. customer lifecycle, integration points. The key question is not "which CRM" — it is "how do you design the data model to serve reporting, automation, and decision-making across the full revenue lifecycle?"
- Data Governance — How you maintain data quality without becoming the data police. Includes: required fields and validation rules, duplicate management, data enrichment strategy, hygiene automation, and the cultural component — making accurate data entry the path of least resistance for reps, not a burden imposed by ops.
- Reporting & Analytics Design — The difference between dashboards that get looked at and dashboards that drive decisions. Metric hierarchy: leading indicators (pipeline created, meetings booked) feed lagging indicators (bookings, revenue). Executive reporting should answer "what do I need to decide or act on" — not "what happened."
- Tech Stack Philosophy — Build vs. buy decisions, integration architecture, vendor evaluation criteria, stack consolidation. The VP-level question: how do you design a tech stack that serves the revenue process rather than the other way around? At early-stage SaaS, fewer tools configured well beats a full enterprise stack configured poorly.
- BI Tool Decisions — When spreadsheets are enough, when you need a BI layer (Looker, Tableau, Mode), and when embedded analytics in the CRM is the right answer. For most Series A-C companies: Salesforce reports and dashboards plus Google Sheets covers 80% of needs. Add a BI tool when you need cross-system data joins or self-serve analytics for non-technical stakeholders.
Must Be Able To
- Describe how you design a CRM data model to serve reporting and automation across the full revenue lifecycle
- Articulate your philosophy on data governance — how much is enough and how you enforce it
- Explain the difference between reporting that displays data and reporting that drives decisions
- Walk through a tech stack evaluation and build vs. buy decision with real tradeoffs
- Describe how you approach stack consolidation — what you cut, what you keep, how you manage the transition
- Have a POV on Salesforce vs. HubSpot for a Series B company and explain the reasoning
Demand Gen & Marketing Ops
The gap most likely to cost you roles at early/mid-stage SaaS, where RevOps often owns or co-owns the marketing ops function. The goal is to be deep enough to be credible, not to have operational depth.
Resources
| Resource | Format | Time |
|---|---|---|
| Chris Walker / Refine Labs — Revenue Vitals podcast | Audio | 2-3 hrs |
| Focus: Pipeline attribution, dark funnel, demand gen vs. lead gen distinction | Audio | — |
| HubSpot Marketing Blog — demand gen fundamentals | Blog | 1 hr |
| Terminus / Metadata.io blog — attribution model guides | Blog | 1-2 hrs |
| HubSpot Academy — Marketing Hub fundamentals (free, skim for vocab) | Course — skim | 1-2 hrs |
Core Concepts — Demand Gen Fundamentals
- MQL vs. SQL vs. PQL — Marketing Qualified Lead (meets scoring threshold), Sales Qualified Lead (sales accepted, meets discovery criteria), Product Qualified Lead (usage-based signal in PLG). Most MQL models are broken because they measure activity (downloaded whitepaper) not intent (requested pricing).
- Demand Gen vs. Lead Gen — Lead gen captures existing demand (gated content, paid search). Demand gen creates demand (ungated content, brand, community, events). The distinction matters because lead gen fills the top of funnel with low-intent leads while demand gen builds pipeline quality.
- The Dark Funnel — The buying activity that attribution tools cannot see — podcasts, word of mouth, Slack communities, social media lurking, peer conversations. At early-stage SaaS, the dark funnel often drives more pipeline than tracked channels.
- Lead Scoring — Assigning numerical values to leads based on demographic fit (title, company size, industry) and behavioral signals (page visits, content downloads, email engagement). Breaks down when scoring conflates activity with intent, or when the model is never recalibrated against actual conversion data.
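A minimal sketch of a fit-plus-behavior lead score, with hypothetical point values and threshold. The recalibration point above is the part that matters: weights like these should be re-derived from actual conversion data, not set once and forgotten.

```python
# Hypothetical point values for demographic fit and behavioral signals.
FIT_POINTS = {"title_match": 20, "company_size_icp": 15, "industry_icp": 10}
BEHAVIOR_POINTS = {"pricing_page_visit": 25, "demo_request": 40, "whitepaper_download": 5}

MQL_THRESHOLD = 60  # hypothetical; validate against downstream SQL conversion rates

def score_lead(fit: set, behaviors: set) -> int:
    fit_score = sum(FIT_POINTS[f] for f in fit)
    behavior_score = sum(BEHAVIOR_POINTS[b] for b in behaviors)
    return fit_score + behavior_score

lead = score_lead(
    fit={"title_match", "company_size_icp"},
    behaviors={"pricing_page_visit", "whitepaper_download"},
)
print(f"Score {lead}, MQL: {lead >= MQL_THRESHOLD}")
# Note the design choice: downloads (activity) are worth far less than a
# pricing-page visit (intent), which is exactly where most models go wrong.
```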
Core Concepts — Marketing Attribution
- Attribution Models — First-touch (all credit to first interaction), last-touch (all credit to converting action), linear (equal credit across all touches), U-shaped (40% to the first touch, 40% to the converting touch, 20% spread across the middle), W-shaped (adds opportunity creation as a third anchor), time-decay (more credit to recent touches). Multi-touch is theoretically correct but operationally messy at most small SaaS orgs. Three of these models are worked in the sketch after this list.
- Which Model to Actually Use at Early Stage — Most Series A–C companies do not have the data quality that multi-touch attribution requires. Field mapping is inconsistent, touch history is incomplete, and the dark funnel is invisible to tracking tools. The honest answer: first-touch plus self-reported is more actionable than any multi-touch model until your data infrastructure can support something more sophisticated. First-touch tells you what created awareness. Self-reported tells you what actually influenced the decision. Together they cover the two most important questions without pretending the middle of the funnel is measurable when it isn't. When someone asks your attribution POV in an interview, leading with this earns more credibility than listing all six models and calling it a day.
- Pipeline-Based Measurement — Measuring marketing by pipeline sourced and pipeline influenced rather than MQLs. More credible because it connects marketing activity to revenue outcomes. Pipeline sourced: marketing created the opportunity. Pipeline influenced: marketing touched the opportunity at some point.
- Self-Reported Attribution — Asking prospects "how did you hear about us?" on the demo request form. Often outperforms tool-based attribution at early stage because it captures dark funnel activity. Not mutually exclusive with tool-based — use both.
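A minimal sketch of how three of the models above allocate credit across the same touch history. The touches are hypothetical, and this is an illustration of the mechanics, not an endorsement of multi-touch.

```python
# Hypothetical touch history for one closed-won opportunity, in order.
touches = ["podcast_ad", "webinar", "pricing_page", "demo_request"]

def first_touch(ts):
    return {ts[0]: 1.0}

def linear(ts):
    return {t: 1 / len(ts) for t in ts}

def u_shaped(ts):
    """40% first touch, 40% converting touch, 20% spread over the middle."""
    credit = {t: 0.0 for t in ts}
    middle = ts[1:-1]
    credit[ts[0]] += 0.4 if middle else 0.5
    credit[ts[-1]] += 0.4 if middle else 0.5
    for t in middle:
        credit[t] += 0.2 / len(middle)
    return credit

for name, model in [("first-touch", first_touch), ("linear", linear), ("U-shaped", u_shaped)]:
    print(name, {k: round(v, 2) for k, v in model(touches).items()})
```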
Core Concepts — MAP Platforms
- Core MAP Capabilities — Email automation, lead nurturing workflows, lifecycle stage management, lead scoring, CRM sync. The marketing automation platform is the operational engine for demand gen execution.
- HubSpot vs. Marketo vs. Pardot — HubSpot: best for SMB through mid-market, strong UX, all-in-one. Marketo: enterprise-grade, complex, expensive, best for sophisticated multi-touch campaigns. Pardot (Salesforce Marketing Cloud Account Engagement): tight Salesforce integration, mid-tier complexity. Decision hinges on scale, existing CRM, and in-house technical capacity.
- Common MAP/CRM Sync Issues — Field mapping conflicts, duplicate records from multiple systems, lead-to-contact conversion logic, lifecycle stage misalignment between MAP and CRM, attribution data loss during sync. These are the operational headaches that RevOps owns even when marketing runs the platform.
- Centralize vs. Leave in Marketing — When RevOps should own marketing ops: small team with no dedicated marketing ops role, MAP and CRM need tight integration, or attribution and pipeline reporting need a single owner. When to leave it: marketing is large enough to have its own ops, or the demand gen motion is highly specialized.
Must Be Able To
- Explain MQL, SQL, PQL — definitions, handoff criteria, and why most MQL models break
- Describe the 6 main attribution models, their tradeoffs, and have a POV on which to use at early stage
- Discuss the dark funnel and explain why self-reported attribution matters
- Describe core MAP capabilities and common MAP/CRM sync issues without going blank
- Have a POV on when to centralize marketing ops under RevOps vs. leave it in marketing
- Articulate the pipeline-based marketing measurement model and why it is more credible than MQL counting
Sales Comp & Org Design
Sales compensation appears on every VP RevOps JD. You will not own it end to end — that sits with the CRO and Finance. You need fluency, not expertise. Org design gives you the framework for answering "how would you structure RevOps at our stage."
Core Concepts — Sales Compensation
- OTE (On-Target Earnings) — Total expected compensation when a rep hits 100% of quota. Composed of base salary + variable (commission). Standard splits: AE 50/50 or 60/40, SDR 70/30, CSM 70/30 or 80/20. The split reflects how directly the role controls revenue outcomes.
- Quota-to-OTE Ratio — The ratio of a rep's annual quota to their OTE. Healthy range: 4x-6x for AEs depending on ACV and sales cycle. A 5x ratio means a rep with $200K OTE carries a $1M quota. Below 4x is overly generous. Above 6x may signal unrealistic expectations.
- Accelerators — Increased commission rates that kick in when a rep exceeds quota. Typically at 100%, 110%, 125% thresholds. They exist to reward overperformance and prevent reps from coasting after hitting target. Poorly designed accelerators cause Q4 stacking or sandbagging. The payout math is worked in the sketch after this list.
- Draw vs. Guarantee — A draw is an advance against future commissions (rep pays it back from earned commissions). A guarantee is a fixed amount during ramp with no payback obligation. Guarantees are appropriate for new hires ramping; draws are for experienced reps in new territories.
- Common Failure Modes — Sandbagging driven by caps (reps hold deals for next period), Q4 stacking from poor accelerator timing, CSM churn from misaligned retention incentives, quota inflation disconnected from market reality, and comp plans that reward activity rather than outcomes.
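A minimal sketch of how OTE, the quota-to-OTE ratio, and tiered accelerators translate into payout. The plan parameters are hypothetical, and real plans carry far more edge cases (splits, clawbacks, SPIFs).

```python
# Hypothetical plan: $200K OTE at a 50/50 split, 5x quota-to-OTE ratio.
ote, split = 200_000, 0.5
variable_at_target = ote * split           # $100K variable at 100% of quota
quota = ote * 5                            # $1,000,000
base_rate = variable_at_target / quota     # 10% commission rate

# Accelerator tiers: (attainment floor, multiplier on the base rate). Uncapped.
TIERS = [(0.0, 1.0), (1.0, 1.5), (1.25, 2.0)]

def payout(bookings: float) -> float:
    """Commission with tiered accelerators applied per attainment band."""
    total = 0.0
    for i, (floor, mult) in enumerate(TIERS):
        ceiling = TIERS[i + 1][0] if i + 1 < len(TIERS) else float("inf")
        band = max(0.0, min(bookings, quota * ceiling) - quota * floor)
        total += band * base_rate * mult
    return total

for attainment in (0.8, 1.0, 1.3):
    print(f"{attainment:.0%} attainment -> ${payout(quota * attainment):,.0f}")
```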
Core Concepts — Org Design
- RevOps Org Scaling Model — 0-10M ARR: one RevOps generalist, prioritize CRM hygiene and basic pipeline reporting. 10-30M ARR: VP/Director plus Salesforce admin (2-3 headcount). 30-75M ARR: add sales ops analyst, consider splitting marketing ops (4-5 headcount). 75M+: functional specialization begins (sales ops, marketing ops, CS ops, BI).
- Territory Design Fundamentals — Carving approaches: geographic, vertical, firmographic (employee count, ARR range), named account, round-robin. Territory balancing prevents comp inequity. Whitespace analysis identifies untapped opportunity. ICP definition (firmographics, technographics, behavioral signals) informs all of the above.
Must Be Able To
- State standard base/variable splits by role and explain why they differ
- Define OTE, quota-to-OTE ratio, and what a healthy ratio looks like
- Explain accelerators, draws vs. guarantees, and 3 common comp plan failure modes
- Describe the RevOps org scaling model by ARR stage
- Discuss territory design approaches at a conceptual level
- Articulate honestly what you own vs. what you inform in comp design
Why Plans Break — The Mechanics
Generic failure modes are not enough. You need to be able to explain the causal chain — which plan design decision creates which behavior. This is what separates a VP who has lived comp design from one who has read about it.
- Caps cause sandbagging — When commission rates cap out at 150% of quota, reps do the math. A deal that closes in December and pushes them from 150% to 160% earns them nothing extra, while the same deal closed in January counts in full toward the new period and gives them a head start. So they hold it. The fix is uncapped accelerators with increasing rates — 1.5x at 100%, 2x at 125%, 3x above 150%. Make overperformance more valuable in the current period than saving it.
- Annual quotas cause Q4 stacking — When quota resets annually, reps who are behind in Q3 push everything to close in Q4, while reps who are ahead sandbag and defer deals to January. Both behaviors destroy forecast accuracy. Quarterly quotas with annual accelerator tiers fix the timing mismatch — reps have a reason to close every quarter, not just Q4.
- Misaligned CSM incentives cause churn — When CSMs are compensated on logo retention without expansion metrics, they protect relationships instead of driving value realization. A CSM who avoids hard conversations to protect their renewal number is actually accelerating churn — just on a delay. The fix is a composite metric: GRR as the floor, NRR as the upside. CSMs who expand accounts should earn materially more than those who just retain them.
- Quota inflation disconnects effort from reward — When quota is set by taking last year's number and adding 20% without anchoring to market capacity, territory size, or pipeline data, reps learn that quota is a fiction. They stop using it as a target and start gaming attainment optics instead. RevOps's role is to build the capacity model that grounds quota in reality — accounts, ACV, coverage, win rate — so the number is defensible, not aspirational.
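The capacity model in the last point is worth being able to sketch: quota should be derived from the territory's pipeline math, not from last year plus 20%. A minimal version with hypothetical inputs follows; ground each assumption in CRM history before using a number like this.

```python
# Hypothetical territory inputs -- ground these in CRM history, not optimism.
addressable_accounts = 600
reachable_per_year   = 0.40      # share of accounts a rep can genuinely work
opp_conversion       = 0.30      # worked account -> qualified opportunity
win_rate             = 0.25
acv                  = 70_000

opportunities = addressable_accounts * reachable_per_year * opp_conversion
capacity = opportunities * win_rate * acv          # defensible bookings capacity

suggested_quota = capacity * 0.8                   # leave headroom so the number is credible
print(f"Capacity ${capacity:,.0f}, suggested quota ${suggested_quota:,.0f}")
```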
Deal Desk & Quote-to-Cash
Deal desk appears on VP RevOps JDs with increasing frequency as companies move upmarket. Most RevOps operators have lived adjacent to it — they have felt the pain of a deal stuck in legal, a discount that set a bad precedent, or a quote that took three days to get approved. The gap is being able to describe the function, its failure modes, and your philosophy on how to structure it without the conversation going blank.
Core Concepts — Deal Desk
- What Deal Desk Actually Is — The operational function that manages non-standard deal requests: custom pricing, non-standard contract terms, unusual discounts, complex multi-product configurations, or exceptions to standard MSA language. Deal desk exists because most sales orgs cannot be trusted to make these decisions individually at speed without creating pricing inconsistency, legal exposure, or margin erosion. It is a control function that also enables deals — the best deal desks move fast and say yes with guardrails, not slow and say no by default.
- Approval Workflow Design — The core operating question: who needs to approve what, at what threshold, with what turnaround SLA. Standard model is tiered by discount depth and deal size — AE can approve up to 10% discount, manager up to 20%, VP sales up to 30%, CRO above that. Legal approval triggered by non-standard terms, not deal size alone. The trap is building approval chains that are longer than the sales cycle — a four-step approval for a $20K deal signals that the function is optimized for control, not revenue. A minimal routing sketch follows this list.
- Discounting Governance — Discount management is one of the highest-leverage RevOps interventions. Uncontrolled discounting compresses NRR (discounted customers churn at higher rates), destroys pricing integrity, and trains buyers to wait for quarter-end. Governance means: documented discount tiers, approval authority at each tier, visibility into discount patterns by rep and segment, and a feedback loop to pricing strategy when deals regularly require exceptions at a certain threshold.
- CPQ (Configure, Price, Quote) — Software that automates the generation of accurate quotes for complex or configurable products. Core capabilities: product catalog management, pricing rules and discount constraints, quote document generation, approval routing, and CRM integration. The RevOps role is not to run CPQ day-to-day — it is to own the data model, the approval logic, and the integration to CRM and billing. Common platforms: Salesforce CPQ (now Revenue Cloud), DealHub, Zuora, HubSpot Quotes for simpler use cases.
- Quote-to-Cash Flow — The full lifecycle from quote generation through contract execution, order management, billing, and revenue recognition. RevOps typically owns the front end (CPQ, contract templates, approval workflows) and hands off to Finance at order booking. The handoff is where most problems live — incomplete data, wrong billing terms, products mapped incorrectly — which means RevOps needs to own the data integrity across the seam even if they do not own the downstream systems.
- Contract Operations — Template management, redline tracking, signature workflow (DocuSign, Adobe Sign), and executed contract storage. At early stage, this is often ad hoc. RevOps brings structure: standard templates by deal type, a clear redline authority matrix, and a searchable contract repository that CS and Legal can access at renewal time. The most common failure: contracts get signed and filed in a folder nobody can find when the renewal conversation happens two years later.
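A minimal sketch of the tiered approval routing described above. The thresholds are hypothetical, and real matrices also branch on term length, payment terms, and non-standard legal language.

```python
# Hypothetical approval tiers: (max discount, approver). Legal is triggered by
# non-standard terms, not by discount depth.
DISCOUNT_TIERS = [(0.10, "AE"), (0.20, "Sales Manager"), (0.30, "VP Sales"), (1.00, "CRO")]

def required_approvals(discount: float, non_standard_terms: bool) -> list:
    approvers = []
    for threshold, approver in DISCOUNT_TIERS:
        if discount <= threshold:
            approvers.append(approver)
            break
    if non_standard_terms:
        approvers.append("Legal")
    return approvers

print(required_approvals(0.08, False))   # ['AE']
print(required_approvals(0.25, False))   # ['VP Sales']
print(required_approvals(0.35, True))    # ['CRO', 'Legal']
```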
When RevOps Owns Deal Desk vs. When Finance Does
At most Series A–B companies, deal desk does not formally exist — RevOps fills the gap informally. At Series C and beyond, it typically lives in one of three places: under RevOps (most common when the CRO has influence over the function), under Finance (when margin control is the primary concern), or as a standalone function reporting to the CRO. The RevOps case is strongest when deal velocity matters most — RevOps can optimize the approval workflow for speed without losing control. Finance tends to optimize for risk management, which slows deals. Know your POV on which model fits which stage.
Must Be Able To
- Describe what deal desk does and why it exists — in one sentence a CRO would recognize
- Walk through a tiered discount approval framework and explain the tradeoffs at each tier
- Explain what CPQ solves and when a company needs it vs. when a spreadsheet is enough
- Describe the quote-to-cash flow and where the RevOps/Finance handoff typically sits
- Articulate the downstream NRR impact of undisciplined discounting
- Have a POV on whether deal desk belongs under RevOps or Finance at Series B vs. Series C
Stakeholder Navigation & Change Management
The #1 thing that separates VP hires from Director hires. RevOps touches every team but controls none of them. Your ability to navigate politics, earn trust, drive adoption, and manage resistance is what determines whether your systems actually get used. These are not soft skills — they are the hardest part of the job.
Core Concepts
- Earning Credibility with the CRO — The first 30 days set the tone. Listen before prescribing. Diagnose before building. Deliver a quick win that solves a visible pain point before proposing a roadmap. The quick win earns the credibility to do the harder work.
- Driving CRM Adoption — The answer is never "mandate it." Make the CRM the path of least resistance: reduce required fields to what actually matters, automate what can be automated, make the data useful to reps (not just leadership), and create positive feedback loops where good data entry produces visible benefits for the rep, not just the manager.
- Managing Cross-Functional Resistance — Sales, CS, Marketing, and Finance all have different priorities and incentive structures. RevOps succeeds by finding shared metrics and building infrastructure that serves multiple stakeholders. When conflict is unavoidable, escalate with data and options, not opinions.
- Change Management for Process Rollouts — New process adoption follows a pattern: executive sponsorship (CRO must visibly support it), clear communication of "why" (not just "what"), training and enablement, enforcement with grace period, and measurement of adoption. Most process rollouts fail at the "why" step — people will follow a new process if they understand why it matters.
- The Political Reality of RevOps — You own a function that produces data people use to evaluate each other. Sales leaders will push back when your data shows problems. Marketing will dispute attribution. CS will challenge health scores. Your credibility depends on being rigorous, fair, and politically aware without being political.
Must Be Able To
- Describe how you earn credibility with a CRO in the first 90 days — with specific actions, not platitudes
- Explain your approach to driving CRM adoption without becoming the data police
- Tell a story about navigating cross-functional resistance where RevOps and another team disagreed
- Walk through how you would roll out a new process or system with high adoption probability
- Articulate the political dynamics of RevOps and how you navigate them
AI-Powered Revenue Systems
One block, four sub-sections. This covers the full lifecycle of building AI systems for revenue contexts — from architecture through production operation. The goal is not to learn tools. It is to develop the vocabulary and mental models that let you describe, design, and operate AI-orchestrated revenue systems. The gap in the market: most operators have used AI tools. Very few have built AI systems. That distinction is where differentiation lives.
Resources
| Resource | Format | Time |
|---|---|---|
| Anthropic — Building Effective Agents documentation | Docs | 2 hrs |
| Simon Willison blog (simonwillison.net) — agent patterns | Blog | 1 hr |
| Anthropic prompt engineering documentation | Docs | 1-2 hrs |
| Co-Intelligence — Ethan Mollick | Book | 2-3 hrs |
| Clay, Common Room, Unify, Gong Forecast product pages | Product pages | 1 hr |
Section A: Architecture & Orchestration
- AI Assistants vs. AI Agents — An assistant responds to a prompt and returns output (ChatGPT answering a question). An agent takes a goal, breaks it into steps, uses tools, and executes autonomously (an AI system that monitors Gong calls, detects risk signals, and routes alerts to the right CSM without human intervention). RevOps example: an assistant summarizes a call. An agent monitors all calls, scores risk, updates the health score, and triggers a Slack alert to the account owner.
- Pipeline Design (Trigger > Process > Output) — Every AI revenue system follows this pattern. Trigger: what initiates the workflow (new call recorded, weekly cron job, deal stage change). Process: what the AI does (analyze transcript, score sentiment, extract competitive mentions). Output: where the result goes (Slack alert, CRM field update, executive report). You should be able to draw this for any system you have built or would build. A minimal runnable sketch follows this list.
- Orchestration Patterns — Sequential (A then B then C), parallel (A and B simultaneously, combine results), conditional (if risk score > X, route to manager; else log and continue), human-in-the-loop (AI recommends, human approves). Most production revenue systems use conditional orchestration because not every signal requires the same response.
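A minimal sketch of the trigger > process > output pattern with conditional routing. The analysis, CRM, and Slack functions are hypothetical stand-ins, not any specific vendor API; in production the analysis step would call an LLM and the outputs would hit real integrations.

```python
# Hypothetical 3-node pipeline: trigger (new call) -> process (risk scoring) -> output (routing).
RISK_THRESHOLD = 0.7

def analyze_transcript(transcript: str) -> dict:
    """Stand-in for an LLM call that scores churn risk and extracts signals."""
    risky = any(kw in transcript.lower() for kw in ("cancel", "competitor", "budget cut"))
    return {"risk_score": 0.85 if risky else 0.2, "summary": transcript[:80]}

def update_crm_field(account_id: str, field: str, value) -> None:
    print(f"[CRM] {account_id}.{field} = {value}")            # stand-in for a CRM API call

def send_slack_alert(owner: str, message: str) -> None:
    print(f"[Slack -> {owner}] {message}")                    # stand-in for a webhook post

def on_new_call(account_id: str, owner: str, transcript: str) -> None:
    """Trigger: a new call is recorded. Process it, then route conditionally."""
    result = analyze_transcript(transcript)                               # process
    update_crm_field(account_id, "latest_call_risk", result["risk_score"])  # output (always)
    if result["risk_score"] >= RISK_THRESHOLD:                            # conditional orchestration
        send_slack_alert(owner, f"Risk {result['risk_score']:.2f}: {result['summary']}")

on_new_call("acct_001", "csm_jane", "They mentioned a budget cut and asked about a competitor.")
```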
Section B: Production Building
- System Prompt Architecture — Production prompts are not conversational prompts. They include: role definition, context injection (data the model needs), task specification, output format constraints, guardrails (what not to do), and examples. A production system prompt for revenue contexts must handle edge cases — empty transcripts, missing data, ambiguous signals — without hallucinating an answer. A template sketch follows this list.
- API Integration Patterns — Authentication (API keys, OAuth), webhooks (event-driven triggers), payload design (what data you send and receive), error handling (rate limits, timeouts, malformed responses), and retry logic. Know the full data flow for any pipeline you have built: what triggers it, what data moves where, what transforms happen, and where it breaks under load.
- Data Transformation Layer — The unsexy but critical layer between raw data and AI input. How you clean, structure, and route data before it hits a model. Includes: field extraction from API responses, data normalization, context window management (what fits, what gets truncated), and schema design for AI consumption.
- Workflow Automation Decisions — When to use Zapier (non-technical users, simple triggers, quick setup), Google Apps Script (Google Workspace native, moderate complexity, free), or direct API integration (full control, complex logic, requires engineering). The decision depends on who maintains it after you leave, complexity of the logic, and cost at scale.
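A minimal sketch of the system prompt structure described above, for a hypothetical call-risk summarizer. The sections are the point (role, context, task, output constraints, guardrails); the wording and fields are illustrative, not a production template.

```python
SYSTEM_PROMPT = """\
ROLE: You are a revenue-operations analyst summarizing sales call transcripts.

CONTEXT: Account {account_name}, {arr} ARR, renewal date {renewal_date}. The transcript follows the task.

TASK: Identify renewal risk signals, competitive mentions, and committed next steps.

OUTPUT FORMAT: Return JSON only, with keys:
  risk_score (0-1), risk_reasons (list of strings), next_steps (list of strings).

GUARDRAILS:
- If the transcript is empty or unreadable, return a null risk_score and empty lists.
- Quote the transcript for every risk_reason; do not infer signals that are not present.
- Never include customer PII beyond the account name in the output.
"""

def build_prompt(account_name: str, arr: int, renewal_date: str) -> str:
    """Inject account context into the template before each run."""
    return SYSTEM_PROMPT.format(
        account_name=account_name, arr=f"${arr:,}", renewal_date=renewal_date
    )

print(build_prompt("Acme Corp", 120_000, "2025-09-30"))
```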
Section C: Operations & Evaluation
- Measuring AI System Effectiveness — How do you know it is working? Define success metrics before building: alert accuracy rate, false positive rate, time saved per user, decisions influenced. Track prompt versioning — what changed, why, and what improved. Build feedback loops where users can flag bad outputs.
- Iteration Patterns — Prompt versioning (track what changed and measure impact), output sampling (review a random sample of outputs weekly), edge case collection (log the failures and use them to improve prompts), and A/B testing (run two prompt versions in parallel and compare quality). This discipline is what turns a demo into a production system.
- Cost Modeling — API token economics: what does each pipeline run cost, how does cost scale with volume, and when does a $200/month API bill become a $2,000/month problem. Build vs. buy at different scales — when a native Gong feature replaces your custom build, and when it does not.
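Token economics are simple multiplication, but doing the multiplication before you ship is the discipline. A minimal sketch with hypothetical per-token prices and volumes; check current model pricing rather than trusting these numbers.

```python
# Hypothetical per-token prices ($ per 1M tokens) -- verify against current pricing.
INPUT_PRICE_PER_M, OUTPUT_PRICE_PER_M = 3.00, 15.00

def cost_per_run(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Example pipeline: ~12K input tokens per call transcript, ~800 output tokens.
per_run = cost_per_run(12_000, 800)

for runs_per_month in (500, 5_000, 50_000):
    print(f"{runs_per_month:>6} runs/mo -> ${per_run * runs_per_month:,.0f}/mo")
```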
Section D: Governance & Change Management
- Getting Teams to Trust AI Output — The adoption challenge is not technical — it is cultural. Start with augmentation (AI assists human decision) not automation (AI replaces human decision). Show the work — make AI reasoning visible and auditable. Build trust incrementally with high-accuracy, low-stakes use cases before expanding scope.
- Security, Privacy & Compliance — PII handling in AI pipelines (what data goes to which model, what gets stored), data retention policies, vendor risk assessment for AI tools (where does your data go, who can access it, what are the terms). A CRO or CFO will ask about this — know the headlines, not the details.
- AI Tool Landscape — Know the current state of AI-native GTM tools: Clay (data enrichment and outbound automation), Common Room (community and product signal detection), Unify (multi-channel outbound orchestration), Gong Forecast (AI-assisted deal scoring), Clari (revenue intelligence and forecast). Know what problem each solves and where custom builds fit in the landscape.
Must Be Able To
- Draw a 3-node AI pipeline (trigger > process > output) and explain it to a CRO without mentioning tool names
- Explain the difference between an AI assistant and an AI agent using a RevOps example
- Write a production-grade system prompt for a revenue context from scratch
- Whiteboard the full data flow for a revenue AI pipeline including failure modes
- Articulate where AI genuinely fails in revenue contexts: hallucination risk, data quality dependency, interpretability
- Name 5 AI-native GTM tools and explain what problem each solves
- Answer "how are you using AI in your RevOps practice?" in a way that is specific, credible, and differentiated
The Performance Blocks
Knowledge without performance gets you to "seems smart." It does not get you an offer. This section converts knowledge into retrievable, pressure-tested delivery.
Story Bank — 8 Domains
Every story must follow this format: Situation (1 sentence) > What you specifically did (2-3 sentences, I not we) > Measurable result > What you would do differently or extend next.
The 8 Required Story Domains
| Domain | Story Prompt |
|---|---|
| Pipeline Architecture | Describe a pipeline structure or stage gate you designed. What problem did it solve? What did conversion or forecast accuracy look like before vs. after? |
| Forecasting | Walk me through how you built or improved a forecast process. How did you handle data quality issues? What was the accuracy outcome? |
| Tech Stack | Tell me about a significant systems decision — build vs. buy, integration design, or stack consolidation. What were the tradeoffs and how did you decide? |
| GTM Alignment | Describe a time you drove alignment across Sales, CS, and Marketing on a shared metric or process. What was the resistance and how did you navigate it? |
| Reporting & Intelligence | Tell me about a reporting system or executive deliverable you built. How did you ensure it drove decisions rather than just displaying data? |
| Customer / CS Ops | Walk me through your approach to customer health scoring or renewal operations. What signals did you use and how did it affect retention outcomes? |
| Data Integrity | Describe a data quality problem that was affecting revenue decisions. How did you diagnose it and what did you fix? |
| Built Something Novel | What is the most sophisticated or creative thing you built in RevOps? Explain the problem, the architecture, and why it mattered — to someone who has not seen it before. |
Story Quality Filter
Run every story through this before locking it. If it fails any check, it is not done.
| Check | Pass | Fail |
|---|---|---|
| Does it have a specific metric or measurable outcome? | Reduced forecast variance by 30% | Improved our process |
| Is "I" the subject, not "we"? | I designed and shipped... | We worked together to... |
| Can you tell it in under 2 minutes? | Timed at 90-120 seconds | Rambles past 3 minutes |
| Does it include what you would do differently? | Shows self-awareness and growth | Sounds like a press release |
| Does it demonstrate ownership, not just participation? | You made the call or drove the outcome | You were in the room when it happened |
Opinion Bank & Question Bank
The Opinion Bank
Strong VP candidates have convictions — not recited frameworks but actual POVs on what is broken and why. Prepare 4-5 genuine opinions. These come up in every senior interview.
- What is actually broken in most SaaS pipeline reviews — and what does a genuinely useful one look like?
- Where do most RevOps teams waste the most time with the least return?
- What is your philosophy on CRM data quality — how much is enough and how do you enforce it without becoming the data police?
- Where does AI genuinely help in RevOps right now vs. where is it still theater?
- What is your take on the MQL — is it still a useful construct or has it outlived its purpose?
- Where is the RevOps function actually oversold — what problems does it get credit for solving that it cannot actually fix on its own?
Honest Answer Templates
Three templates for your known gaps. Memorize the structure, personalize the content.
- Template 1 — No direct experience: "I don't have deep experience with [X]. Here's how I would think through it: [framework or first principles approach]. And here's what I would do in the first 30 days to close that gap: [specific actions]."
- Template 2 — Adjacent experience: "I have adjacent but not direct experience. At [company], I [adjacent thing you did]. The principles transfer — [explain how]. The gap is [honest gap]. Here's how I would close it: [specific plan]."
- Template 3 — Growth edge: "That's a growth edge for me. Here's the foundation I would build from: [what you do know]. And here's what I would need to learn or hire for: [honest assessment]."
Questions You Must Answer Cold
Foundational
- Walk me through how you think about RevOps — what does the function own and what does it enable?
- What is the first thing you do when you join a new company as VP RevOps?
- How do you prioritize when everything is on fire and your team is small?
- What does good look like for a weekly pipeline review?
Technical — RevOps Core
- How do you build a forecast process when the CRM data is unreliable?
- Walk me through your approach to pipeline stage gates and exit criteria.
- How do you think about the RevOps tech stack at a 30-person GTM org?
- What is your take on Salesforce vs. HubSpot for a Series B company?
- How do you approach territory design and quota setting?
Technical — Demand Gen & Marketing Ops
- How do you measure marketing's contribution to pipeline — what attribution model do you use and why?
- Where does the MQL break down and what would you replace it with?
- Walk me through how you would set up or audit a marketing automation platform.
- How do you align marketing and sales on lead definitions and handoff criteria?
Strategic & Leadership
- How do you get a Sales team to actually use the CRM the way you need them to?
- How do you earn credibility with a CRO in the first 90 days?
- Tell me about a time RevOps and Sales had a major disagreement — how did it resolve?
- How are you using AI in your RevOps practice right now?
- Where do you think RevOps is headed over the next 3 years?
Mock Interviews — 3 Sessions
The gap between "I know this" and "I can say this cleanly under pressure" is large and it only closes with reps. This block is entirely about performance, not study. Do not skip it or abbreviate it. Sequentially last — needs everything else loaded first.
Three Required Sessions
| # | Session Focus | Time |
|---|---|---|
| 1 | Story bank delivery — run all 8 stories out loud, recorded. Note every hedge, vague moment, or lost metric. Rewrite and repeat the weak ones. | 75 min |
| 2 | Technical and scenario questions — unit economics, attribution, forecasting, tech stack. Have someone push back on your answers. Do not let yourself get away with vague. | 60 min |
| 3 | Full simulation — 45-minute mock with a trusted peer playing the CRO or VP role. No notes. Debrief immediately after on the 3 weakest moments. | 75 min |
Debrief Protocol
- After each session, identify the 3 weakest moments — where you hedged, went vague, or lost the thread
- Rewrite those answers immediately while the discomfort is fresh
- Record yourself delivering the rewritten version and listen back
- Repeat until the answer is clean, owned, and under the time limit
The Performance Standard
You are ready when you can deliver any story, or answer any question from the question bank, without notes, with a specific metric, in under 2 minutes, and without flinching at a CRO-level follow-up.
Quick Reference
Scan this before you walk in. Not a substitute — a retrieval trigger for everything you already studied.
- LTV:CAC: 3:1+ healthy. Below 1:1 = losing money.
- Payback: Under 18mo SMB, 24mo mid-market.
- Magic Number: Above 0.75 = invest. Below 0.5 = fix.
- Rule of 40: Growth + margin > 40%.
- GRR/NRR: GRR < 85% = problem. NRR > 120% = elite.
- Bowtie: Acquisition left, retention right. RevOps owns both.
- ARR Waterfall: New + expansion - contraction - churn.
- Coverage: Named, territory, round-robin, hybrid.
- PLG vs. Sales-Led: Changes everything — data model, metrics, handoffs.
- Categories: Commit >90%, Best Case 60-80%, Upside 30-50%.
- Stage Gates: Verifiable criteria, not rep judgment.
- Reviews: Diagnostic, not theater. Evidence over narrative.
- Health Scoring: Architecture > score. What feeds it, what it triggers.
- Renewals: Break at CS/sales handoff.
- NRR Levers: GRR + expansion - contraction - churn.
- CRM: Data model serves decisions.
- Governance: Path of least resistance.
- Reporting: "What do I decide" > "what happened."
- Attribution: 6 models. Multi-touch right in theory, messy in practice.
- Dark Funnel: Self-reported catches what tools miss.
- MAP: HubSpot/Marketo/Pardot. Sync issues are your problem.
- Splits: AE 50/50, SDR 70/30, CSM 70/30. Quota-to-OTE 4x-6x.
- Failures: Sandbagging, Q4 stacking, misaligned CSM incentives.
- Org: 0-10M = 1. 10-30M = VP + admin. 75M+ = specialize.
- Deal Desk: Non-standard deals need a control function that enables velocity, not kills it.
- Discounting: Tiered approvals by depth. Undisciplined discounting compresses NRR.
- CPQ: Automates quote accuracy and approval routing. Own the data model, not the day-to-day.
- Q2C: RevOps owns the front end. Handoff to Finance at booking. Data integrity across the seam.
- Trust: Listen > diagnose > quick win > roadmap.
- Adoption: Path of least resistance. Useful to reps.
- Rollouts: Sponsor > why > training > enforce > measure.
- Agent vs. Assistant: Responds vs. executes autonomously.
- Pipeline: Trigger > Process > Output.
- Tools: Clay, Common Room, Unify, Gong Forecast, Clari. (Verify current landscape before interviews.)
- Stories: 8 domains. Under 2 min. "I" not "we." Metric included.
- Opinions: 6 prompts. Lived, not manufactured. Include the hard one.
- Mocks: 3 sessions. Record. Debrief 3 weakest. Repeat.