
October 19, 2025

I watched a PMO director in Chicago nearly melt down during a Q3 steering committee. The executive team had just announced that, effective immediately, everything would be “Agile”—apparently after reading a glowing post about Netflix. Half the portfolio was regulatory work with fixed deliverables (SOX controls, vendor due-diligence updates), the teams had zero Agile experience, and the CEO still wanted a 180-day Gantt. You can guess the rest.
Three months later: schedules slipped, standups turned into status meetings, auditors were unimpressed, and that PMO director started taking recruiter calls. I don’t blame him.
The pattern is common: pick a methodology by fashion or by memo, not by fit.
What You’re Actually Choosing
Waterfall isn’t a fossil. If you’re shipping a medical device, you need documented phases, traceability, and sign-offs—think 21 CFR 820, a Design History File, and a requirements–verification trace matrix. Tell the FDA you’ll “pivot based on learnings” and you’ll be writing CAPAs for months.
“Agile,” meanwhile, has become a label for anything with a daily standup. If you freeze scope up front, never put increments in front of users, and run 12-week “sprints,” that’s Waterfall with a standup. Real Agile (Scrum, Kanban, XP) assumes uncertainty and requires tight feedback loops, working increments, product ownership with teeth, and teams that can self-organize. The ceremonies are easy; the mindset is not.
Hybrid approaches aren’t a cop-out; they’re reality. Most portfolios blend predictable work (e.g., ERP patching, data-center moves) with discovery work (new product features). Your framework should handle both without forcing everything through one pipe.
The Questions That Actually Matter
1) Do you truly know the end state?
Replacing a known legacy module? Great—define phases, lock scope, map verification to requirements, and go.
Exploring a new customer-facing feature with unknown desirability? Front-loading a 200-line WBS is expensive fiction. Plan thinly, ship small, learn fast.
2) What does compliance require?
If you operate under FDA, SOX, HIPAA, PCI DSS, or internal audit gates, you may need documented approvals and stage exits. You can still be iterative—just make your increments auditable and your controls explicit (definition of done includes evidence, not just code).
3) Can your teams actually run the model?
Scrum needs backlog hygiene, a capable Product Owner, and cross-functional teams. If your devs, QA, and ops live in different cost centers and all dependencies go through tickets, start with Kanban-on-top-of-ops or a staged hybrid. Train first; don’t throw juniors into the deep end and call it “empowerment.”
4) What does leadership expect to see?
If your CEO needs a 6-month plan with budget burn and critical path, Agile will feel like heresy unless you translate it: product roadmaps, release forecasts, burn-up, and probabilistic delivery ranges. If leadership truly tolerates change, stop pretending you can predict Q4 in January.
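If “probabilistic delivery ranges” sounds like hand-waving, it doesn’t have to be. One common way to produce them is a throughput-based Monte Carlo: resample your team’s historical weekly throughput against the remaining backlog and report percentiles instead of a single date. Here’s a minimal sketch in Python; the throughput history, backlog size, and percentiles are illustrative placeholders, not data from any real project.

```python
# Minimal throughput-based Monte Carlo forecast (illustrative numbers only).
import random

weekly_throughput_history = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]  # items finished per week
backlog_size = 60          # items remaining for the release
simulations = 10_000

weeks_needed = []
for _ in range(simulations):
    remaining, weeks = backlog_size, 0
    while remaining > 0:
        remaining -= random.choice(weekly_throughput_history)  # resample a past week
        weeks += 1
    weeks_needed.append(weeks)

weeks_needed.sort()
for pct in (50, 85, 95):
    idx = int(len(weeks_needed) * pct / 100) - 1
    print(f"P{pct}: done within {weeks_needed[idx]} weeks")
```

Handing leadership “P85: done within 16 weeks, based on our last ten weeks of throughput” lands better than a single date you already know you’ll miss.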
How It Works When It Works
The best PMOs I know don’t crown a single methodology. They set a portfolio framework and let projects pick from a small, supported set.
- Infrastructure/rollouts: Waterfall or stage-gate with clear change control. You don’t “iterate” a data-center migration cutover.
- New product work: Scrum or dual-track Agile (discovery + delivery) with feature flags and telemetry.
- Ops/process improvement: Kanban with WIP limits, service classes, and aging-in-WIP metrics (see the sketch after this list).
- Regulated builds: Hybrid—Agile increments inside phases, with documented stage exits and verification packs.
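About that aging-in-WIP metric: it’s just how long each in-progress item has been sitting in its current column, surfaced before it quietly becomes next quarter’s escalation. A minimal sketch, assuming you can export each item with the date it entered its column; the ticket keys, column names, and the 10-day flag threshold are made up for illustration.

```python
# Minimal aging-in-WIP report (keys, columns, and threshold are illustrative).
from datetime import date

today = date(2025, 10, 19)
in_progress = [
    {"key": "OPS-101", "column": "In Progress", "entered": date(2025, 10, 1)},
    {"key": "OPS-107", "column": "Review",      "entered": date(2025, 10, 15)},
    {"key": "OPS-112", "column": "In Progress", "entered": date(2025, 9, 25)},
]

for item in sorted(in_progress, key=lambda i: i["entered"]):
    age = (today - item["entered"]).days
    flag = "  <-- investigate" if age > 10 else ""
    print(f'{item["key"]:8} {item["column"]:12} {age:3d} days in column{flag}')
```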
The point isn’t purity; it’s fit.
When You Get It Wrong (and you will)
Expect a dip. Some of that is normal change cost; some is your signal to adjust. Watch for specifics:
- Teams spend >20% of time producing artifacts no one reads (Jira fields for the sake of fields).
- Audits fail on evidence, not outcomes (no trace from requirement → test → result).
- “Sprint reviews” don’t show working software, just slide decks.
- Dependencies stall every iteration (test environments shared across five teams).
- Your top ICs opt out of ceremonies to “get real work done.”
If these persist after two or three iterations/releases, change the method—or the constraints around it. Methodology isn’t doctrine.
What Actually Matters
Stop asking if you’re “doing Agile right” or if your PRINCE2 artifacts are museum-quality. Ask:
- Are we shipping the right things sooner?
- Can we explain our plan, uncertainty, and evidence to executives and auditors without theater?
- Are teams getting more effective and less burned out?
Start with one project. Choose a method intentionally. Make success criteria explicit (time to first value, defect escape rate, stakeholder NPS, audit findings closed on first pass). Inspect, adapt, try again. (Yes, I learned this the hard way.)
A Minimal, Practical Playbook
- Classify projects by uncertainty and constraint. (Known scope vs. discovery; regulated vs. unregulated.)
- Offer two or three supported paths (e.g., Stage-Gate, Scrum, Kanban) with templates, tooling, and training.
- Define a translation layer for executives: roadmaps, quarterly outcomes, forecast ranges, and one page per project that a CFO can read.
- Bake in evidence. Definition of Done includes tests, logs, and approvals where required.
- Measure a few real metrics: lead time, throughput, on-time to forecast range, escaped defects, team health. Kill vanity charts. (A minimal sketch of the first two follows this list.)
- Run quarterly retros at the portfolio level and adjust the framework—not just the teams.
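On the metrics bullet: you don’t need a BI platform to start measuring lead time and throughput; a few lines over your tracker’s export will do. A minimal sketch, assuming each item carries a created date and a done date; the field names and sample dates are placeholders for whatever your tool actually exports.

```python
# Minimal lead-time and throughput calculation over exported work items
# (field names and dates are placeholders).
from datetime import date
from collections import Counter
from statistics import median

items = [
    {"created": date(2025, 9, 1),  "done": date(2025, 9, 12)},
    {"created": date(2025, 9, 3),  "done": date(2025, 9, 20)},
    {"created": date(2025, 9, 10), "done": date(2025, 9, 24)},
]

# Lead time: calendar days from request to done, summarized by the median.
lead_times = [(i["done"] - i["created"]).days for i in items]
print("median lead time (days):", median(lead_times))

# Throughput: items finished per ISO week.
throughput = Counter(i["done"].isocalendar()[1] for i in items)
print("items finished per ISO week:", dict(throughput))
```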
If you deliver valuable outcomes without grinding people down, no one will care which methodology was on the whiteboard. They’ll ask when you can do it again—and that’s the right question.

