You have 17 monitoring tools, 4 security platforms, 3 compliance workflows, and a quarterly governance review that produces a slide deck nobody references after the meeting. You still can’t answer one question: “Are we resilient?”
That’s not a technological gap. It’s a structural one. And it’s costing your enterprise more every quarter in delayed resolution, duplicated effort, compliance scrambles, and leadership decisions made on incomplete data.
This checklist exists to help you assess, honestly and internally, whether your enterprise is ready to operate as a unified resilience function, or whether you’re still managing infrastructure, security, and compliance as separate problems that keep colliding on a bridge call.
Go through these 20 questions with your leadership team. Score yourselves. A “No” isn’t a failure; it’s a gap you now have visibility into. And visibility is exactly what a Resilience Operating Centre is designed to deliver.
Scoring: Yes = 2 | Partially = 1 | No = 0 (20 questions, maximum score 40)
Governance & Strategy
Your tools are only as good as the decision-making structure sitting above them. If your resilience strategy lives in a slide deck that nobody references after the Board meeting, everything downstream — teams, tools, processes — is running without a compass.
- Has your C-suite committed to resilience as a unified business priority, not just a cybersecurity budget line?
Most Boards have signed off on cybersecurity spending. Fewer have committed to resilience as a combined discipline covering operations, security, and business continuity together.
If your CEO hears “resilience” and thinks “that’s the CISO’s problem,” you don’t have executive alignment; you have a security budget with no operational mandate. A ROC requires the Board to treat resilience the way it treats revenue: as something that crosses every function and reports to the top.
- Have you defined what your ROC actually needs to cover and how it will operate?
“We need a ROC” is not a scope statement. Will it be insourced, outsourced, or hybrid? Will it cover all group companies or start with critical business units?
If you haven’t answered these questions specifically, with budget implications attached, you’re still in the aspiration phase. The enterprises that succeed with ROC adoption define the operating model before they select the platform.
- Can your operations and security teams tell you right now what level of risk the business is willing to accept?
Risk appetite is usually a Board-level statement: “We have moderate risk tolerance.”
But can your on-call engineer translate that into an operational decision at 3 AM? If risk appetite lives only in a governance document and not in your alerting rules, escalation paths, and incident response playbooks, it’s not operational. It’s decorative.
- Do your security, IT operations, compliance, and legal teams operate as one resilience function or as four separate departments that occasionally overlap on a bridge call?
If your security team reports to the CISO, your operations team reports to the CTO, your compliance team reports to Legal, and they meet quarterly in a governance review that produces a slide deck but no operational changes, your resilience is governed by org chart, not by strategy.
A ROC requires a cross-functional team with shared objectives, shared data, and shared accountability. Not shared meetings. Shared outcomes.
- If a regulator walked in tomorrow and asked you to demonstrate alignment with NIST CSF, ISO 27001, or DORA, could you do it without a multi-week scramble?
If demonstrating compliance requires your team to pull evidence from four different systems, reconcile conflicting data, and package it into a presentation over two weeks, you’re not aligned.
You’re retroactively constructing the appearance of alignment. Continuous compliance, where the evidence is always current because the systems are always monitoring, is what separates enterprises that survive audits from enterprises that dread them.
People & Processes
The most expensive tool in your stack is useless if the person receiving the alert doesn’t know what to do with it, who to call, or how to act without waiting for someone else’s permission.
- Do you have named individuals in operations, security, and compliance who are personally accountable for driving the resilience agenda within their teams?
Every successful ROC implementation has internal champions who owned the narrative before the platform was deployed. They’re senior leaders within each function who believe in the unified model and actively push their teams toward it. If you don’t have these people identified and empowered today, your ROC adoption will stall at the first sign of organizational resistance. And there will be resistance, because silos are comfortable.
- Does your incident response plan cover the scenario where a security event causes an operational outage that triggers a compliance violation, all simultaneously?
Most enterprises have an incident response plan. It usually covers security incidents. It might cover operational outages. It almost never covers the cross-domain scenario where all three collide, which, in a cloud-native, API-driven world, is exactly the scenario that causes the most damage. If your IR plan doesn’t have a playbook for “the security breach that took down the payment service and violated PCI-DSS at the same time,” it’s incomplete precisely where it matters most.
- Are your teams trained to work within AI-driven, unified workflows, or are they still following tool-specific runbooks written before your architecture changed?
If your incident response still starts with “open Datadog, then open Splunk, then open ServiceNow, then call the on-call engineer,” you’re running a manual process on top of tools that were supposed to automate it. AI-driven ROC workflows mean the platform does the correlation, enrichment, and recommendation before a human touches it. If your team hasn’t been trained to work in that model, they’ll default to what they know, and what they know is the old, slow way.
- When a P1 fires, does the right information reach the right people automatically, or does someone have to manually figure out who to call?
If your escalation path depends on institutional memory (“for payment issues you call someone, but if it’s the EU region it’s another team”), you don’t have a communication protocol. You have a human routing table that breaks every time someone changes roles or goes on leave. A ROC-ready communication model means escalation is automated based on incident type, severity, and business impact, not on who happens to know the right phone number.
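To make that concrete, here is a minimal sketch of attribute-based escalation routing. Everything in it, the incident fields, the rule table, the team names, is invented for illustration; a real ROC platform would hold this logic as versioned, reviewed configuration rather than code someone has to remember.
```python
# Minimal sketch: escalation routing driven by incident attributes,
# not institutional memory. All fields, rules, and teams are illustrative.
from dataclasses import dataclass


@dataclass
class Incident:
    service: str   # e.g. "payments", "checkout"
    region: str    # e.g. "eu-west-1"
    severity: str  # "P1" through "P4"


# Ordered rules: first match wins. In a real platform this table lives
# in configuration, versioned and reviewed like code.
ROUTING_RULES = [
    (lambda i: i.service == "payments" and i.region.startswith("eu"), "eu-payments-oncall"),
    (lambda i: i.service == "payments", "payments-oncall"),
    (lambda i: i.severity == "P1", "major-incident-bridge"),
]


def route(incident: Incident) -> str:
    for matches, team in ROUTING_RULES:
        if matches(incident):
            return team
    return "default-oncall"  # explicit fallback, never a dead end


print(route(Incident(service="payments", region="eu-west-1", severity="P1")))
# -> eu-payments-oncall
```
The point isn’t the ten lines of Python; it’s that the routing logic survives role changes and annual leave because it lives in the system, not in someone’s head.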
Technology & Infrastructure
Unified resilience runs on unified data. If every team investigates from its own console, your technology stack is reinforcing the silos instead of dissolving them.
- If your infrastructure team and your security team investigated the same incident right now, would they be looking at the same data?
If your operational telemetry lives in Datadog and your security events live in Splunk and your compliance data lives in a GRC tool and your incidents live in ServiceNow, you have four sources of truth. Which means you have zero. Every minute your team spends switching between platforms and cross-referencing timestamps on a bridge call is a minute the ROC model eliminates because all the data is already in one place, already correlated, already contextualized.
- Can you see what’s exposed to the internet right now, not what was exposed when you last ran a scan?
“We run vulnerability scans weekly” is not real-time visibility. It’s a snapshot of a past state. In a world where new containers spin up hourly and cloud configurations drift between intended and actual, a weekly scan creates a visibility gap measured in days. A ROC requires continuous, real-time awareness of your external attack surface.
- When your monitoring tools fire 500 alerts, can your platform tell you which 3 actually matter, or does a human have to triage all 500?
If your answer is “we have alert rules and thresholds,” that’s filtering, not correlation. Correlation means the platform understands that 487 of those alerts are symptoms of 3 root causes, consolidates them into 3 incidents with full context and business impact, and presents your engineer with a prioritized list and recommended actions. If your team is still manually triaging alert queues, your tooling is creating work instead of reducing it.
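The difference is easy to see in miniature. The sketch below groups raw alerts into incidents by a shared root-cause key; the alert shape and the grouping key are simplifying assumptions, since real correlation engines use topology, time windows, and learned patterns, but the output contract is the same: a handful of incidents instead of a wall of alerts.
```python
# Minimal sketch: correlation versus filtering. Thresholds drop alerts;
# correlation groups them into incidents around a shared root cause.
# The alert shape and grouping key are simplifying assumptions.
from collections import defaultdict

alerts = [
    {"id": 1, "resource": "db-primary", "signal": "cpu_high"},
    {"id": 2, "resource": "db-primary", "signal": "slow_queries"},
    {"id": 3, "resource": "api-gateway", "signal": "5xx_rate"},
    # ...imagine 500 of these firing inside the same five-minute window
]


def root_cause_key(alert: dict) -> str:
    # Naive key: the failing resource. Real engines walk the dependency
    # topology so downstream symptoms roll up to the upstream cause.
    return alert["resource"]


incidents = defaultdict(list)
for alert in alerts:
    incidents[root_cause_key(alert)].append(alert)

for cause, grouped in incidents.items():
    print(f"incident: {cause} ({len(grouped)} alerts consolidated)")
```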
- Are your security controls actually generating the data your detection systems need, or are some of them deployed but effectively silent?
Agents go stale. Configurations drift. Telemetry pipelines break silently. A firewall that’s deployed but not generating logs doesn’t exist as far as your detection systems are concerned. If your ROC can’t confirm continuously, automatically, that every control is active, current, and feeding data into the correlation engine, you have blind spots disguised as coverage. Blind spots don’t show up on dashboards.
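One way to surface those blind spots is to treat “control deployed” and “control feeding data” as two separate facts and alarm on the gap between them. A minimal sketch, with illustrative control names and an assumed one-hour heartbeat expectation:
```python
# Minimal sketch: "deployed" and "feeding data" are separate facts.
# Control names and the one-hour heartbeat expectation are assumptions.
from datetime import datetime, timedelta, timezone

MAX_SILENCE = timedelta(hours=1)

# Last event received from each control, as your pipeline would record it.
last_event_at = {
    "edge-firewall": datetime.now(timezone.utc) - timedelta(minutes=5),
    "edr-agents": datetime.now(timezone.utc) - timedelta(days=3),  # silently stale
}


def silent_controls(seen: dict, now: datetime | None = None) -> list[str]:
    """Controls that are deployed but have stopped producing telemetry."""
    now = now or datetime.now(timezone.utc)
    return [name for name, last in seen.items() if now - last > MAX_SILENCE]


print(silent_controls(last_event_at))  # -> ['edr-agents']
```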
- When your CISO presents risk to the Board, do they speak in severity scores or in dollars?
“We have 47 critical vulnerabilities” tells the Board nothing actionable.
“We have $3.2 million in quantified risk exposure concentrated in our payment infrastructure, with a 12% probability of materialization in the next 90 days” tells them exactly what to prioritize.
If your risk communication is built on Critical/High/Medium/Low labels, your Board is making resource decisions on insufficient data. Risk quantification translates security and operational risk into the language that gets budgets approved and investments funded.
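The underlying arithmetic is straightforward expected-value math: exposure is the probability of materialization times the estimated impact. A minimal sketch, with figures that mirror the example above and are purely illustrative:
```python
# Minimal sketch: exposure = probability of materialization x impact.
# Figures mirror the example above and are purely illustrative.
risks = [
    {"name": "payment infrastructure", "impact_usd": 3_200_000, "p_90d": 0.12},
    {"name": "legacy VPN appliance", "impact_usd": 400_000, "p_90d": 0.30},
]

# Rank by expected loss, the number a Board can actually act on.
for r in sorted(risks, key=lambda r: r["impact_usd"] * r["p_90d"], reverse=True):
    expected = r["impact_usd"] * r["p_90d"]
    print(f"{r['name']}: ${expected:,.0f} expected loss over 90 days")
```
Real risk quantification models are far richer than a single multiplication, but even this simple version forces the two inputs a severity label hides: how likely, and how much.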
Metrics & Testing
If you can’t measure your resilience, you can’t improve it. And if you haven’t tested it under pressure, you don’t have it; you have a theory.
- Do you track Mean Time to Detect and Mean Time to Resolve formally, with baselines, trends, and improvement targets?
If you don’t measure MTTD and MTTR, you can’t prove anything you’ve invested in has made a difference. These two metrics are the heartbeat of any ROC. They tell you whether detection is getting faster, resolution is getting more efficient, and the gap between “incident fires” and “incident resolved” is shrinking. If you’re tracking them with gut feel instead of formal baselines, you’re flying blind on the metrics that matter most.
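Both metrics fall out of three timestamps per incident: when the fault started, when it was detected, and when it was resolved. A minimal sketch with illustrative data; note that MTTR is measured here from detection to resolution, which is one common convention among several, and whichever definition you choose, the point is to fix it and track it consistently.
```python
# Minimal sketch: MTTD and MTTR from three timestamps per incident.
# Timestamps are illustrative; MTTR here runs from detection to resolution.
from datetime import datetime

incidents = [
    {"started": datetime(2025, 1, 3, 2, 0),
     "detected": datetime(2025, 1, 3, 2, 18),
     "resolved": datetime(2025, 1, 3, 4, 5)},
    {"started": datetime(2025, 1, 9, 14, 0),
     "detected": datetime(2025, 1, 9, 14, 4),
     "resolved": datetime(2025, 1, 9, 14, 50)},
]


def mean_minutes(deltas) -> float:
    deltas = list(deltas)
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60


mttd = mean_minutes(i["detected"] - i["started"] for i in incidents)
mttr = mean_minutes(i["resolved"] - i["detected"] for i in incidents)
print(f"MTTD: {mttd:.0f} min | MTTR: {mttr:.0f} min")
# Compare against a formal baseline; the trend matters more than the snapshot.
```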
- Have you tested your resilience under realistic conditions in the last 12 months, not just discussed it in a tabletop exercise?
Has your team simulated a cross-domain incident, a security breach triggering an operational outage, under realistic conditions? Have you measured actual recovery time against your stated Recovery Time Objective? If your last real test was over 12 months ago, your resilience confidence is based on documentation, not evidence. Documentation doesn’t hold up at 2 AM.
- Do your security and operational controls actually stop things, or do they just alert and log?
You’ve invested heavily in controls. Have you tested whether they actually prevent, detect, and respond to the scenarios they were designed for? A control that generates an alert but doesn’t trigger the right workflow creates visibility without action. A ROC requires validated controls tested against real scenarios and confirmed to do what you think they do.
Vendor & Third-Party Management
Your resilience perimeter doesn’t end at your network boundary. It extends to every vendor, SaaS provider, and API integration your business depends on.
- If your most critical vendor went down right now, do you know exactly which of your business services would be affected?
Most enterprises have a vendor inventory. Fewer have criticality mapping. Even fewer have one that’s current. If your vendor risk assessment is a spreadsheet that procurement maintains and your operations team has never seen, you’re managing third-party risk on paper while your actual dependencies live in production. A ROC needs to understand vendor blast radius the same way it understands internal blast radius in real time, mapped to business services.
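In its simplest form, blast radius is just a queryable dependency map from business services to the vendors they depend on. A minimal sketch with invented service and vendor names:
```python
# Minimal sketch: vendor blast radius as a live, queryable dependency
# map instead of a procurement spreadsheet. All names are invented.
VENDOR_DEPENDENCIES = {
    "payments-api": ["acme-payments", "cloudco"],
    "checkout-web": ["cloudco", "cdn-corp"],
    "fraud-scoring": ["acme-payments"],
}


def blast_radius(vendor: str) -> list[str]:
    """Business services affected if this vendor goes down right now."""
    return [svc for svc, vendors in VENDOR_DEPENDENCIES.items() if vendor in vendors]


print(blast_radius("acme-payments"))  # -> ['payments-api', 'fraud-scoring']
```
The hard part isn’t the lookup; it’s keeping the map current, which is why this belongs in a continuously updated platform rather than a document.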
- Do your vendor contracts define shared incident response responsibilities: who detects, who notifies, who remediates, and within what timeframe?
SLAs are not incident response plans. “99.9% uptime” doesn’t tell you what happens when the 0.1% hits. Does your vendor notify you when they detect an issue, or do you find out when your customers complain? If your third-party risk management is contractual but not operational, your ROC has a blind perimeter.
- Do you have real-time visibility into your critical vendors’ security posture, or do you rely on their annual self-assessment questionnaire?
If your vendor risk assessment is an annual exercise where vendors fill out a form, your team reviews it three months later, and the results sit in a GRC tool until next year, you’re managing vendor risk based on a snapshot that’s already outdated by the time you read it. Continuous vendor monitoring is what separates enterprises that manage vendor risk from enterprises that discover it during an outage.
What Your Score Tells You
35–40: Ready to Formalize. Strong foundations across all five pillars. You’re ready to consolidate into a unified ROC. Next move: platform selection and operational design.
20–34: Foundations Exist, Gaps Are Costing You. Real investments, but the seams between operations, security, and compliance are where incidents get expensive. This is the highest-ROI zone for ROC adoption: enough maturity to move fast, enough gaps for immediate value.
Below 20: Start With Governance. Gaps are structural. Before selecting technology, secure executive commitment, build your cross-functional team, and define risk appetite in operational terms. Technology accelerates strategy; it doesn’t replace it.
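For reference, the scoring model is small enough to express directly. A minimal sketch of the bands, using the thresholds from this checklist:
```python
# Minimal sketch: the checklist's scoring model. 20 answers,
# Yes = 2, Partially = 1, No = 0, maximum score 40.
SCORES = {"yes": 2, "partially": 1, "no": 0}


def readiness_band(answers: list[str]) -> str:
    total = sum(SCORES[a.lower()] for a in answers)
    if total >= 35:
        return f"{total}/40: Ready to Formalize"
    if total >= 20:
        return f"{total}/40: Foundations Exist, Gaps Are Costing You"
    return f"{total}/40: Start With Governance"


print(readiness_band(["yes"] * 12 + ["partially"] * 5 + ["no"] * 3))
# -> 29/40: Foundations Exist, Gaps Are Costing You
```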
What to Do With This Score
- Use this as the agenda for your next leadership review. The pillar scores tell you exactly where your operating model needs investment. Don’t start with tools. Start with the lowest-scoring pillar.
- Focus on Technology, Metrics, and Vendor Management. If your combined score across those three pillars is below 15, the case for unifying security operations into a broader resilience model is already made by the numbers.
- Every “No” and “Partially” is a line item in your next budget proposal — grounded in industry-standard frameworks, not vendor marketing. This checklist gives you the evidence base your CFO needs to approve the investment.
- Walk your leadership team through it in 45 minutes. Let them score themselves. The conversation that follows will be the most productive resilience discussion your organization has had because the gaps are self-identified, not sold to them.
The Bigger Picture
This checklist is structured across five pillars (Governance & Strategy, People & Processes, Technology & Infrastructure, Metrics & Testing, and Vendor & Third-Party Management) because resilience is not a technology problem with a technology answer. It’s an operating model problem that requires alignment across governance, people, tools, measurement, and partners.
The enterprises that score highest aren’t the ones with the most tools. They’re the ones where leadership treats resilience as a unified discipline, where teams are trained to work across domains, where data is centralized and correlated, where metrics are tracked formally, and where vendor risk is managed operationally, not just contractually.
If your score revealed gaps, those gaps are your roadmap. Each question that scored “No” or “Partially” points directly to a specific capability that a Resilience Operating Centre is designed to deliver. The checklist doesn’t just assess where you are. It shows you exactly where to go next.
And if you want to take the next step, evaluating vendors who can close these gaps, pair this readiness assessment with a vendor evaluation checklist that maps solution capabilities to your specific weaknesses. Your readiness score tells you what you need. Your vendor score tells you who can deliver it. Together, they form the business case that gets funded.
iStreet is an AI-powered Resilience Operating Centre that unifies AIOps, SecOps, and Compliance into a single platform delivering unified incident correlation, AI-driven root cause analysis, resolution intelligence, capacity forecasting, automated security triage, and continuous compliance through one console. If this checklist revealed gaps, iStreet ROC was built to close them.