
Most businesses that invest in endpoint security do so with the right intentions. They deploy the tools, hand them over to their IT team or managed provider, and assume the job is largely done. What happens next is often where the value gets lost.
Endpoint security tools generate a significant amount of data. Without a clear framework for what to measure, that data either goes unreviewed or gets summarised in ways that don’t translate into meaningful action. The result is a security function that feels active — alerts are firing, dashboards are populated — but isn’t actually improving the organisation’s risk posture over time.
Monthly metrics change that. Tracking the right measures consistently gives you visibility into whether your endpoint security is working, where the gaps are, and whether things are genuinely getting better or just staying the same.
This article covers the metrics that matter most, what they tell you, and how to use them to drive real improvement in your endpoint security posture.
Why Endpoint Metrics Matter Beyond the Dashboard
Most endpoint detection and response tools include a dashboard of some kind — active threats, detected events, devices enrolled, patch status. These are useful at a glance, but they’re not the same as a structured monthly measurement programme.
A dashboard tells you what’s happening right now. Monthly metrics tell you whether things are improving, deteriorating, or staying flat over time. Trends matter more than snapshots in endpoint security — a single month with a high threat detection count could indicate an active campaign or simply a noisier detection environment. Looking at the same metric across six months tells you something more useful.
Monthly measurement also creates accountability. When metrics are reviewed regularly, gaps that might otherwise be quietly tolerated become visible and require a response. Our post on endpoint security that pays off explores how to think about the return on endpoint security investment, and consistent measurement is central to demonstrating that return.
The Metrics Worth Tracking Every Month
1. Endpoint Coverage Rate
This is the most fundamental metric of all: what percentage of the devices in your organisation have your endpoint security agent installed and active?
A coverage rate of 95% sounds reassuring. But if your organisation has 200 devices, that means ten devices — potentially including laptops belonging to senior staff or remote workers in other locations — are operating without protection. To an attacker who finds one of those unprotected devices, it doesn't matter how well the other 190 are protected; one gap is enough.
Track coverage rate by location as well as overall. Businesses with offices across multiple countries frequently find that their UK headquarters has strong coverage while remote or international offices lag behind. If you have European offices, your European IT services provider should be delivering consistent endpoint coverage across those locations — not applying best-effort coverage outside the UK.
Your target should be 100%. Anything below that needs an explanation and a remediation date.
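As a rough sketch of the arithmetic, coverage rate is straightforward to compute from an asset inventory export. The field names here ('location', 'agent_active') are illustrative assumptions, not any particular tool's schema:

```python
from collections import defaultdict

def coverage_rates(devices):
    """Return overall and per-location coverage percentages.

    `devices` is a list of dicts with hypothetical keys:
    'location' and 'agent_active' (True if the endpoint agent
    is installed and reporting in).
    """
    totals = defaultdict(int)
    protected = defaultdict(int)
    for d in devices:
        totals[d["location"]] += 1
        if d["agent_active"]:
            protected[d["location"]] += 1
    per_location = {
        loc: round(100 * protected[loc] / totals[loc], 1)
        for loc in totals
    }
    overall = round(100 * sum(protected.values()) / sum(totals.values()), 1)
    return overall, per_location

# Example: 200 devices, 10 unprotected, all in one remote office
devices = (
    [{"location": "London", "agent_active": True}] * 150
    + [{"location": "Berlin", "agent_active": True}] * 40
    + [{"location": "Berlin", "agent_active": False}] * 10
)
overall, by_loc = coverage_rates(devices)
# overall reads 95.0, but Berlin reads 80.0 — the gap the
# aggregate figure hides
```

The example makes the point from the section above concrete: the overall figure looks acceptable while one location is carrying all of the exposure.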
2. Patch Compliance Rate
Unpatched software is one of the most consistently exploited attack vectors. Your patch compliance rate measures the percentage of your endpoints that are running current, approved patch levels for their operating system and key applications.
Track this monthly and break it down by:
- Operating system patches vs application patches
- Device type (laptops, desktops, mobile devices)
- Location or office
A high overall compliance rate can mask significant problems in a subset of your estate. Devices that are frequently offline — laptops used by travelling staff, for example — often fall behind on patching because they’re not connected to your management platform when updates are pushed. These are precisely the devices most likely to be used in higher-risk environments.
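The breakdowns listed above amount to grouping the same compliance check by different dimensions. A minimal sketch, with field names ('device_type', 'location', 'patch_compliant') assumed rather than taken from any real platform's export:

```python
from collections import defaultdict

def compliance_breakdown(endpoints, dimension):
    """Percentage of patch-compliant endpoints grouped by a
    chosen dimension, e.g. 'device_type' or 'location'."""
    total = defaultdict(int)
    compliant = defaultdict(int)
    for e in endpoints:
        key = e[dimension]
        total[key] += 1
        if e["patch_compliant"]:
            compliant[key] += 1
    return {k: round(100 * compliant[k] / total[k], 1) for k in total}

endpoints = [
    {"device_type": "desktop", "location": "London", "patch_compliant": True},
    {"device_type": "laptop", "location": "London", "patch_compliant": False},
    {"device_type": "laptop", "location": "Paris", "patch_compliant": True},
    {"device_type": "laptop", "location": "Paris", "patch_compliant": False},
]
by_type = compliance_breakdown(endpoints, "device_type")
# Desktops show 100% compliant while laptops lag, the pattern
# typical of frequently offline devices
```

Running the same function with `"location"` as the dimension gives the per-office view without any extra code.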
3. Mean Time to Detect (MTTD)
Mean time to detect measures how long it takes, on average, from a threat first appearing in your environment to your security tools identifying it. This is a measure of your detection capability — how quickly your tools and processes spot malicious activity.
Tracking this monthly allows you to see whether your detection capability is improving over time, and whether changes to your tooling, configuration, or monitoring processes are having a measurable effect. Our article on EDR vs antivirus vs XDR is useful context here — understanding what your tools are actually designed to detect, and through what mechanism, shapes how you interpret your MTTD figures.
4. Mean Time to Respond (MTTR)
Mean time to respond measures how long it takes from detection to containment. Even a fast MTTD is less valuable if your response process is slow — during the gap between detection and containment, an active threat can continue to cause damage.
Track MTTR separately for automated responses (where your tooling acts without human intervention) and manual responses (where an engineer takes action after reviewing an alert). A large gap between the two suggests that your automated response configuration may be under-used or misconfigured, and that human response processes may benefit from clearer playbooks or additional resource.
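Both MTTD and MTTR reduce to averaging differences between timestamps, with MTTR split by response type as described above. A sketch under the assumption that your tooling can export 'first_seen', 'detected', and 'contained' timestamps plus a 'response' label (hypothetical field names):

```python
from datetime import datetime, timedelta
from statistics import mean

def mean_hours(deltas):
    """Average a list of timedeltas, expressed in hours."""
    return round(mean(d.total_seconds() for d in deltas) / 3600, 1)

def mttd_mttr(incidents):
    """Compute MTTD, and MTTR split by response type, in hours."""
    mttd = mean_hours([i["detected"] - i["first_seen"] for i in incidents])
    mttr = {}
    for kind in ("automated", "manual"):
        subset = [i for i in incidents if i["response"] == kind]
        if subset:
            mttr[kind] = mean_hours([i["contained"] - i["detected"] for i in subset])
    return mttd, mttr

t0 = datetime(2024, 3, 1, 9, 0)
incidents = [
    {"first_seen": t0, "detected": t0 + timedelta(hours=2),
     "contained": t0 + timedelta(hours=2, minutes=6), "response": "automated"},
    {"first_seen": t0, "detected": t0 + timedelta(hours=4),
     "contained": t0 + timedelta(hours=28), "response": "manual"},
]
mttd, mttr = mttd_mttr(incidents)
# A wide automated/manual MTTR gap flags under-used automation
# or missing response playbooks
```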
5. Alert Volume and Alert-to-Incident Ratio
Total alert volume is worth tracking month on month, but the more useful metric is the ratio of alerts that result in confirmed incidents versus those that turn out to be false positives.
A very high false positive rate has a real operational cost — it trains your team to treat alerts with less urgency, which creates exactly the kind of alert fatigue that attackers exploit. If your endpoint tools are generating hundreds of alerts a month but only a small fraction represent genuine threats, that’s a configuration and tuning problem worth addressing.
Conversely, a very low alert volume isn’t necessarily good news. It might reflect a well-tuned environment with few threats — or it might reflect tools that aren’t detecting what they should be. Understanding which it is requires comparison against external threat intelligence and regular tool reviews. Our article on the importance of secure IT defences against cyber criminals gives useful context on the current threat landscape that shapes what a reasonable alert volume looks like.
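The ratio itself is simple arithmetic; the value comes from tracking it month on month. A minimal sketch of the calculation, with the example numbers invented for illustration:

```python
def alert_quality(alerts_total, confirmed_incidents):
    """Return (confirmed-incident rate, false positive rate)
    as percentages of total monthly alert volume."""
    if alerts_total == 0:
        return 0.0, 0.0
    confirmed = round(100 * confirmed_incidents / alerts_total, 1)
    return confirmed, round(100 - confirmed, 1)

# Hypothetical month: 480 alerts, 12 confirmed incidents
confirmed_rate, fp_rate = alert_quality(480, 12)
# A 97.5% false positive rate is the tuning problem described
# above: the team learns to treat alerts with less urgency
```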
6. Devices with Active Threats or Unresolved Alerts
How many devices in your estate currently have an active threat or an unresolved alert that hasn’t been investigated or remediated?
This metric is a direct measure of how effectively your response process is clearing the queue. Unresolved alerts age quickly — a device with an unresolved detection from three weeks ago is a device that may have been compromised for three weeks. Track both the count and the age of unresolved items, and set internal SLAs for how long different severity levels can remain open.
For businesses with distributed teams, this metric is particularly worth breaking down by location. Remote staff and those in international offices are less visible to your IT team, and their devices are more likely to have issues that go unaddressed for longer. Working with a global IT support company that has visibility across your entire estate — not just your UK headquarters — means unresolved alerts in any location are caught and actioned rather than sitting unnoticed.
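Tracking the age of unresolved items against severity-based SLAs, as suggested above, can be sketched like this. The SLA values and field names ('severity', 'raised', 'device') are illustrative assumptions to adapt to your own policy and alert export:

```python
from datetime import datetime, timedelta

# Illustrative internal SLAs: maximum days an alert may stay open
SLA_DAYS = {"critical": 1, "high": 3, "medium": 7, "low": 14}

def sla_breaches(open_alerts, now):
    """Return (device, severity, age in days) for every open
    alert older than the SLA for its severity level."""
    breaches = []
    for a in open_alerts:
        age = now - a["raised"]
        if age > timedelta(days=SLA_DAYS[a["severity"]]):
            breaches.append((a["device"], a["severity"], age.days))
    return breaches

now = datetime(2024, 3, 31)
open_alerts = [
    {"device": "LON-042", "severity": "high", "raised": datetime(2024, 3, 29)},
    {"device": "NYC-007", "severity": "medium", "raised": datetime(2024, 3, 10)},
]
# NYC-007 has been open for 21 days against a 7-day SLA — the
# "device that may have been compromised for weeks" scenario
```

A monthly review would report both the count of breaches and their ages, broken down by location for distributed estates.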
7. Devices Outside Compliance Policy
Beyond patching, most endpoint management platforms allow you to define compliance policies — minimum OS versions, required security software, encryption status, screen lock settings, and so on. Track monthly how many devices are currently out of policy and why.
Non-compliant devices are one of the most common findings in security reviews of Microsoft 365 environments, and they’re directly relevant to your Zero Trust posture — a non-compliant device should ideally be blocked from accessing corporate resources until it meets your policy standards. If you’re using Intune for device management, your Microsoft 365 support provider should be reviewing compliance policy adherence as a standard part of their monthly reporting.
8. Phishing Click Rate from Simulations
If you’re running phishing simulations — and you should be — the click rate from those simulations is one of the most direct measures of your human-layer vulnerability. Track this monthly or quarterly (depending on simulation frequency) and look for trends over time.
A declining click rate across multiple simulation cycles suggests your awareness training is working. A flat or rising rate suggests it isn’t, and that a different approach to training content or delivery is needed. Our article on endpoint hardening steps that reduce real-world attacks covers how the human awareness layer connects to technical endpoint hardening, and why both need to move together.
Phishing simulation results also give you useful segmentation data — which departments, roles, or locations are most susceptible? This allows you to target training where it’s most needed rather than applying the same programme uniformly regardless of demonstrated risk. If you don’t currently have a structured simulation programme in place, working with an experienced anti-phishing testing provider can help you design and run one that generates genuinely useful data.
9. Credential Exposure Detections
If you’re running dark web monitoring services alongside your endpoint security — which you should be — track monthly how many credential exposure alerts have been raised, how quickly they were acted on, and whether the affected accounts showed any signs of unauthorised access before the credentials were reset.
This metric connects your endpoint security data to your broader identity and access risk picture. A month where three sets of credentials are flagged as exposed, all reset within 24 hours, and no unauthorised access detected is a very different situation from one where credentials sat unreset for two weeks before someone noticed. Tracking the former tells you your process is working. Tracking the latter tells you where it broke down.
10. Endpoint Security Coverage During Change Events
Finally, track whether endpoint security coverage is maintained during periods of change — new device deployments, platform migrations, office expansions, or staff onboarding spikes.
Change events are when gaps most commonly appear. A new device that isn’t enrolled in your management platform before it starts being used, or a migration that temporarily disrupts your monitoring during cutover, creates exactly the kind of window that attackers look for. If your platform migration services provider isn’t explicitly accounting for endpoint security continuity during migrations, that’s worth raising as a requirement rather than an assumption.
Turning Metrics Into a Monthly Review Process
Collecting these metrics is only useful if someone is reviewing them, interpreting what they mean, and driving action where things aren’t where they should be.
A monthly endpoint security review doesn’t need to be lengthy — 30 to 45 minutes with the right people and the right data is usually sufficient. The agenda should cover:
- Current status of each tracked metric against your target
- Trends over the past three months — is each metric moving in the right direction?
- Open items from last month’s review — have they been addressed?
- Any new issues or changes to the threat landscape that affect interpretation
The output should be a short list of actions with owners and dates — not a report that gets filed until the next review.
If you’re working with a managed IT provider, this review should be part of your account management cadence, not something you have to request separately. A provider operating as a genuine global IT support partner should be bringing this data to you proactively and helping you interpret it in the context of your business — not waiting for you to ask.
For businesses with multiple locations, make sure the review covers the whole estate. Our post on endpoint security for remote teams covers the specific visibility challenges that come with distributed workforces and how to make sure your metrics reflect what’s actually happening across all your locations, not just the ones that are easiest to see.
Our article on why EDR matters more than ever is also worth revisiting in this context — the case for modern endpoint detection tools is significantly strengthened when you have the measurement framework to demonstrate the value they’re delivering.
Frequently Asked Questions
How many metrics is too many to track monthly?
More isn’t always better. Eight to twelve well-chosen metrics, reviewed consistently, is more valuable than a 30-metric dashboard that nobody has time to interpret properly. Start with the highest-impact measures — coverage rate, patch compliance, MTTD, and MTTR — and add others as your measurement programme matures.
What’s a realistic target for patch compliance rate?
For operating system patches, a target of 95% or above within 14 days of release is a reasonable benchmark for most businesses. For critical security patches, the target should be higher and the window shorter — ideally within 72 hours of release. Application patch compliance targets will vary depending on the application and the level of exposure it represents.
Should I be tracking these metrics myself or should my IT provider be doing it?
Your IT provider should be tracking these metrics and reporting them to you as part of your managed service. Your role is to review the data, ask questions where the numbers don’t look right, and hold your provider accountable for driving improvement. If your provider isn’t providing regular, structured endpoint security reporting, that’s a gap in your service worth addressing.
What should I do if my coverage rate drops significantly in a single month?
Investigate immediately. A sudden drop in coverage typically means devices have come online without being enrolled, agents have been uninstalled or disabled on existing devices, or a provisioning process has broken down. Each of these has a different cause and a different fix — the common thread is that any unprotected device needs to be identified and remediated as quickly as possible.
How do phishing simulation results connect to endpoint security metrics?
Phishing simulations measure your human-layer vulnerability — the risk that a user will take an action that allows malware to reach an endpoint. If your click rate is high, it increases the probability that your endpoint security tools will need to catch something that a user has let through. Improving your phishing click rate and improving your endpoint detection capability are complementary goals, not alternative ones.
How do these metrics change when a business has offices in multiple countries?
The metrics themselves don’t change, but the reporting granularity needs to increase. Each location should be reported separately as well as in aggregate, so that regional gaps don’t get hidden in an overall figure that looks acceptable. Businesses with international offices often find that their weakest metrics are consistently in the same location — which usually points to a resourcing, process, or tooling issue in that specific office rather than a global problem.
Ready to Build a Stronger Endpoint Security Measurement Programme?
If you’re not currently tracking endpoint security metrics in a structured way — or if your provider isn’t delivering the reporting you need to have confidence in your security posture — it’s worth addressing that sooner rather than later.
Northern Star helps businesses across the UK and internationally build endpoint security programmes that are measurable, continuously improving, and genuinely aligned with business risk.
Get in touch with our team today — we’re happy to have a straightforward conversation about what you’re currently tracking, what you’re missing, and what better endpoint security reporting would look like for your business.