
If you’re using Microsoft 365, Google Workspace, Salesforce, or any other SaaS platform, you’re already “in the cloud”. The problem is that many businesses quietly assume this means their data is automatically safe and easily recoverable.
Then something goes wrong — a compromised admin account, a rushed employee deletion, a sync tool overwriting files, or a ransomware incident — and the real panic starts during recovery. Not because you didn’t have a backup, but because the backup you had didn’t restore what you expected.
That’s why cloud-to-cloud backup matters. It’s designed to copy data from one cloud platform to another secure location, so you’re not relying purely on in-platform retention, recycle bins, or “it should be fine” thinking. If you’re already investing in solid cloud operations via Cloud Services / Office 365, these are the recovery mistakes worth avoiding.
The stakes are real: the UK Government’s Cyber Security Breaches Survey 2025 found that 43% of businesses reported a breach or attack in the last 12 months, and that the average cost of the most disruptive breach was £1,600 (rising to £3,550 when £0-cost incidents are excluded).
1) Assuming Microsoft/Google “backs everything up”
SaaS providers protect their platforms, but that’s not the same as protecting your data against every scenario. Versioning, retention, and recycle bins help — until they don’t (especially with admin deletes, retention policy changes, or time passing).
Fix: Treat cloud-to-cloud backup as your independent safety net, separate from day-to-day productivity tooling. If you want to align this with a bigger resilience plan, tie it into a practical IT Consultancy Services review.
2) Only backing up mailboxes and ignoring the messy stuff
The data that tends to hurt most during recovery is often the “new” cloud sprawl: Teams chat history, SharePoint sites, OneDrive folders, permissions, channel files, Planner, and shared mailboxes.
Fix: Make sure your backup scope matches how your staff actually work — not how IT thinks they work. (Northern Star regularly supports these environments through Platform to Platform Migration projects, so it’s worth mapping your real-world usage properly.)
3) Misconfigured retention that quietly deletes your backup copies
This one is surprisingly common: you have backups, but retention is too short, or it’s configured inconsistently across workloads. So the backup exists… until it doesn’t.
Fix: Decide your recovery needs first, then configure retention around them:
- Operational recovery (accidental deletions): often days/weeks
- Security incidents (compromise/ransomware): often months
- Legal/regulatory needs: potentially years
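As a rough sketch of that “recovery needs first” approach (the tier names, workloads, and durations below are illustrative assumptions, not product settings), you can express the tiers as a simple policy map and sanity-check each workload against it:

```python
from datetime import timedelta

# Illustrative retention tiers -- assumed durations, tune to your own needs
RETENTION_TIERS = {
    "operational": timedelta(days=30),   # accidental deletions: days/weeks
    "security": timedelta(days=365),     # compromise/ransomware: months
    "legal": timedelta(days=365 * 7),    # regulatory holds: potentially years
}

# Hypothetical per-workload retention as currently configured in your tool
configured = {
    "mailboxes": timedelta(days=30),
    "sharepoint": timedelta(days=14),
    "onedrive": timedelta(days=365),
}

def check_retention(workload: str, required_tier: str) -> bool:
    """Return True if the configured retention meets the tier's minimum."""
    return configured[workload] >= RETENTION_TIERS[required_tier]

# SharePoint only keeps 14 days, but security incidents need months:
print(check_retention("sharepoint", "security"))  # False -> a gap to fix
```

The point isn’t the code, it’s the ordering: decide the tier each workload needs, then compare, rather than accepting whatever default retention the backup tool shipped with.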
If you’re building this into a broader continuity approach, the thinking overlaps heavily with Why Your Business Needs a Business Continuity Plan (BCP).
4) Not protecting backups from the same admin accounts that got compromised
If an attacker gets global admin, they can delete users, wipe data, and sometimes sabotage recovery settings. If your backup tool is managed with the same credentials and no extra controls, it can become part of the blast radius.
Fix: Follow ransomware-resilient principles: separate admin roles, strong MFA, restricted access, and backup immutability where possible. The NCSC specifically highlights designing backups to resist destructive actions and ensuring recoverability under attack conditions.
For a wider security posture (beyond backups), review Security Services and how account security is handled end-to-end.
5) Never testing restores (until it’s too late)
Backups fail in subtle ways: API throttling, permissions errors, partial backups, missing metadata, or jobs silently skipping workloads. You don’t want to discover that mid-incident.
Fix: Test restores regularly:
- 1 mailbox
- 1 SharePoint site
- 1 OneDrive
- 1 Teams channel (including files and permissions)
- A “deleted user” scenario
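To keep that test list honest, it helps to track when each scenario was last exercised and flag anything overdue. A minimal sketch, assuming a quarterly cadence and a hand-maintained log (the dates and 90-day threshold are illustrative):

```python
from datetime import date, timedelta

# Hypothetical log of when each restore scenario was last tested
last_tested = {
    "mailbox": date(2025, 1, 10),
    "sharepoint_site": date(2024, 6, 2),
    "onedrive": date(2025, 2, 1),
    "teams_channel": None,           # never tested
    "deleted_user": date(2024, 11, 20),
}

MAX_AGE = timedelta(days=90)  # assumed quarterly test cadence

def overdue_tests(today: date) -> list[str]:
    """List scenarios never tested, or tested longer ago than MAX_AGE."""
    return [
        scenario
        for scenario, tested in last_tested.items()
        if tested is None or (today - tested) > MAX_AGE
    ]

print(overdue_tests(date(2025, 3, 1)))
# -> ['sharepoint_site', 'teams_channel', 'deleted_user']
```

Even a spreadsheet works for this; the key is that “scheduled, not ad hoc” means something checks the dates, not just good intentions.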
If you want a real-world view of how attackers behave and what they target (including cloud access), pair this thinking with The Importance of Penetration Testing in Cybersecurity.
6) Restoring data “on top” of live data and overwriting good content
In the stress of recovery, it’s easy to restore into the original location and overwrite newer, valid work — especially when you’re recovering a large folder structure.
Fix: Use safe restore patterns:
- Restore to a separate location first (a “quarantine” site/folder)
- Validate content with the business owner
- Only then merge back into production
This is the same mindset Northern Star uses in change-heavy work: plan, stage, validate, then cut over — the basics behind smooth Services delivery.
7) Forgetting permissions and sharing links are part of “your data”
A restore that brings files back but breaks access rules, sharing links, or group memberships is a recovery that causes fresh disruption. People can’t work, and your helpdesk gets hammered.
Fix: Confirm your backup captures:
- Permissions
- Group membership mapping
- Sharing links (where supported)
- Site structures and metadata
If you’re not sure whether your current cloud setup is clean enough to restore reliably, it may be time for a broader review through Why Us? and the way Northern Star manages IT as an extension of your team.
8) Ignoring third-party SaaS data entirely
You might have Microsoft 365 backups nailed down, but what about HR platforms, CRM, finance tools, e-sign services, project tools, and ticketing systems? Many businesses only realise what’s missing when they try to rebuild after an incident.
Fix: List your critical SaaS platforms and rank them by:
- Revenue impact
- Legal/compliance impact
- Operational dependency
Then decide what gets backed up, exported, or mirrored.
9) Treating backups as “the ransomware plan”
Backups are vital, but they’re not the full answer. If the same phishing attack hits three users, your cloud data can be encrypted or mass-deleted fast — and recovery is slower if you don’t have detection and response in place.
Fix: Combine backup with modern security controls such as endpoint detection, credential monitoring, and user awareness. Northern Star covers these angles in:
- Endpoint Security That Pays Off
- How to spot a Phishing Email
- The Crucial Role of Dark Web Monitoring for Stolen Company Login Credentials
10) Underestimating recovery time (and the real £ cost of downtime)
Even a “small” data loss event can cost serious money once you factor in staff time, disruption, and lost trading time. UK breach data already shows costs stack up quickly, even before you get to reputational damage.
Fix: Define realistic targets:
- RPO (how much data you can afford to lose)
- RTO (how quickly you need it back)
Then check whether your backup approach can actually meet those targets during a busy, messy incident.
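The feasibility check is simple arithmetic. In this sketch all the figures are assumptions for illustration (your own backup schedule, data volume, and measured restore speed go here):

```python
# Assumed figures -- replace with your own measurements
backup_interval_hours = 24        # one backup run per day
rpo_hours = 4                     # business tolerates losing <= 4h of data

data_to_restore_gb = 500
restore_speed_gb_per_hour = 50    # measured from a real test restore
rto_hours = 8

# RPO: worst-case data loss is the gap between backup runs
meets_rpo = backup_interval_hours <= rpo_hours

# RTO: how long a full restore actually takes at measured speed
restore_time_hours = data_to_restore_gb / restore_speed_gb_per_hour
meets_rto = restore_time_hours <= rto_hours

print(f"RPO met: {meets_rpo}")    # False: daily backups can't deliver a 4h RPO
print(f"Estimated restore: {restore_time_hours:.0f}h, RTO met: {meets_rto}")
```

Note that the restore speed should come from a real test (mistake 5), not the vendor’s datasheet — throttling and permission rebuilds are exactly what make a “busy, messy incident” slower than the brochure.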
A quick recovery-safe checklist
If you want a simple standard to aim for, you should be able to answer “yes” to these:
- You can restore a single file, a folder, and an entire site/mailbox
- You can restore a deleted user’s data even after licensing changes
- Backups are protected from admin compromise and destructive actions
- Restore tests are scheduled (not ad hoc)
- Permissions and structures restore properly
- You know your RPO/RTO and they’re realistic
Next Steps
If you want confidence that your cloud-to-cloud backups will actually work when it matters — and that you won’t be caught out by one of these common recovery mistakes — speak with Northern Star via their Contact page, or start by reviewing their Cloud Services / Office 365 and Security Services options.