The four Ms of data loss — and how to recover with confidence
If you’re responsible for your organization’s data — whether in IT, security, compliance, or ops — there’s a good chance you’ve already dealt with some form of data loss. If not, odds are you will. The key is understanding how those losses happen and how to recover.
In my experience, most incidents fall into one of four categories — I call them the four Ms: malicious attacks, mistakes by admins, mishaps at your cloud provider, and migrations gone bad.
Let’s take a look at each. Along the way, I’ll share real examples, the patterns I’ve seen over and over again, and what you can do to make sure you’re ready when (not if) something goes wrong.
Malicious attacks: You will be targeted
Let’s start with the one everyone knows: cyberattacks. You’ve seen the headlines — ransomware, data wipes, stolen credentials, you name it. But what doesn’t always make the news is how modern attacks are increasingly hybrid and increasingly indiscriminate.
We’re no longer dealing with isolated ransomware gangs. Nation-state actors like MERCURY (now Mango Sandstorm) and DEV-1084 (now Storm-1084) have proven they can compromise on-prem environments, escalate privileges, and then pivot into cloud systems like Azure — where they delete Azure-based backups and try to erase the recovery path itself. That’s right: They don’t just go after your data, they go after your recovery plan.
These aren’t theoretical. Microsoft’s Threat Intelligence blog and others have published chilling case studies on how hybrid attackers operate — and how hard they are to stop once inside.
Attacks also don’t need to be that sophisticated to be devastating. Many start with a user doing something they shouldn’t — clicking a phishing link or exposing credentials. It’s not intentional, but it’s all an attacker needs to get in, elevate access, and start deleting data.
Other times, it’s far more advanced. A state-sponsored actor might compromise your local AD, escalate access, and pivot into Azure using a synced identity. From there, they target and delete your backups. Not only is your operational data gone — your safety net is too.
You might read this and think “well, no nation-state would target us,” but the sad fact is that attack technology always trickles downward. What takes a sophisticated team of experts today can be done by a ransomware gang next week and by a run-of-the-mill ankle-biter next month. As the ransomware market expands and criminals compete with one another, they’re becoming far less discriminating about who they attack, which increases the odds that you’ll get hit. An untargeted attack can do just as much damage to your business as one specifically aimed at you.
Recovery tip: You can’t assume your backup is safe just because your data is in the cloud. Backups are often the first thing attackers go after, precisely because they’re your way back. That means you need copies in a location your attacker can’t reach. Backups need to be immutable, isolated, and independent of your production systems. It’s not enough to say you have a backup. You need to know you can get it back when it counts.
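To make “know you can get it back” concrete, here’s a minimal sketch of the kind of check that catches tampering or corruption in an isolated copy: compare the files in an offline backup against the checksums recorded when it was written. The paths and manifest format are hypothetical; the point is that verification runs against a copy your production credentials can’t touch.

```python
"""Minimal sketch: verify that an isolated backup copy still matches the
checksums recorded when it was written. Paths and the manifest format are
hypothetical -- adapt them to wherever your offline/immutable copies live."""

import hashlib
import json
from pathlib import Path

BACKUP_ROOT = Path("/mnt/offline-backups/2025-06-01")   # isolated copy, not production
MANIFEST = BACKUP_ROOT / "manifest.json"                # {"relative/path": "sha256hex", ...}

def sha256_of(path: Path) -> str:
    """Stream the file so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup() -> bool:
    """Compare every file listed in the manifest against its recorded hash."""
    expected = json.loads(MANIFEST.read_text())
    ok = True
    for rel_path, recorded_hash in expected.items():
        candidate = BACKUP_ROOT / rel_path
        if not candidate.exists():
            print(f"MISSING: {rel_path}")
            ok = False
        elif sha256_of(candidate) != recorded_hash:
            print(f"MODIFIED: {rel_path} (immutability violated or corrupted)")
            ok = False
    return ok

if __name__ == "__main__":
    print("Backup verified" if verify_backup() else "Backup FAILED verification")
```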
Mistakes by admins: The most common cause of data loss
We all make mistakes. I’ve been in this field long enough to say that with confidence — even great admins on a good day can misconfigure something. The problem is, with today’s systems, small changes can have major ripple effects.
Retention policies are a great example. Someone misconfigures a retention policy and sets it to 9 days instead of 90. Or a PowerShell script gets deployed with the wrong scope and clears out a folder structure. These are honest mistakes, but they carry real consequences.
Then there are more complex cases — like the major U.S. bank that trusted its SaaS provider’s default retention policy. There was a bug in the logic. The result? Federally mandated records were deleted. By the time anyone realized, the recovery window had passed, and the bank’s risk committee had to be notified.
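Both of those incidents share a shape: a retention number ended up lower than anyone intended, and nobody noticed until the recovery window had closed. One guardrail is to validate retention changes against a documented floor before they’re applied. Here’s a minimal sketch, assuming changes pass through your own tooling first; the policy names and minimum values are made up.

```python
"""Sketch of a pre-deployment guardrail for retention changes. The policy
names, minimum values, and the proposed-change format are illustrative --
the point is to catch a '9 instead of 90' before it reaches production."""

# Minimum retention (in days) your org or regulators require per workload -- hypothetical values.
MINIMUM_RETENTION_DAYS = {
    "mailboxes": 90,
    "financial-records": 2555,   # roughly 7 years
    "audit-logs": 365,
}

def validate_retention_change(policy: str, proposed_days: int) -> None:
    """Refuse any change that drops retention below the documented floor."""
    floor = MINIMUM_RETENTION_DAYS.get(policy)
    if floor is None:
        raise ValueError(f"Unknown policy '{policy}': add it to the floor table before deploying.")
    if proposed_days < floor:
        raise ValueError(
            f"Refusing to set '{policy}' retention to {proposed_days} days; "
            f"the documented minimum is {floor}."
        )

if __name__ == "__main__":
    validate_retention_change("mailboxes", 90)       # fine
    try:
        validate_retention_change("mailboxes", 9)    # the classic typo: 9 instead of 90
    except ValueError as err:
        print(f"Blocked: {err}")
```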
No matter how well-trained your admins are, in a world where every IT team is under crushing pressure to do more, faster, with less, mistakes are guaranteed to happen.
Recovery tip: Your backup strategy has to account for people. The good ones, the tired ones, the well-meaning ones who just made a bad change — and the ones who might mean harm. That means external, versioned backups you can access independently — even if someone on your own team made a critical change or deleted something maliciously. And more than that, it’s about building trust and a strong security culture. People need to feel comfortable admitting when something went wrong, before it escalates.
Mishaps at your cloud provider: The shared responsibility reality
Even the biggest cloud providers have bad days. In September 2024, Microsoft lost weeks of security logs for some customers due to a bug in their internal monitoring agents. Earlier that year, Google Cloud deleted critical data belonging to one of Australia’s largest pension funds due to a misconfiguration of Google Cloud VMware Engine (GCVE). The customer had no way to get the data back through Google, but fortunately they had their own third-party backup in place.
And these mishaps aren’t as rare as you’d like to think. The failure modes may be complex, but they’re not impossible. If your DR plan assumes your cloud vendor won’t mess up — or that they’ll be able to fix any problem they cause — you’re gambling.
Recovery tip: Shared responsibility means your vendor protects the infrastructure — not your data. They essentially promise not to lose all of your data at the same time, not to help you recover it when it goes missing. If something gets deleted, overwritten, or lost due to their error (or yours), it’s your responsibility to recover it. That’s why independent backup, stored off-platform and regularly tested, is so important.
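“Regularly tested” is the part most teams skip. A useful habit is a small restore test that pulls something out of the off-platform copy and proves it’s usable, not just present. A minimal sketch, with the archive path, sampled file, and row-count floor all as placeholders:

```python
"""Sketch of a periodic restore test: pull one item out of the independent,
off-platform backup and prove it is actually usable. The archive path, the
sampled file, and the expected row count are all placeholders."""

import csv
import tarfile
import tempfile
from pathlib import Path

BACKUP_ARCHIVE = Path("/mnt/offsite/backups/crm-export-latest.tar.gz")  # hypothetical off-platform copy
SAMPLE_MEMBER = "crm/customers.csv"                                     # hypothetical file inside the archive
EXPECTED_MIN_ROWS = 10_000                                              # sanity floor, not an exact count

def restore_and_check() -> None:
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(BACKUP_ARCHIVE, "r:gz") as archive:
            # Restore into a scratch area, never into production.
            archive.extract(SAMPLE_MEMBER, path=scratch)
        restored = Path(scratch) / SAMPLE_MEMBER
        with restored.open(newline="") as fh:
            rows = sum(1 for _ in csv.reader(fh)) - 1   # minus the header row
        assert rows >= EXPECTED_MIN_ROWS, f"Restore test failed: only {rows} rows recovered"
        print(f"Restore test passed: {rows} rows recovered from {SAMPLE_MEMBER}")

if __name__ == "__main__":
    restore_and_check()
```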
Migrations gone bad: Underestimated and high-impact
Migrations should be straightforward — but they rarely are. They’re a little like home renovations: they always take longer than expected, cost more than planned, and break something along the way.
In larger transitions, like moving from one cloud provider to another, things can go completely sideways. A large EU retailer migrated to Google Cloud and experienced serious sync and data integrity issues. They didn’t have a rollback plan. Their recovery hadn’t been tested. They were stuck.
We like to think of migrations as upgrades. But they’re also risk windows — times when data is in transit, systems are shifting, and safeguards are at their weakest.
Recovery tip: Treat migrations like disaster scenarios. You need complete, point-in-time backups of everything critical before you cut over. And you need to test recovery as part of the migration plan. If you don’t, you might find yourself restoring yesterday’s lunch menu while your billing system stays offline.
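One way to make “test recovery as part of the migration plan” concrete is an integrity check that fingerprints the source before cutover and diffs it against the migrated copy afterwards. A rough sketch, with both root paths as placeholders:

```python
"""Sketch of a pre/post-migration integrity check: record per-file checksums
on the source before cutover, then compare against the migrated copy. Both
root paths are placeholders for wherever your source snapshot and target live."""

import hashlib
from pathlib import Path

def fingerprint_tree(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256, so two trees can be diffed."""
    result = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            result[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return result

def compare(source_root: Path, target_root: Path) -> bool:
    source, target = fingerprint_tree(source_root), fingerprint_tree(target_root)
    missing = source.keys() - target.keys()
    changed = {p for p in source.keys() & target.keys() if source[p] != target[p]}
    for p in sorted(missing):
        print(f"MISSING after migration: {p}")
    for p in sorted(changed):
        print(f"CONTENT MISMATCH: {p}")
    return not missing and not changed

if __name__ == "__main__":
    ok = compare(Path("/snapshots/pre-cutover"), Path("/mnt/new-platform/data"))
    print("Migration integrity check passed" if ok
          else "Migration integrity check FAILED: do not decommission the source")
```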
Final thoughts: Plan like it’s going to happen, because it will
There’s a recurring theme in every scenario I’ve laid out: testing. Not theory. Not a spreadsheet. Actual, practiced, verifiable recovery testing.
It’s not enough to say you have a disaster recovery plan. You need to prove it works — to yourself, to your team, and maybe even to regulators. That’s where real resilience comes from. Not from wishful thinking, but from preparation.
You can’t predict every attack. You can’t prevent every mistake. And you can’t control what your cloud vendor does. But you can control how you prepare, and how quickly you bounce back.
So test your plan. Test it again. And if it fails, fix it now — not during an actual incident.
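If it helps, here’s a tiny sketch of the “prove it” part: wrap each drill so the outcome, duration, and timestamp land in an append-only log you can show your team or an auditor later. The log path and record fields are placeholders; the drills themselves are whatever checks you already run.

```python
"""Sketch: run a recovery drill and keep an auditable record of the outcome.
The drill itself is whatever checks you already run (see the earlier sketches);
the log path and record fields here are placeholders."""

import json
import time
from datetime import datetime, timezone
from pathlib import Path

DRILL_LOG = Path("recovery-drill-log.jsonl")   # append-only evidence trail, hypothetical location

def run_drill(name: str, drill_fn) -> bool:
    """Execute one drill, time it, and append a timestamped result record."""
    started = time.monotonic()
    try:
        passed = bool(drill_fn())
        error = None
    except Exception as exc:               # a crashed drill is a failed drill
        passed, error = False, str(exc)
    record = {
        "drill": name,
        "passed": passed,
        "error": error,
        "duration_seconds": round(time.monotonic() - started, 1),
        "ran_at": datetime.now(timezone.utc).isoformat(),
    }
    with DRILL_LOG.open("a") as log:
        log.write(json.dumps(record) + "\n")
    return passed

if __name__ == "__main__":
    # Trivial stand-in for a real restore test, just to show the logging flow.
    run_drill("sample-restore", lambda: True)
```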