Table of contents
- Backup as a ritual, not a strategy
- Backups encrypted by ransomware: when the copy becomes an accomplice
- Restores that take days instead of hours
- Misconfigured immutable backups: security in name only
- The blind spot: no one tests restores under realistic conditions
- The false sense of security as an organizational risk
- Restore testing: from exceptional event to operational routine
- The real value: finding problems before the attack
- Backups are not insurance, they’re a capability
In the world of cybersecurity, there is a belief as widespread as it is dangerous: “We have backups, so we’re safe.”
It’s a sentence that reassures managers, IT teams, and boards. A sentence that shuts down discussions, postpones investments, and silences doubts.
Yet in practice, it is one of the most expensive illusions an organization can nurture.
This article comes from a very specific editorial angle: the false sense of security.
Because a backup, by itself, saves no one. What really saves you is a tested, measured, repeatable restore. Everything else is theory.
Backup as a ritual, not a strategy
In many organizations, backup is treated as an administrative obligation rather than a critical resilience process.
A solution is installed, automatic jobs are scheduled, green dashboards are checked, and the topic is filed away as “covered.”
The problem is that backups are verified only on write, almost never on read.
- The job completed successfully? ✔
- The files are present? ✔
- Storage reports no errors? ✔
But no one really asks:
“If we had to restore everything tomorrow, how long would it actually take?
And would it really work?”
Backup becomes a comforting ritual, not a survival strategy.
Backups encrypted by ransomware: when the copy becomes an accomplice
One of the most common (and most ignored) scenarios in modern ransomware incidents is this:
the backup exists, but it is encrypted along with production data.
How does this happen?
- Online backups permanently connected
- Backup credentials identical to or accessible from the domain
- No real separation between production and protection
- No control over who can delete or encrypt backups
Modern ransomware doesn’t stop at servers.
It actively hunts for:
- backup repositories
- snapshots
- secondary volumes
- management consoles
And when it finds them, it treats them like everything else.
The result is devastating:
the organization discovers it has perfectly useless backups, encrypted right alongside the primary infrastructure.
At that moment, the phrase “we have backups” becomes a mockery.
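The failure modes listed in this section can be turned into a concrete self-audit. Below is a minimal sketch that checks a backup inventory against the classic 3-2-1 rule and flags copies reachable with domain credentials; the inventory structure and field names are illustrative, not taken from any specific backup product.

```python
# Minimal 3-2-1 self-audit sketch. The inventory format and field
# names are illustrative assumptions; adapt them to your own tooling.

def audit_backup_copies(copies):
    """Return a list of findings for a list of backup-copy dicts."""
    findings = []
    if len(copies) < 3:
        findings.append("Fewer than 3 copies of the data exist.")
    if len({c["medium"] for c in copies}) < 2:
        findings.append("All copies live on the same kind of medium.")
    if not any(c["offsite"] for c in copies):
        findings.append("No copy is stored off-site.")
    if not any(c["offline_or_immutable"] for c in copies):
        findings.append("Every copy is online and mutable: ransomware "
                        "that reaches production can reach them all.")
    if any(c["uses_domain_credentials"] for c in copies):
        findings.append("At least one copy is reachable with domain "
                        "credentials, the exact path ransomware hunts.")
    return findings

# Illustrative inventory: two disk copies, both tied to the domain.
inventory = [
    {"medium": "disk", "offsite": False,
     "offline_or_immutable": False, "uses_domain_credentials": True},
    {"medium": "disk", "offsite": True,
     "offline_or_immutable": False, "uses_domain_credentials": True},
]

for finding in audit_backup_copies(inventory):
    print("FINDING:", finding)
```

An inventory like this one, common in practice, fails on four of the five checks, which is exactly the situation in which “we have backups” becomes a mockery.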
Restores that take days instead of hours
Another major myth is that restores are “automatically fast.”
In reality, restore speed is almost always an unpleasant surprise.
During a real incident, issues emerge that no document ever anticipated:
- serial restores instead of parallel ones
- network bottlenecks
- storage too slow for the real data volume
- undocumented application dependencies
- services that require a precise startup order
What was supposed to take 2 hours on paper ends up taking 48 or 72 hours in reality.
And this is where the real damage begins:
- ERP systems down
- production halted
- billing suspended
- customers lost
- reputation damaged
The backup exists, but it does not meet the declared RTOs (Recovery Time Objectives).
And an unmet RTO is, in effect, a failed continuity plan.
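A first reality check needs no lab at all: plain arithmetic on data volume and sustained throughput often falsifies the paper RTO on its own. A sketch, with purely illustrative numbers:

```python
# Back-of-envelope restore-time check against a declared RTO.
# All figures here are illustrative assumptions, not measurements.

def restore_hours(data_tb, throughput_mb_s, parallel_streams=1):
    """Hours to restore data_tb terabytes at a sustained per-stream
    throughput of throughput_mb_s megabytes per second."""
    total_mb = data_tb * 1_000_000          # 1 TB = 1,000,000 MB (decimal)
    seconds = total_mb / (throughput_mb_s * parallel_streams)
    return seconds / 3600

declared_rto_hours = 2       # what the continuity plan promises
estimate = restore_hours(data_tb=40, throughput_mb_s=150)

print(f"Estimated restore time: {estimate:.1f} h "
      f"(declared RTO: {declared_rto_hours} h)")
if estimate > declared_rto_hours:
    print("The declared RTO cannot be met with a single serial stream.")
```

Forty terabytes over a single 150 MB/s stream works out to roughly 74 hours, the same order of magnitude as the real-world figures above, and the gap only closes if parallel streams, network capacity, and storage speed are all verified rather than assumed.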
Misconfigured immutable backups: security in name only
In recent years, immutable backups have become a kind of magic word.
WORM (write once, read many) storage, object lock, retention policies: everything suddenly sounds “ransomware-proof.”
But immutability is not a switch. It’s a delicate configuration.
The most common mistakes:
- retention periods that are too short
- deletion rights granted to administrative accounts
- synchronous replication that propagates destruction
- restore tests never performed on immutable data
- exposed or poorly segregated management consoles
The paradoxical result:
backups declared immutable that disappear or cannot be restored when they are actually needed.
Because no one ever verified what happens from the restore perspective, not the theoretical protection one.
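The first mistake on that list, retention that is too short, can be checked with simple date arithmetic. Attackers often sit inside a network for weeks before encrypting anything; if the immutable retention window is shorter than that dwell time, every retained copy may already be compromised. A sketch with illustrative numbers:

```python
# Does the immutable retention window actually cover realistic
# attacker dwell time? All figures are illustrative assumptions.
from datetime import date, timedelta

def oldest_clean_backup(today, retention_days, dwell_days):
    """Return the date of the oldest immutable backup still retained,
    and whether any retained backup predates the intrusion."""
    oldest_retained = today - timedelta(days=retention_days)
    intrusion_date = today - timedelta(days=dwell_days)
    return oldest_retained, oldest_retained <= intrusion_date

today = date(2024, 6, 1)
oldest, covered = oldest_clean_backup(today, retention_days=14, dwell_days=45)
print(f"Oldest retained backup: {oldest}, predates intrusion: {covered}")
# With 14-day retention and 45-day dwell time, every retained backup
# was taken while the attacker was already inside the network.
```

Only a retention window longer than the assumed dwell time, here 60 days or more, guarantees at least one copy from before the intrusion, and even that copy is worthless until someone restores from it and checks.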
The blind spot: no one tests restores under realistic conditions
The real problem isn’t technical. It’s cultural.
Restores are:
- slow
- inconvenient
- disruptive to environments
- coordination-heavy
- likely to expose errors
So they get avoided.
When they do happen, they’re often:
- limited to single files
- run in unrealistic test environments
- not timed properly
- disconnected from complex applications
But a restore that doesn’t simulate a real incident is worthless.
A proper restore test should answer uncomfortable questions:
- How long does it really take?
- Who needs to be involved?
- Which systems depend on others?
- What doesn’t restart automatically?
- Which data ends up corrupted or incomplete?
As long as these questions remain unanswered, the backup is just a promise.
The false sense of security as an organizational risk
The most serious damage caused by untested backups isn’t technical.
It’s a decision-making problem.
An organization convinced it is protected:
- invests less in prevention
- underestimates ransomware risk
- doesn’t train its teams
- doesn’t prepare communication plans
- doesn’t rehearse crisis scenarios
When the incident hits, the shock is doubled:
- the attack itself
- the realization that the plan doesn’t work
That’s when panic, improvisation, and bad decisions take over.
Restore testing: from exceptional event to operational routine
Testing restores should not be an extraordinary event.
It should become a structured operational routine.
This doesn’t mean restoring everything every week, but:
- periodic tests on realistic datasets
- simulations of total data loss
- objective measurement of recovery times
- up-to-date documentation
- involvement of IT, security, and the business
Restore testing is not a useless cost.
It is the only way to turn backups into real resilience.
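A routine only exists if it is tracked. One lightweight way to enforce it is a test log that flags every system whose last successful restore test is older than a policy threshold; the log structure and the quarterly threshold below are illustrative assumptions.

```python
# Flag systems whose last successful restore test is stale.
# The log structure and the 90-day policy are illustrative.
from datetime import date, timedelta

MAX_AGE = timedelta(days=90)      # policy: test every system quarterly

test_log = {
    "erp":        date(2024, 5, 20),
    "fileserver": date(2023, 11, 2),
    "mail":       None,           # never tested
}

def stale_systems(log, today):
    """Systems never tested, or not tested within MAX_AGE."""
    return sorted(name for name, last in log.items()
                  if last is None or today - last > MAX_AGE)

print(stale_systems(test_log, today=date(2024, 6, 1)))
# → ['fileserver', 'mail']
```

Wiring a check like this into a scheduled job turns “we should test restores” from an intention into an alert that names the systems and owners falling behind.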
The real value: finding problems before the attack
Every failed restore test is actually good news.
Because it exposes a problem before a criminal does.
It’s far better to discover today that:
- the backup is slow
- the procedure is incomplete
- dependencies are unclear
than to discover it at 3 a.m., with systems encrypted and the CEO on the phone.
Backups are not insurance, they’re a capability
A backup is not automatic insurance.
It is an operational capability that must be trained like any other.
Until restores are tested:
- backups are an illusion
- security is theoretical
- resilience is only declared
The right question is not:
“Do we have backups?”
But:
“Are we really able to recover, when everything goes wrong?”
If you’ve never tested your restore, the answer is probably no.