Does the Synthetic Full backup verify?


RDoc
New Member
Group: Forum Members
Posts: 4, Visits: 17
I've not been able to find an answer for this.
What I want to do is use the forever-incremental scheme to keep building synthetic full backup files. However, how do I know the newly created full file is valid? What I'd like to be able to specify is that both the synthetic full and the incremental be verified before the next merge. That way I'd be sure the new synthetic full was valid: the old full is good and the incremental is good.
jphughan
Macrium Evangelist
Group: Forum Members
Posts: 13K, Visits: 79K
When you create a new backup, only the new backup itself is verified, not all preceding backups in the set on which it depends.  When a consolidation occurs, either between two Incrementals or in your case between an Incremental and a Full, Macrium performs verification of the data blocks involved in consolidation as part of the process itself.  So in that sense, the alterations are verified automatically.  If Reflect encounters corruption during that process, the consolidation fails, and Macrium has designed the consolidation process to allow recovery from scenarios like this, as well as scenarios like disconnecting the disk that contains the backups in the middle of the consolidation.  But of course even that verification during a consolidation doesn't necessarily mean that all OTHER data blocks within the Full that were left unchanged are still readable and intact.  However, that risk would exist even in backup strategies that do not involve consolidation, since there's no option to verify the entire backup chain all the way back to the root Full every time you create a new backup.

The best risk mitigation strategy here is to back up to a rotation of disks.  Not only will that give you multiple backup sets, which are always worth having anyway, but it will keep them on separate physical devices.  A true Incrementals Forever strategy only ever involves a single set at a given destination, so I only recommend it when there will be a destination rotation of some kind involved, or at the very least a replication routine to some alternate location so that you have multiple copies of the same set.  But replication can be an issue with Synthetic Fulls because you have to replicate the updated Full every single time, which is why in replication scenarios I typically opt for an Incremental Merge strategy rather than Synthetic Fulls and just schedule new Fulls to be created on some sort of regular basis.  If you won't even be doing replication and instead plan to have a single copy of a single backup set, I would not consider that a sufficiently resilient backup solution.  No matter how frequently you verify your backups, you will never know for sure that they will STILL be in good shape whenever you might actually need to restore from them, and in a situation where you only have one copy of a single set, a problem with your single Full could render all of your backups useless, even if the problem is just with some of the disk sectors storing a portion of that file rather than with the file itself.

Edited 4 March 2021 5:11 PM by jphughan
RDoc
New Member
Group: Forum Members
Posts: 4, Visits: 17
jphughan - 4 March 2021 4:57 PM

Thanks for the answer; unfortunately, that's pretty much what I expected it would do.

Since I'm using this system for backups, not archiving, it seems to me that allowing a pre-consolidation verify would be valuable. If it failed, that would mean all the incrementals were unusable, since the full they were based on wasn't good. The solution is simple, though: just make another full from the original files, verify it after creation, and restart the incremental process as a new chain. The corrupt full and its chain of incrementals would be lost, but since it's just used for backup, you've still got the originals to start the new chain from.

The problem with not verifying the full before merging is that if it becomes corrupted by bit rot or whatever, you'll never know until you try to do a recovery. Assuming the consolidation happens every day or so, that would also mean the merged full backup gets scanned quite often. So losing access to a file would require losing both the original and the backup's integrity almost simultaneously. That seems low probability except for physical damage like a fire, which requires off-site storage to guard against.
jphughan
Macrium Evangelist
Group: Forum Members
Posts: 13K, Visits: 79K
I'm not sure why you're invested in the idea of a pre-consolidation verification.  If the consolidation fails due to a problem that would have been found by verification, then the backups involved in the consolidation are still recoverable.  You aren't left with a useless Full as a result of a failed consolidation -- or at least not one that's become any LESS useful because the consolidation was attempted.  I've personally seen multiple consolidations fail at client sites because the on-site employee responsible for swapping the backup disks disconnected one while a consolidation was still running, and Reflect was able to recover from every single one of those cases.  And if you have a problem with the backups, then you'll have that problem regardless of whether you discovered it as a result of a pre-consolidation verification or during the consolidation itself.

It would seem to me that if anything, a post-consolidation verification would be preferable.  That way you're checking the end state of the backups involved rather than checking prior to making a major change to the files, and if you're making backups on a daily basis, then the time gap of verification after the previous day's consolidation compared to just before the current day's consolidation doesn't seem significant.  But if you want something like this, it wouldn't be all that difficult to script.  You could use Reflect's built-in ability to generate a PowerShell script that will call your definition file, and then customize that so that after the backup completes, all backups in the set will be verified, or just the Synthetic Full if that's all you want to check -- though that will increase your risk unless you're willing to accept the possibility of not being able to use all backups in your Incremental chain.  I'd be happy to write a script snippet that achieves either of these things if you'd like.
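
To give you an idea of the shape such a script could take, here's a minimal sketch. The paths are placeholders, and you'd want to confirm the exact verify switch and exit codes against Macrium's command-line documentation before relying on it:

# Rough sketch only -- paths are placeholders, and the -v verify switch and
# its exit codes should be confirmed against Macrium's CLI documentation.
$reflect   = 'C:\Program Files\Macrium\Reflect\reflect.exe'
$defFile   = 'C:\Reflect\MyDefinition.xml'   # hypothetical definition file
$backupDir = 'E:\Backups'                    # hypothetical destination folder

# Run the backup from the definition file, as Reflect's generated scripts do.
& $reflect -e -w $defFile
if ($LASTEXITCODE -ne 0) { throw "Backup failed with exit code $LASTEXITCODE" }

# Verify every file in the destination (filter further if you only want the
# Synthetic Full, with the caveat about the Incremental chain noted above).
Get-ChildItem -Path $backupDir -Filter '*.mrimg' | ForEach-Object {
    & $reflect -v $_.FullName
    if ($LASTEXITCODE -ne 0) { Write-Warning "Verification FAILED: $($_.Name)" }
}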

Edited 4 March 2021 6:52 PM by jphughan
RDoc
New Member
Group: Forum Members
Posts: 4, Visits: 17
jphughan - 4 March 2021 6:45 PM

Assume there's been an error in the full backup before the consolidation. If the post-consolidation verification just uses the merged contents of the damaged full and the incremental for the checksum, the validation will pass (assuming the consolidation itself went OK), because that is now the state of the files, even though the contents are damaged. However, if the post-consolidation validation first computes a checksum of the entire full backup and compares that to the previously saved checksum, it will see that the full backup's contents are damaged.
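
In script form, the external checksum tracking I'm describing might look roughly like this (paths are hypothetical, and the hash has to be taken before the merge and re-saved right after it, since the merge legitimately rewrites the full):

# Hypothetical sketch of external checksum tracking for the Synthetic Full.
# The hash must be re-saved immediately after each merge and compared just
# before the next one, since the merge itself legitimately changes the file.
$full      = 'E:\Backups\MyBackup-00-00.mrimg'   # assumed Synthetic Full path
$hashStore = 'E:\Backups\full.sha256'

$current = (Get-FileHash -Path $full -Algorithm SHA256).Hash
if ((Test-Path $hashStore) -and ((Get-Content $hashStore) -ne $current)) {
    throw 'Full has changed outside of a consolidation -- possible corruption.'
}

# ... run the backup (which performs the merge) here ...

# Record the new post-merge state for the next run.
(Get-FileHash -Path $full -Algorithm SHA256).Hash | Set-Content $hashStore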

As it is, whatever assurance was gained from the verification is lost when a consolidation is done, so why not make that part of the operation when people select verify?

Thanks for the scripting offer, but I've done a lot of scripting for other things, so I'll set it up myself. It just seems like something that would be a useful built-in option.
Edited 4 March 2021 8:18 PM by RDoc
jphughan
Macrium Evangelist
Group: Forum Members
Posts: 13K, Visits: 79K
There isn't one checksum for the entire backup.  Verification and consolidation have been discussed in detail here, with significant input from Macrium developers.  There are individual checksums for blocks of either 4KB or 16KB, if memory serves.  So if you have a Full that contains damage in an area that was NOT altered during consolidation, that damaged section will not suddenly become treated as the correct new state of that data as a result of the consolidation operation.  That damaged area would still be identified as such on a verification of the entire Full.  And if the damaged block is encountered during consolidation, then it will be called out during that process, not ignored.

Edited 4 March 2021 8:37 PM by jphughan
RDoc
New Member
Group: Forum Members
Posts: 4, Visits: 17
jphughan - 4 March 2021 8:35 PM

Well, that would be fine too, if Macrium would do the verification after the consolidation. I've not found a way to do that other than scripting, though.
jphughan
Macrium Evangelist
Group: Forum Members
Posts: 13K, Visits: 79K
To my knowledge, Reflect doesn't currently offer a built-in mechanism to verify the entire backup chain after a backup.  The Auto-Verify option only verifies the new backup, and I don't know of any other option to perform any other type of verification as part of a backup job.

Beardy
Expert
Group: Forum Members
Posts: 640, Visits: 2.4K
Don't know if this will help or not, but there's a stand-alone utility to verify backups from the command line which is detailed here: https://knowledgebase.macrium.com/display/KNOW72/Verifying+image+and+backup+files+from+the+command+line

Would be pretty easy to script something using that, either pre- or post-backup, and perhaps trigger a new full if the exit status is 1.
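
Something along these lines, say (all names and paths here are placeholders; the real utility name, switches, and exit codes are in that KB article):

# Sketch only: the utility's actual name, switches, and exit codes are in
# the KB article above -- treat everything here as a placeholder.
$verifier = 'C:\Tools\MacriumVerify.exe'         # hypothetical verifier path
$reflect  = 'C:\Program Files\Macrium\Reflect\reflect.exe'
$defFile  = 'C:\Reflect\MyDefinition.xml'        # hypothetical definition file

# Check the most recent backup file in the set.
$latest = Get-ChildItem 'E:\Backups\*.mrimg' |
          Sort-Object LastWriteTime -Descending | Select-Object -First 1

& $verifier $latest.FullName
if ($LASTEXITCODE -eq 1) {
    # Verification failed: start a fresh chain with a brand-new Full.
    & $reflect -e -w -full $defFile
}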

Not sure I'd worry about it myself; in my experience, verification errors tend to turn up bad RAM or cabling issues more often than actually damaged or corrupt backups.
As such, I think auto-verify is probably sufficient by itself.

If I were setting something of the sort up, I think I agree with @jphughan that post-consolidation would be the time to check, triggering a new full immediately if the check failed; that would give you the shortest window of exposure with a backup that had failed to verify.