Full backup doesn't shrink


Octopuss
New Member
Group: Forum Members
Posts: 24, Visits: 65
I have a backup definition with one Full and about six Incrementals, scheduled to run on every computer start, with the Synthetic Full option checked.
At some point roughly 100GB of extra data got backed up. That data has since been deleted, and several Incrementals in which those files no longer exist have been consolidated into the Full, yet the Full stays the same size.
What gives? When I mount the backup the files aren't even there, yet the backup file is still twice as big.

jphughan
Macrium Evangelist
Group: Forum Members
Posts: 6K, Visits: 45K
That’s by design. Excess space in the Full is kept as scratch space. This is a fairly typical design that’s also used in many database engines, where the underlying database file never shrinks even if the data set does.

Consolidation is also deliberately non-destructive while it’s underway, so that the original state of the backup can be recovered if the process fails partway through. For that reason, the first time you consolidate into a Full, the file grows by roughly the size of the Incremental you consolidated: the existing data is never overwritten during the merge, so if the consolidation fails, all of your original data is still there. After the consolidation succeeds, any data blocks that are no longer needed are marked as scratch space, and they become eligible to be overwritten during the NEXT consolidation.
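To illustrate the general idea, here’s a toy sketch of that append-then-reuse pattern. It’s a simplified model of the technique, NOT Macrium’s actual file format or code, and every name and size in it is made up:

```python
# Toy model of non-destructive consolidation with scratch-space reuse.
# Simplified illustration only -- NOT Macrium's actual file format or code.

class SyntheticFull:
    def __init__(self, blocks):
        # 'blocks' maps a block id (think: disk offset) to its initial data.
        self.slots = list(blocks.values())   # contents of each slot in the file
        self.live = {b: i for i, b in enumerate(blocks)}  # block id -> slot
        self.scratch = []                    # slot indexes free for reuse

    def file_size(self):
        return len(self.slots)               # file size, measured in slots

    def consolidate(self, incremental):
        """Merge changed blocks without overwriting any live data in place."""
        superseded = []
        for block_id, data in incremental.items():
            if self.scratch:                 # reuse scratch space if available
                slot = self.scratch.pop()
                self.slots[slot] = data
            else:                            # otherwise the file must grow
                self.slots.append(data)
                slot = len(self.slots) - 1
            if block_id in self.live:
                superseded.append(self.live[block_id])  # old copy is now stale
            self.live[block_id] = slot
        # Only after the whole merge succeeds do stale slots become scratch;
        # note that the file itself never shrinks.
        self.scratch.extend(superseded)

full = SyntheticFull({"a": "a0", "b": "b0", "c": "c0"})
print(full.file_size())                      # 3
full.consolidate({"a": "a1", "b": "b1"})     # first merge: no scratch yet...
print(full.file_size())                      # 5 -- grew by the Incremental's size
full.consolidate({"a": "a2", "b": "b2"})     # later merges reuse scratch slots
print(full.file_size())                      # 5 -- stable, but it never shrinks
```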

Macrium has said that in theory they could offer a defrag/compacting utility that would optimize data within the Synthetic Full and allow shrinking it down, but they’ve also noted that this process would involve quite a bit of time and disk activity to perform.
Edited 11 February 2020 1:09 PM by jphughan
Octopuss
New Member
Group: Forum Members
Posts: 24, Visits: 65
I understand the reasoning, but with NAS storage space being relatively limited, this is potentially a problem: if I temporarily add, say, 500GB to the disk and delete it a few days later, the backup will pointlessly occupy 500GB worth of ghost data until I delete the backup entirely.

It seems like Synthetic Fulls are not the way to go for me, then. Is there a way to create a normal full/incremental chain where a new Full is created after every x Incrementals, WHILE still having the backup start when I boot into Windows?

jphughan
Macrium Evangelist
Group: Forum Members
Posts: 6K, Visits: 45K
If your scenario involves significant reductions in the amount of source data and your backup destination capacity is limited, then no strategy that involves consolidation would seem to be a good fit. But NOT consolidating of course retains the old backups that contain that now-deleted data too, until you purge the entire chain.

On that subject, if you’re creating new Fulls on a regular basis rather than running an Incrementals Forever strategy, that gives you the opportunity to purge old Fulls that may have become unnecessarily large as a result of consolidation. So maybe capturing Fulls more often and purging the older ones sooner would work for you? The drawback is that you can’t go back as far in time anymore. But if you’re already at the point where you want data you deleted on the source to be purged from your backups, that might not be an issue for you.
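As a rough back-of-envelope comparison of the two approaches (every number below is hypothetical, just to show the shape of the trade-off; plug in your own sizes):

```python
# Hypothetical numbers only: peak destination usage under two schemes.
full_gb         = 300     # size of a fresh Full (assumed)
ghost_gb        = 500     # since-deleted data held as scratch space (assumed)
incrementals_gb = 6 * 10  # six Incrementals at ~10 GB each (assumed)

# Incrementals Forever: one Full that has absorbed the ghost data via consolidation.
inc_forever = (full_gb + ghost_gb) + incrementals_gb

# Rotating chains: two full chains kept, oldest purged once a new Full completes.
two_chains = 2 * (full_gb + incrementals_gb)

print(f"incrementals forever: ~{inc_forever} GB")  # ~860 GB
print(f"two rotating chains:  ~{two_chains} GB")   # ~720 GB
```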
Octopuss
New Member
Group: Forum Members
Posts: 24, Visits: 65
I guess I'll just manually trigger a full when needed.
