I know what you mean about "synthesizing" a new full backup every time an incremental runs taking quite a bit of time (1-2 hours in my case, since my disk rotation means there's at least a week between backups to a given disk), but the desire to consolidate more than one incremental backup at a time seems diametrically opposed to the concept of retention. For example, I have my retention rules set to keep 8 incremental backups, giving me weekly backups for up to 2 months on most of my disks. If Reflect had an additional option that said, for example, "Consolidate 3 backups at a time", one of two things would have to happen. Either the merge would drop me below 8 Incrementals, reducing how far back I can go for restores when using synthetic fulls, or Reflect would have to ignore the retention rule long enough to accumulate an excess of Incrementals so that "multi-consolidation" still left me with the number of backups I specified in my retention policy, thereby consuming additional disk space and creating confusion as to why retention rules seemingly aren't being enforced at all times. I don't think you can have it both ways.
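To make that trade-off concrete, here's a toy model (my own sketch, not Reflect's actual logic) of a chain with a retention target of 8, where exceeding retention triggers a merge of the oldest k incrementals into the synthetic full:

```python
# Hypothetical sketch, not Reflect internals: track how many incrementals
# exist in the chain week by week under "consolidate k at a time".
RETAIN = 8

def run_weeks(weeks, consolidate_k):
    """Each week adds one incremental; once the chain exceeds RETAIN,
    the oldest `consolidate_k` incrementals are merged into the synthetic
    Full in a single pass. Returns the chain length after each week."""
    chain = 0
    history = []
    for _ in range(weeks):
        chain += 1                  # new incremental backup
        if chain > RETAIN:
            chain -= consolidate_k  # fold k oldest into the Full
        history.append(chain)
    return history

print(run_weeks(12, 1))  # [1, 2, 3, 4, 5, 6, 7, 8, 8, 8, 8, 8]
print(run_weeks(12, 3))  # [1, 2, 3, 4, 5, 6, 7, 8, 6, 7, 8, 6]
```

With k=1 the chain holds steady at the retention target; with k=3 it repeatedly dips to 6, i.e. below the 8 backups the policy promised. The only way to avoid the dip is to let the chain overshoot retention first, which is the other horn of the dilemma.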
Even if you're suggesting a feature like "Keep a minimum of 8 backups, but allow up to 16 to accumulate, then consolidate 8 at a time to drop back down to 8", I don't think you'd save any time overall. You'd still be consolidating the same total number of backups, so all you'd be doing is saving time at Time A by deferring consolidation, in exchange for having to budget that much more time at Time B for a "consolidated consolidation".

Judging by your request, it sounds like you want your backups very current and performed very quickly, but DON'T care as much about keeping a long history or having a lot of granularity to choose precisely where in that history to restore from. In that case, one option you could consider is "Incrementals Forever without synthetic fulls", where Reflect only merges individual incrementals into each other and keeps the Full frozen in time. This results in much faster merging, but with a caveat: for data captured in the Full, deleting it from the source will never reclaim that storage on your backup target, since the data stays frozen in that Full. You can of course resolve that by manually taking a new Full every now and then.

Or if you want to get clever, you could automate this by creating two schedules for your definition file. The first would be a monthly Full, retaining however many you want; if you only want 1 Full but have enough free space to temporarily house 2, you may want to set Reflect to run its retention purge after the backup, so the old Full isn't purged before a new one succeeds. The second would be a daily "Incrementals Forever without synthetic fulls", retaining 30 Incrementals (or fewer if you don't need one for every day of the month).
That way, the Incrementals Forever schedule only ever merges older Incrementals into newer Incrementals, leaving the Full alone, which minimizes merge time. And when the Full job executes, it sets a new baseline, recovering storage from data deleted since the last Full, and (if configured) purges previous Fulls along with all of their child Incrementals.
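Here's an illustrative model of that merge behavior (again my own sketch, with made-up names, not Reflect's implementation): when retention is exceeded, the two oldest incrementals are folded together, and the Full at the head of the chain is never rewritten.

```python
# Toy model of "Incrementals Forever without synthetic fulls": merges only
# ever combine incrementals; the Full (chain[0]) is never touched.
RETAIN_INC = 5  # small retention number to keep the demo readable

def add_incremental(chain, name):
    """chain[0] is the Full; the rest are incrementals, oldest first."""
    chain.append(name)
    if len(chain) - 1 > RETAIN_INC:
        merged = chain[1] + "+" + chain[2]  # fold oldest inc into the next
        chain[1:3] = [merged]               # Full stays frozen in time
    return chain

chain = ["Full"]
for day in range(1, 8):
    add_incremental(chain, f"Inc{day}")
print(chain)  # ['Full', 'Inc1+Inc2+Inc3', 'Inc4', 'Inc5', 'Inc6', 'Inc7']
```

The merged element keeps growing, but each merge only involves two incremental files, which is why this mode is so much faster than rebuilding a synthetic Full on every run.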
Also, if you haven't already, enable Delta incremental indexing, which will cut down on incremental file sizes (especially if you use a frozen Full) and save time creating synthetic fulls if you stick with them.