Estimate backup size before starting backup


csadam
New Member
Group: Forum Members
Posts: 3, Visits: 19
I had a very annoying experience today. I have scheduled daily incremental backups. I added a lot of data to my disks today, so the usual backup took hours.
And after 2 hours it was aborted with a "No space on destination." error. All the effort to create the backup was wasted.

So it would be good to add a feature that estimates the backup size and, before the backup starts, warns if the estimate is clearly larger than the free space on the target drive.
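The pre-flight check being requested could be sketched roughly like this. This is an illustrative sketch, not anything Reflect actually exposes; the estimate itself and the safety margin are assumptions, since (as discussed below) predicting compressed backup size is hard.

```python
import shutil

def check_destination_capacity(dest_path: str, estimated_backup_bytes: int,
                               safety_margin: float = 1.2) -> bool:
    """Return True if the destination likely has room for the backup.

    A safety margin is applied because compressed backup sizes are
    hard to predict exactly; 1.2 is an arbitrary illustrative value.
    """
    free_bytes = shutil.disk_usage(dest_path).free
    return free_bytes >= estimated_backup_bytes * safety_margin

# Example: warn before starting if a ~10 GB incremental may not fit
if not check_destination_capacity(".", 10 * 1024**3):
    print("Warning: destination may not have enough free space")
```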

dbminter
Macrium Hero
Group: Forum Members
Posts: 1.7K, Visits: 18K
While the estimate could be wildly off, especially given the compression used and what is being compressed, a rough ballpark of the expected target size AND a check of the available free space before a backup starts sounds like a solid suggestion to me.


+1

jphughan
Macrium Evangelist
Group: Forum Members
Posts: 7.9K, Visits: 55K
@dbminter Your idea of a "ballpark size" might be feasible for use cases involving CBT, but for non-CBT cases, Reflect won't even know upfront how much source data it will be backing up, never mind how well it will compress.  It will know which files contain changed blocks, but it won't know which blocks or even how many blocks have changed until it scans that file, which is a process that runs throughout the backup job.
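To illustrate why a non-CBT estimate is so crude: without block-level change tracking, about the best a tool can do upfront is sum the sizes of files whose timestamps changed since the last backup, which badly over-counts large files with small changes. A minimal sketch of that upper bound (my own illustration, not how Reflect works):

```python
import os
import time

def rough_incremental_upper_bound(root: str, since_epoch: float) -> int:
    """Sum sizes of files modified at or after `since_epoch`.

    Only an upper bound: without CBT, a 100 GB file with one changed
    block still counts as 100 GB here, and compression is ignored.
    """
    total = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable entries
            if st.st_mtime >= since_epoch:
                total += st.st_size
    return total
```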

But even if a ballpark estimate were possible in all cases, what would happen with scheduled backups occurring on unattended systems like servers?  If Reflect determined that the backup might end up being too large given the available capacity, then it can't just sit there doing nothing until somebody decides to check a system that might not normally receive frequent interactive logons.

If Reflect failed with that error, it means the user either disabled the option to purge older backup sets when capacity drops below a certain level -- which is designed to help with this scenario -- or else there was only a single backup set at the destination and therefore there was no older set to purge -- but unless you also have backups stored somewhere else, only having a single set is a risky strategy anyway.  But if the user disables the disk space threshold purge that's designed to prevent jobs from failing due to insufficient capacity, then I would argue that the burden is on the user to ensure that their retention policy is appropriate for their backup destination capacity.  If you were unable to create an Incremental, even an unusually large one, then you might not be leaving yourself sufficient reserve capacity. 

Edited 6 August 2020 10:58 PM by jphughan
csadam
New Member
Group: Forum Members
Posts: 3, Visits: 19
@jphughan Well, then maybe I'm doing something wrong. I have 3 internal drives in my machine: 256 GB + 1 TB + 2 TB, with about 800 GB free in total.
I've created an image backup containing all of my partitions, using an Incrementals Forever strategy.
My settings are:

The target drive is a 4 TB external disk. It currently contains a 2 TB initial Full backup and a bunch of Incrementals of 1-10 GB each, leaving 650 GB free.
It seems to me there is enough space, but it still can't finish the backup.
Do you have any suggestions for what to check, apart from buying a bigger backup drive? Anyway, what is a recommended target disk size if I want to store 60 days of history for 3 TB of data?
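As a rough back-of-envelope answer to the sizing question, using only the figures mentioned in this thread (a ~2 TB compressed Full and up to ~10 GB per daily Incremental, both assumptions):

```python
# Back-of-envelope capacity estimate; all figures are illustrative,
# taken from the numbers quoted in this thread.
full_tb = 2.0          # one Full backup, ~2 TB after compression
daily_inc_gb = 10      # worst-case daily Incremental size
days = 60

one_set_tb = full_tb + days * daily_inc_gb / 1000
two_sets_tb = 2 * one_set_tb   # room for a 2nd Full before purging the 1st

print(f"one set: ~{one_set_tb:.1f} TB, two sets: ~{two_sets_tb:.1f} TB")
```

With these assumptions a single 60-day set needs about 2.6 TB, but holding two complete sets (so the old one can be purged safely) needs over 5 TB, which is why a 4 TB drive gets tight.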
jphughan
Macrium Evangelist
Group: Forum Members
Posts: 7.9K, Visits: 55K
If you're using Incrementals Forever and you only have a single Full, then the free space threshold purge option can't be used.  If your Incrementals are typically only 1-10 GB in size, I can't imagine how you had a problem creating a new one if you have 650 GB free.  Was this new Incremental really going to be 65-650x larger than your average Incremental?  Can you post the log from the job that failed?  (Just make sure NOT to include your Reflect license key that is included at the bottom of each log.)

On a side note, are you either using a destination disk rotation or at least replicating these backups anywhere else?  If not, you have a pretty high-risk backup strategy.  First, there is the general risk of only having a single backup set (i.e. only one Full backup): if your Full ever becomes corrupted or partially unreadable due to an issue on your destination disk, then all of your backups are rendered useless.  That's why I typically only recommend Incrementals Forever with Synthetic Fulls to people who are using a disk rotation, so they have multiple backup sets in total even if each disk only has one set, or who are at least replicating backups somewhere.  Either of those strategies also protects you from losing all of your backups due to a failure of your disk, which is a nice additional benefit regardless of your backup strategy.

And then secondly, 60 Incrementals is a pretty long chain.  There's nothing technically wrong with that when everything is working well, but it means that restoring Incremental #60 requires that all 59 previous Incrementals and the parent Full all be intact.  The longer the dependency chain, the greater the risk.

If you don't plan on using a disk rotation or replication, then I would strongly recommend that you adjust your strategy so that you can store at least 2 Full backups, perhaps by creating a new Full each month.  Of course if your Fulls are 2TB, then a 4TB disk isn't enough to store those plus 60 Incrementals.  Another option you could consider would be something along the lines of monthly Fulls, weekly Diffs, and daily Incs.  That would give you two benefits.  First, your Incremental chains will become much shorter due to the weekly Diffs.  And second, you would have the option of "thinning out" your backups.  Right now since you're only using Fulls and Incs, if you want to retain 60 days' worth of history, you need to retain all 60 Incrementals.  But if you set your retention policy to 8 Diffs and maybe 14 Incrementals, you could have daily backups going back for 2 weeks, and weekly backups going back 2 months.  So you'd still be able to reach back farther in time, even if you won't have daily backups available that far back.  But do you actually need individual DAILY backups going back that far?  If not, then you wouldn't have to retain your daily Incrementals for as long.
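The chain-length benefit of adding weekly Diffs can be made concrete with a little arithmetic (the schedule figures are taken from the suggestion above; the chain counts follow from how Incrementals depend on every prior file in the chain):

```python
# Dependency-chain comparison for a restore of the newest backup.
# Incrementals Forever, 60 daily Incs: Full + up to 60 Incrementals.
chain_incs_forever = 1 + 60
# Monthly Full + weekly Diff + daily Inc: Full + Diff + at most 6 Incs.
chain_with_diffs = 1 + 1 + 6

print(chain_incs_forever, chain_with_diffs)
```

In the worst case the flat scheme needs 61 intact files to restore the newest point, versus 8 with weekly Diffs.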

Just some food for thought.  But again, even if it became clear what caused your error, I think it would be worth considering a larger drive in order to implement a safer backup strategy.

Edited 7 August 2020 1:20 AM by jphughan
csadam
New Member
Group: Forum Members
Posts: 3, Visits: 19
Thank you for the detailed answer.

I'm not really concerned about the long Incremental chains. If the Full backup already takes up half of the disk, there's a better-than-50% chance that any disk error lands in it, in which case all of my Incs are toast anyway.

My dream would be a solution where I connect an external disk to my PC and it automatically keeps a backup history of everything until the disk fills up, then deletes the oldest backups as needed. That's why I tried the Incrementals Forever solution: I thought it would keep a rolling history of the last 60 backups, or fewer if there is not enough space.

I don't really need daily "resolution" beyond the past 1-2 weeks, so the recommended "monthly Fulls, weekly Diffs, and daily Incs" may be OK. The drawback of this solution is that it needs a backup disk 2-4 times larger than all of my data, not to mention the disk rotation idea. Looks like I can't avoid shopping.

Maybe I should try something else for file-level backups of the data drives and only use image backups for the OS disk. Most of my big files never change, so making a Full backup of them monthly creates a lot of unnecessary duplication. If something never changes, it's technically enough to back it up once to have a forever history of it.
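The "back up unchanged content only once" idea is essentially content-addressed deduplication, which many file-level backup tools implement. A minimal sketch of the core check (hypothetical helper, not any particular product's API; real tools usually combine cheap size/mtime checks with hashing, since hashing everything is slow):

```python
import hashlib

def needs_backup(path: str, seen_hashes: set) -> bool:
    """Return True only if this file's content hasn't been stored yet.

    Content-addressed dedup sketch: identical content hashes to the
    same digest, so a file that never changes is backed up exactly once.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    digest = h.hexdigest()
    if digest in seen_hashes:
        return False  # identical content already stored
    seen_hashes.add(digest)
    return True
```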

About the disk-full error: it turned out that the last Incremental was created 4 months ago... I pulled the disk out and forgot to plug it back in since then. Possibly 4 months of changes amount to more than 650 GB.

jphughan
Macrium Evangelist
Group: Forum Members
Posts: 7.9K, Visits: 55K
Yeah, you are in a bit of a bind here. You’ve only got one Full and it sounds like you might actually WANT only one Full due to size considerations and the fact that much of your data doesn’t change. But that means you can’t use automatic purging. You could have Reflect purge some Incrementals upfront, but even then you wouldn’t know how many you’d need to purge to free up enough space. Reflect currently will not purge backups within the current set to free up space, although offering that might be tricky because some users might want Incrementals purged first to reduce “resolution”, while others might want backups purged from oldest to newest.

In terms of alternatives, Windows File History by default operates exactly as you described (back up until full, then purge as needed), and it’s actually not a bad solution as a file-level backup tool. It doesn’t offer much customization, but it’s dead simple to set up, and it gives you very high resolution for data modified relatively recently.