Consolidation Failure - meaning of error message?


RayG
Advanced Member
Group: Forum Members
Posts: 241, Visits: 1.1K
I saw this in a log yesterday:
Incremental Backups   : 9 found
  Consolidation   : Merging 'Data #2017-07-28-15-40#-01-02.mrimg' to 'Data #2017-07-28-15-40#-02-03.mrimg'
  Failed      : Backup set contains split files and is not elligible for consolidation

Not quite sure what it actually means. The backups are not split into smaller segments at all.

The error is not a problem in itself, but what it means needs some explanation. I could not find it in the V7 documentation I have at this time.

Apart from the error message itself, the spelling of 'elligible' should be 'eligible', with only one 'l' at the beginning.


Regards
RayG
Windows10 X64 V1803 B17134.228 MR v7.1.3317

Edited 3 October 2017 5:51 PM by RayG
Nick
Macrium Representative
Group: Administrators
Posts: 1.5K, Visits: 8.4K
Hi RayG

This message indicates that at least one file is split, and consolidation cannot operate on a backup set containing split files.

The first incremental  'Data #2017-07-28-15-40#-01-02.mrimg' is split into 2 parts:

Data #2017-07-28-15-40#-01-01.mrimg
Data #2017-07-28-15-40#-01-02.mrimg

This is indicated by the last number before the '.mrimg' extension. Please check the log for that backup. 
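
For anyone who wants to spot this condition across a whole destination folder rather than reading each log, a minimal Python sketch is below. It assumes the naming pattern inferred from the filenames quoted in this thread (set prefix, backup index, then part number before the '.mrimg' extension); the folder path is purely illustrative and the pattern is not taken from any official Macrium specification.

import re
from collections import defaultdict
from pathlib import Path

# Assumed pattern (inferred from the filenames above, not documented):
#   <set prefix>-<backup index>-<part index>.mrimg
# e.g. "Data #2017-07-28-15-40#-01-02.mrimg" = part 02 of backup 01 in that set.
PART = re.compile(r"^(?P<backup>.+-\d{2})-(?P<part>\d{2})\.mrimg$")

def split_backups(folder):
    """Return backups in 'folder' that exist as more than one file part."""
    parts = defaultdict(list)
    for f in Path(folder).glob("*.mrimg"):
        m = PART.match(f.name)
        if m:
            parts[m.group("backup")].append(f.name)
    return {b: sorted(names) for b, names in parts.items() if len(names) > 1}

# Illustrative path; any backup listed here would block consolidation.
for backup, names in split_backups(r"W:\Macrium Backup\Data").items():
    print(f"{backup} is split into {len(names)} parts: {', '.join(names)}")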

The typo has been corrected. Thanks for pointing it out. 

Kind Regards

Nick - Macrium Support

RayG
Advanced Member
Group: Forum Members
Posts: 241, Visits: 1.1K
Nick,
It would seem that this was caused by running out of space. I would have expected the space monitoring to have deleted the oldest files and not split the backup into two parts. This causes these unexpected issues when it is possibly too late to do anything about it.

Is the monitoring a separate thread running in parallel?


Saving Partition - Data (D:)
Reading File System Bitmap
Saving Partition
     Free space low:2.00 GB required 513.8 MB available
     Backup Sets:4 sets found
     Delete File:W:\Macrium Backup\Data\Data #2017-04-24-18-49#-00-00.mrimg
     2.00 GB required 25.74 GB available
Verifying 'Data #2017-07-28-15-40#-00-00.mrimg'
Ok. Continuing...
     New File: 19 GB Data #2017-07-28-15-40#-00-00.mrimg
No space on destination.
Destination directory changed to 'W:\Macrium Backup\Data\'

Saving Index
     New File: 7 GB Data #2017-07-28-15-40#-00-01.mrimg
Verifying 'Data #2017-07-28-15-40#-00-01.mrimg'
Ok. Continuing...



Regards
RayG
Windows10 X64 V1803 B17134.228 MR v7.1.3317

Nick
Macrium Representative
Group: Administrators
Posts: 1.5K, Visits: 8.4K
 I would have expected the space monitoring to have deleted the oldest files and not split the backup into two parts.


If there was more than one backup set in the folder then the oldest set would have been purged automatically. The threshold is set by the backup definition setting.  

Consolidation and deletion of files within a set is controlled by the retention settings. This isn't automated by the available space. To automate this could lead to unexpected results. Deleting the 'oldest' could break the backup chain for example, or take out the entire set if the oldest was a full. Silently consolidating more files than specified in retention would also be undesirable. 
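
To make the distinction concrete, here is a toy model of the behaviour described above. It is an assumption-laden sketch, not Reflect's actual implementation: a 'set' is simply a full plus its children, sets are ordered oldest first, and sizes and the threshold are supplied by the caller.

# Toy model of the purge behaviour described above; NOT Macrium's code.
def low_space_purge(sets, set_sizes_gb, free_gb, threshold_gb):
    """Purge entire older sets until free space exceeds the threshold.

    The current (newest) set is never a candidate, so with only one set on
    the destination (e.g. Incrementals Forever with Synthetic Fulls) nothing
    can be purged and the backup simply runs out of space.
    """
    while free_gb < threshold_gb and len(sets) > 1:
        oldest = sets.pop(0)                    # remove the whole oldest set
        free_gb += set_sizes_gb.pop(oldest)     # reclaim its space
    return sets, free_gb

# Example using the approximate figures from the log in this thread:
# one ~25 GB older set, ~0.5 GB free, 2 GB threshold.
sets, free_gb = low_space_purge(
    ["Data #2017-04-24-18-49#", "Data #2017-07-28-15-40#"],
    {"Data #2017-04-24-18-49#": 25.2, "Data #2017-07-28-15-40#": 26.0},
    free_gb=0.5, threshold_gb=2.0)
print(sets, round(free_gb, 1))   # ['Data #2017-07-28-15-40#'] 25.7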

The reason why the file was split is unclear. I suspect that the amount of free space fluctuated at that time and enabled the backup to continue after splitting. We will investigate to see if we can reproduce it.

The correct action would have been to fail the backup if it couldn't complete, or to continue without splitting if space became available.


Kind Regards

Nick - Macrium Support

Edited 3 October 2017 7:12 PM by Nick
RayG
Advanced Member
Group: Forum Members
Posts: 241, Visits: 1.1K
@Nick

Apologies for not replying; I have been away for a few days. I thought the idea was that when space fell below the set limit, space was created by deleting the older image(s) and associated files, and the backup then continued as normal. I was not aware at that point that backup files would be split. If space becomes available, what is the reason for splitting or failing the backup? If a split or failure is the result, is it better to increase the threshold value and give the monitoring process more time to pick up on the shortage and do something about it? If that does not work then, given the failure I experienced later, is there much point in the facility in the first place? Have I misunderstood?

Edited: typos.


Regards
RayG
Windows10 X64 V1803 B17134.228 MR v7.1.3317

Edited 9 October 2017 5:03 PM by RayG
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3.4K, Visits: 25K
I'm curious about this too. If the low disk space purge threshold is reached in the middle of the backup job, I figured Reflect would purge older sets (assuming sets older than the current one are available) and then continue writing to the existing file. Does it instead have to close out the first file before performing the purge and then open a new file to complete the job? If that's not necessary, I don't see how splitting a file would make any difference from a disk space perspective, unless Reflect wrote the different files to different locations (if alternate locations were defined and available, and adequate capacity could not be freed up at the initial location), but that would break the concept of locations being independent and self-contained.

If splitting files when performing a mid-backup purge is unavoidable, then it might be nice to have a way to manually "repair" sets where this occurred and therefore rendered consolidation impossible, since creating a new set and writing off the "broken" one may not always be feasible. How difficult would it be to create a standalone "Join.exe" file that could combine split Reflect files, similar to the standalone "Consolidate.exe" that already exists?  I recognize that even this might not be a viable solution for everyone given that it would require free space at least equal to the total size of the split files to be available temporarily, which might not be feasible in a scenario where low disk space created the problem to begin with, but it would still be nice to have as an option.  Or would there be concerns about creating an "alternate version" of a backup within a set and also about auto-deleting the original files in order to mitigate that problem?
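
As a rough illustration of that free-space caveat: the join tool itself is hypothetical, and nothing below attempts to combine the parts (that would require knowledge of the .mrimg format), but one could at least check up front whether the temporary space would exist.

# Feasibility check only; the suggested "Join.exe" does not exist and a naive
# byte concatenation is unlikely to produce a valid .mrimg file.
import shutil
from pathlib import Path

def join_would_fit(part_paths, destination_folder):
    """Would joining these split parts fit in the destination's free space?"""
    needed = sum(Path(p).stat().st_size for p in part_paths)
    free = shutil.disk_usage(destination_folder).free
    return free >= needed, needed, free

# Paths are illustrative, based on the split incremental discussed above.
fits, needed, free = join_would_fit(
    [r"W:\Macrium Backup\Data\Data #2017-07-28-15-40#-01-01.mrimg",
     r"W:\Macrium Backup\Data\Data #2017-07-28-15-40#-01-02.mrimg"],
    r"W:\Macrium Backup\Data")
print(f"need {needed / 2**30:.1f} GiB, free {free / 2**30:.1f} GiB -> "
      f"{'enough' if fits else 'not enough'} temporary space")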

Edited 9 October 2017 5:18 PM by jphughan
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3.4K, Visits: 25K
@Ray, the disk space purge only pertains to older sets, and if it's triggered, it will always purge the older set(s) in their entirety rather than just the oldest backups within that set.  Nick previously described the low disk space threshold option to me as a "blunt instrument" for that reason, and explained that it's intended as a last resort in order to allow new backups to complete, but that the recommended practice for managing free space is to set a retention policy that can be accommodated by your destination's capacity since it operates with more granularity.  However, the current set (the most recent Full and all of its children) is never considered for purging under this option, for the reasons Nick already explained -- so when you say that your understanding of this option is that "older images / the oldest files" will be purged, it depends on whether or not they're in an older set.  For someone using the Incrementals Forever with Synthetic Full strategy, for example, there will only ever be one set, so this option would never be triggered. (UPDATE: That sentence assumes normal operation of a Synthetic Full strategy. In a post farther down, I provide examples of atypical cases where multiple sets might be created even with a Synthetic Full strategy, in which case the low disk space threshold option could potentially be triggered.)

And rereading this thread more carefully, it sounds like the split was unexpected behavior, not by design.  Nick even said that the correct action would have been something else -- so my post above is probably completely unnecessary.

Edited 18 January 2018 3:03 PM by jphughan
RayG
Advanced Member
Group: Forum Members
Posts: 241, Visits: 1.1K
@jphughan

There are at least four full backups, and the three most recent have incrementals and differentials associated with them, so deleting the oldest full backup (which by this time has no associated incrementals/differentials) will ALWAYS create enough space for what is happening, and if on the rarest of occasions that is not enough, then the next oldest certainly will. You can see that in the posts above: when the 2 GB limit was breached, the deletion created over 25 GB of space. So with the 19 GB file already created and a further 7 GB second (split) file, that would have left at least 18 GB free on the disk after the backup had finished.
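
For reference, the rough arithmetic behind that estimate, using the approximate figures from the log quoted earlier in the thread:

# Approximate figures read off the log above, not measured values.
available_after_purge_gb = 25.74   # reported after the old full was deleted
second_split_part_gb     = 7       # the -00-01 part written after the purge
print(available_after_purge_gb - second_split_part_gb)   # about 18.7 GB left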


Regards
RayG
Windows10 X64 V1803 B17134.228 MR v7.1.3317

knthrpl
New Member
Group: Forum Members
Posts: 1, Visits: 4
jphughan - 9 October 2017 5:24 PM
@Ray, the disk space purge only pertains to older sets, and if it's triggered, it will always purge the older set(s) in their entirety rather than just the oldest backups within that set.  Nick previously described the low disk space threshold option to me as a "blunt instrument" for that reason, and explained that it's intended as a last resort in order to allow new backups to complete, but that the recommended practice for managing free space is to set a retention policy that can be accommodated by your destination's capacity since it operates with more granularity.  However, the current set (the most recent Full and all of its children) is never considered for purging under this option, for the reasons Nick already explained -- so when you say that your understanding of this option is that "older images / the oldest files" will be purged, it depends on whether or not they're in an older set.  For someone using the Incrementals Forever with Synthetic Full strategy, for example, there will only ever be one set, so this option would never be triggered.

And rereading this thread more carefully, it sounds like the split was unexpected behavior, not by design.  Nick even said that the correct action would have been something else -- so my post above is probably completely unnecessary.

Being an old documentation writer, I would like to have seen in the doc that a Synthetic Full will NEVER trigger the low-space deletion function.  I would have chosen a different strategy for my first set.  When I created the set, I was given the choice of "Synthetic Full" with no warning, and the header of the set even continues to say that the low-space deletion function is active.  It's misleading and could be easily corrected.
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3.4K, Visits: 25K
knthrpl - 18 January 2018 6:09 AM
Being an old documentation writer, I would like to have seen in the doc that a Synthetic Full will NEVER trigger the low-space deletion function.  I would have chosen a different strategy for my first set.  When I created the set, I was given the choice of "Synthetic Full" with no warning, and the header of the set even continues to say that the low-space deletion function is active.  It's misleading and could be easily corrected.

I've added an update to my earlier post that you quoted, but technically, saying that enabling Synthetic Fulls would NEVER trigger the low-space deletion function wouldn't be accurate.  For example, suppose you had Synthetic Fulls enabled but at one point you manually decided to run a new Full for whatever reason -- scratch space bloat in the root Full from an unusually large consolidation, because "it's been a while, so you feel you should".  Or maybe you even configured a schedule to create a new Full at some interval despite also wanting to use Synthetic Fulls between them. In those cases, the low disk space purge would be available because thanks to your new Full, you would now have an older set.  Or if you run Windows 10 and your partition map gets changed during an upgrade to the newer release, as has occurred multiple times, Reflect would HAVE to create a new Full at the next backup, and if you had decided to address the "non-matching backups" issue that this upgrade behavior creates by changing your backup set matching setting to "All backups in the destination folder", then once again, the older set would be a candidate for deletion when disk space ran low.  Therefore, having the documentation say that low disk space deletions will "NEVER" happen with that strategy would itself be misleading, and arguably in a higher-risk fashion.  I would say that a user finding that Reflect had deleted older backups after having been told that this would NEVER happen is worse than a user finding that their most recent backup had failed because a disk space threshold they had assumed would work, didn't.  Also keep in mind that because this option can be triggered while creating these "atypical" Fulls, it could potentially wipe out ALL of your old backups before you even have a new one, so if the new backup subsequently fails, you're left with no backups.  If Reflect's documentation told me that my chosen backup strategy would never trigger this function and then I found it had created this situation, I would be displeased with Macrium to say the least.  Consequently, I would disagree that this should be added to the documentation.

And looking at it from the opposite perspective, Synthetic Fulls aren't the only strategy where disk space purges are never "supposed to" occur even under 100% normal circumstances.  Incremental Merge is another strategy that normally only creates one Full, and it would therefore exhibit the same behavior.  There are others here who have a more traditional GFS strategy but only perform annual Fulls, so they may well only have one set on their disk at any given time purely for capacity reasons, and therefore even in that "traditional" strategy, they wouldn't be able to use this option either (except during a job to create a new Full).

The option is worded "Purge older sets", which already implies that if you don't have an older set, regardless of the strategy you've been using, then there will be nothing for that option to purge.

Edited 18 January 2018 3:25 PM by jphughan