Macrium Support Forum

Cloning Multiple Dupes

https://forum.macrium.com/Topic42185.aspx

By tgwilkinson - 23 December 2020 1:34 AM

I'm struggling to set up Macrium for recurring, automated cloning to multiple backup copies and could use some help.

I have three drives for all of my work files: an original on which I do my work, a backup clone that stays attached to my computer at all times and is updated automatically at 2am, and a second backup clone that is stored off site. I swap out the two clones every month.

When swapping the two clones, I'm struggling to get the second backup clone to pick up where the last one left off when using the same filename, drive letter, and Macrium backup task.

What's the best way to set up the backup task and name the drive and/or assign a drive letter to get cloning two separate backups of the same main drive to work?
Can I use the same task, filename, and drive letter for both clones?
Or do I need separate tasks, but I'm ok using the same drive letter and drive name? (for instance - T: Backup - Photo 1 for both backup drives)
Or do I need separate tasks, drive letters, and drive names for the two clones? (for instance - T: Backup - Photo 1A, U: Backup - Photo 1B)

I know some of this has to do with how Windows perceives a drive as being the same as or different from another drive, and I'm still learning my way around Windows, so any insight you have on all of this would be hugely appreciated. Thanks! - Tom
By jphughan - 23 December 2020 3:58 PM

Ok first of all, you mention cloning but then also mention picking up using the same “file name”, and then talk about backups. So are you performing clones or image backups? Those are not interchangeable terms. It seems like you’re performing image backups, but I want to be sure.

If so, this isn’t difficult to set up. I have a client that performs image backups to a rotation of 9 destination disks that are swapped daily. The entire thing is managed from a single definition file. In my case I use a third-party utility to make sure that all destination disks in the rotation are always assigned the same drive letter regardless of which one is attached, but Reflect has another way to do this internally if you’d prefer, and if you’re only dealing with two destination disks that might be easier. But again first it has to be clear exactly what sort of job you’re trying to run here.

And if you ARE running clones rather than image backups, I would suggest that switching at least one of those disks over to storing image backups rather than storing a second clone copy of your source disk would be a superior strategy. The advantage of a clone disk is that if your primary disk fails, you can immediately install that clone disk and get back to work. But the advantage of image backups is that you can store multiple historical backups, giving you access to older versions of your data rather than just a single version as is the case with a clone — but you do need to restore that image backup somewhere rather than being able to operate directly from the “image disk” as you can with a clone disk.
By tgwilkinson - 23 December 2020 7:38 PM

Sorry for the confusion. I am looking to clone the drives. I have versioning in my cloud backup (using Crashplan for Small Business) in case I need to roll back a change. I need my local backup cloned so I have an easy swap for zero downtime during the workday.

So with that said, any advice on the four key questions:
What's the best way to set up the backup task and name the drive and/or assign a drive letter to get cloning two separate backups of the same main drive to work?
Can I use the same task, filename, and drive letter for both clones?
Or do I need separate tasks, but I'm ok using the same drive letter and drive name? (for instance - T: Backup - Photo 1 for both backup drives)
Or do I need separate tasks, drive letters, and drive names for the two clones? (for instance - T: Backup - Photo 1A, U: Backup - Photo 1B)

Thanks again!
By jphughan - 23 December 2020 8:13 PM

Ok, that helps.  If you want to clone Source Disk A to both Target Disk B and Target Disk C at different times, you want two separate definition files.  (I'm not sure what you mean when you refer to "tasks" and "filenames", unfortunately.)  In terms of drive letters, clones don't directly use drive letters because Windows assigns them dynamically.  Under the hood, a clone job says "For this clone job, the disk with unique identifier ABC is the source, and the disk with unique identifier XYZ is the target, and Partitions 1-4 [or whatever] on the source need to be cloned."  This means that Reflect will find the right disk even if a volume on it is assigned a different drive letter.

So here's your step by step:
  1. Connect one of your clone target disks.
  2. In Reflect, select your SOURCE disk and select "Clone this disk".
  3. In the wizard that appears, select your target disk as the destination and select which partition(s) from the source you want to clone.  If you want everything, just click Next.
  4. Optionally define a schedule if desired.
  5. When you click Finish, if you want to run the job now, keep that box checked.  But either way, check the box to save the job as an XML definition file and name this job "Clone to Target 1" or whatever.
  6. Repeat Steps 1-5 above, except this time choose your second target disk as the destination and name the definition file something else to differentiate it.
At this point, in your Backup Definition Files tab, you should see two definition files, one for each target disk.  Whenever you want to clone to a particular disk, right-click its definition file and select Run Now.

In terms of drive letters, Windows mostly handles that.  If you want to define custom drive letters, use different letters for each destination disk.  Again, Reflect does NOT rely on drive letters for finding clone targets. But if you for example manually set the first target disk to X, then disconnect it and set the second target disk to X, then Windows will forget that X was ever supposed to be used for the first disk. If you pick different letters for each, then Windows will remember those special assignments for each disk as long as nothing ELSE ever uses those special letters -- which is why it's good to use letters farther down in the alphabet to avoid those letters getting used by the default "next available" letter assignment.
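
If you'd rather set those "farther down the alphabet" letters from PowerShell instead of Disk Management, something like this works -- a minimal sketch using the built-in Storage cmdlets, where the disk number 2 and the letter X are placeholders for your own values:

    # List disks and their partitions so you can confirm which one is the backup target
    Get-Disk | Format-Table Number, FriendlyName, PartitionStyle, Size
    Get-Partition -DiskNumber 2 | Format-Table PartitionNumber, DriveLetter, Size

    # Assign a letter from the far end of the alphabet to the target's data partition
    Get-Partition -DiskNumber 2 -PartitionNumber 1 | Set-Partition -NewDriveLetter X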

In terms of "drive name", I'm assuming you mean volume label. The source disk's volume label will be cloned to the target. You can rename the target afterward if you like, but that rename will get overwritten each time you run a new clone.  There's an enhancement request around here somewhere to allow specifying a different volume label for the target, to make it easier to differentiate source and destinations, but that's not available today.
By tgwilkinson - 23 December 2020 11:30 PM

This is super helpful! Thank you so much! :)

I think all of this makes sense. Let me say it back to you and pose a few questions along the way to make sure I'm tracking correctly.

To clone to two separate target disks I need two separate definition files (what I was calling tasks): one pointing to Target 1 and the second pointing to Target 2. I can set up each of those to run on a timer every night when I save the XML file. Will Macrium give me frequent warnings if I don't have one of those target disks connected while it's stored off-site for a month? And if it does give warnings, is there a place where I can turn them off or lessen how often they appear? I'm very on top of my backups, so I'm not worried about forgetting, and unnecessary warnings annoy me a lot, so I want to avoid them whenever possible.

Every hard drive has a unique identifier within Windows, whether it's ABC for one drive or XYZ for another. Is there a place where I can see this unique identifier so I can note it for my records?

If I assign Target 1 a drive letter of Z, then unplug Target 1 and also assign Target 2 a drive letter of Z, then when I unplug Target 2 and plug Target 1 back in, Windows won't remember that it had a drive letter of Z and will automatically assign it the next available slot. Do I have that right? I prefer assigning drive letters so I can keep my backups grouped together in the second half of the alphabet and not confuse them with my working/source drives. I currently have ten backup drives total between the backup set on site (5 drives) and the backup set stored for the month off site (another 5 drives). So if I'm understanding you correctly, I need to use a unique letter for each of the ten drives, which puts my backup drives on drive letters Q-Z to fit them all in, since Target 1 and Target 2 can't both be assigned the same drive letter. Am I on the right track?

When I clone a drive, Macrium will assign the cloned Target 1 drive the same volume label as the source disk. So if the source disk is called Photo 1, I can't keep my cloned drive labelled Backup - Photo 1 to distinguish it from the source drive. Macrium does not currently have this functionality (which I very much want!), but they are aware that people are requesting it. Is there a way to run a script at the end of a clone operation to rename the target volume once the clone is done? Even with my backup drives assigned letters in the second half of the alphabet, I'm still worried about accidentally confusing which is my working drive one day and then cloning over that day's work because the work was saved on the backup by mistake; it's unlikely, but I'd prefer to eliminate the risk. If that happened I would still have the versioned copy in the cloud, but I may not immediately notice my mistake the next day if I start working on a different set of files that don't overlap with the previous day's work, and then restoring the missing work would be complicated. The ability to rename a target volume after cloning would be huge, and I would love to see this functionality sometime soon. Is there another way besides running a script to pull this off? And is there a place I can add my enhancement request to the pile?

Thanks again! You're extremely clear and helpful, and I appreciate the time you put into responding.

- Tom
By jphughan - 24 December 2020 12:15 AM

Happy to help!  In terms of your additional questions:

If the destination isn't available at the time a clone job fires, the job fails with an error that the destination wasn't available. And the Log tab will show a job log with a failure result. There's no preflight check or warning before a Reflect job runs.  In terms of notifications, Reflect will show the Windows "toast" style notifications when a job starts and finishes, but otherwise you don't have any notification options for clones.  Backups (image and F&F backups) have email notification options, but those aren't available for clones today.  That will be coming with Reflect V8 though.  But if one of your targets will only be connected once per month, I wouldn't recommend setting up a daily schedule for that job.  You could set a monthly schedule if you're that precise about exactly when the off-site disk will be connected, or (the solution I'd recommend) you could just keep that definition file in Reflect with no formal schedule at all and just run the job on a purely manual basis.  If you're already going through the manual effort of bringing a disk back from an off-site location, it seems likely that you'd remember to actually run a clone job to it, so a purely manual routine there doesn't seem like a problem unless I'm missing something here.

The unique identifier isn't unique to the disk at a hardware level.  It's an identifier that gets randomly generated whenever a disk is initialized.  If you were to "clean" the entire disk, i.e. mark the entire disk as empty (which is different from just formatting a partition), then when you initialized the disk again as either MBR or GPT, you'd get a new identifier.  But if you want to see the identifiers, the screenshots below show where you can view them in Reflect: one is from the "Create a backup" tab that shows your currently connected disks, and the other shows the identifiers within the settings of a clone job definition file.  GPT identifiers use a different format from MBR identifiers, and my screenshots show one of each in both places.  One note on the identifier style for MBR disks: Reflect shows that identifier in hex, but at least some Windows tools show that same identifier in decimal, so if you were to look at the same disk through Windows tools, it would look different.
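
If you'd rather pull those identifiers up from a command line than from screenshots, here's a quick sketch using the built-in Get-Disk cmdlet (the disk number 1 is a placeholder); the last few lines show the same MBR signature in hex, as Reflect displays it, and in decimal, as some Windows tools do:

    # GPT disks expose a Guid; MBR disks expose a 32-bit Signature
    Get-Disk | Format-Table Number, FriendlyName, PartitionStyle, Guid, Signature

    # The same MBR signature in hex and in decimal
    $sig = (Get-Disk -Number 1).Signature
    '{0:X8}' -f $sig   # hex form
    $sig               # decimal form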

One other thing to be aware of: If you ever do install one of your clone disks as the new "primary" disk, you will now have a clone job -- possibly with a schedule associated with it -- that still specifies your new primary disk as the destination of the clone job.  That will not be what you want in that situation, of course.  Fortunately though, those jobs will fail because Reflect won't be able to take that disk offline while Windows is running from it.  But you will have to edit your definition files if you want to proceed with clone jobs in that scenario in order to make your former destination disk your new source.

You're correct about how Windows handles drive letters.  But if you're dealing with 10 disks, that's a large chunk of the alphabet. In that case you might want to enlist the assistance of another application that I use for a client, called USB Drive Letter Manager.  It's free for personal use and inexpensive for commercial use.  Despite the name, it works with disks other than USB and can do a whole lot more than manage drive letters, but I'm only using it for what the name implies.

It can be configured to assign drive letters in all sorts of ways, but in my use case, which might appeal to you, I have it set so that whenever a new USB disk is connected, it checks all volumes on that disk for a file at a specified path, in my case \DriveLetter.ini.  If it finds that file, it reads it and assigns that volume the drive letter stored in the file.  And then on all 9 disks in my client's backup disk rotation, I have an identical copy of that file saying that the volume should be assigned drive letter Z.  As long as I don't have more than one of those disks connected at a time, each disk will be Z when connected.  And if I ever DO have more than one disk connected, disks after the first one just get the default "next available" assignment for that specific occasion -- but if that disk were subsequently connected on its own, it would be Z again.

The catch in your case will be that you're dealing with clone disks, not image backup disks, which means the only way for that type of file to get there would be if it also existed on the source.  Having USBDLM look only at USB disks rather than SATA/RAID disks might allow you to create this type of drive letter file on your source disk so that it ends up on all of the destination disks and USBDLM only ever reads it when it exists on those USB disks.  Otherwise, there are other ways you can have USBDLM assign letters.  The documentation for that application is pretty good.
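
Just to illustrate that "letter from a file on the volume" mechanism -- this is NOT USBDLM's configuration syntax, which lives in its INI file and is covered in its documentation -- here's a rough PowerShell sketch of the same idea, using the \DriveLetter.ini path from my example:

    # For each lettered filesystem drive, look for \DriveLetter.ini and, if found,
    # move that volume to the letter stored inside it (assumes the wanted letter is free).
    # Note: like USBDLM, this would act on ANY volume carrying the file, including the source.
    $drives = Get-PSDrive -PSProvider FileSystem | Where-Object { $_.Name.Length -eq 1 }
    foreach ($letter in $drives.Name) {
        $file = "${letter}:\DriveLetter.ini"
        if (Test-Path $file) {
            $wanted = (Get-Content $file -TotalCount 1) -replace '\s', ''   # e.g. "Z"
            if ($wanted -and $wanted -ne $letter) {
                Get-Partition -DriveLetter $letter | Set-Partition -NewDriveLetter $wanted
            }
        }
    }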

In terms of post-clone volume label changes, you could achieve that with a script.  Reflect will even generate a script based on a definition file if you right-click the definition file and select "Generate [PowerShell/VBS] Script".  In the wizard that pops up you'll even see some options to customize pre- and post-job operations, such as running an application before and/or after the job.  But even if you don't use any of that, if you know PowerShell or VBS, you can simply customize the script that Reflect generates in order to add the desired functionality.  Then you'd associate your schedules and manual executions with the script rather than the definition file.  When the script runs, among its operations will be calling Reflect to run the linked definition file, but you can also have it do other things before and/or after.  The trick will be making sure you target the correct partition for volume label changes.  If you have some way to guarantee drive letter assignment of the clone target, you could use that, e.g. "Change the volume label of drive Z to [whatever]".  The other option that occurred to me was to use the unique volume identifier.  Just like whole disks have unique identifiers, so too do individual volumes.  But those get regenerated every time the volume is formatted, and I'm not sure what happens with a clone.  I can't test this at the moment, but I suspect that in a clone scenario, Reflect might regenerate the volume ID of the target because Windows may not allow two volumes with the same ID to be mounted at the same time.  That would be a problem in a case like this since there would be no way to know what the volume identifier will be after a clone.  But I'll tell you what. If you figure out a way to get consistent drive letter assignment to your clone targets, then I will help you with the scripting piece.  I know a bit of PowerShell, and a command that says, "Rename drive Z to [this]" would be pretty simple.
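
For reference, the rename itself really is a one-liner with the built-in Set-Volume cmdlet; the letter and label below are just placeholders along the lines of your "Backup - Photo 1" naming:

    # Give the clone target a distinct volume label once it has a predictable drive letter
    Set-Volume -DriveLetter Z -NewFileSystemLabel "Backup - Photo 1"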

Here are the screenshots I mentioned earlier showing the unique identifiers of GPT and MBR disks. Enjoy! :)

[screenshots attached]
By tgwilkinson - 25 December 2020 9:27 AM

So the toast-style notification will pop up when the cloning happens and then disappear, but there won't be any lingering notification if a job fails unless I look in the Log tab? I'll be asleep when the clone job runs, so toast notifications aren't an issue. Failed-job notifications would be annoying, but it doesn't sound like there will be any.

When I go to my off-site storage I swap the five backup drives that I've had running for the last 30 days with the ones that have been sitting in storage, and the out-of-date drives take their place for the next 30 days. For instance, the Target 1 set of drives will be used for, say, January and the Target 2 set for February, and then in March I'll go back and swap those out for the Target 1 set again for all of March. Should I make the definition files all run every night even if only half of the expected targets are attached at any one time? Or would it be better to modify the XML files at the start of each month so only half of them run? I'm assuming definition files can be modified once they've been created.

Good reminder on revising definitions if a clone ever needs to swap in to be my primary working drive. Only one of my five main drives has Windows running on it, so it would definitely have been an issue if I hadn't remembered to do that; though if I am swapping in a clone for the source drive then the source drive is likely out of commission and unlikely to be connected going forward (and if it is it will have been re-initialized so Macrium won't recognize it because the Device ID will be different at that point, right?).

USB Drive Letter Manager is a great suggestion! I never would have found that on my own. Unfortunately, filtering by connection type won't work for me: of my five source disks, two are Thunderbolt 3 RAID arrays (technically two partitions on a single four-drive array), one is my internal M.2 SSD, and the last two are two partitions on a single USB 3.0 drive. All of the target drives are USB 3.0, so the RAID and M.2 sources should be easy to set up within USBDLM, but cloning the two USB source partitions to another USB drive poses a problem for assigning a drive letter after cloning.

I read through the manual and a few options stand out for designating the correct drives. First, do you think a Volume Serial Number or a Device ID would stick around on the Target disk after cloning or would it be replaced by the Source disk's values for those two things? Because that seems easy enough to set up if it would. The other option I came up with is that since I'm working with partitions on several of my backup drives I could make the partitions slightly odd sizes by a few hundred KB or MB to distinguish them from the original and use Drive Letters by Drive Size to assign drive letters.

Also, does USBDLM allow AND/OR logic so I could pair criteria for USBDLM to decide whether a drive letter should be assigned? For instance, could I have it look at the volume label first, and then, if the label is what it's expecting (say Video 2), check the drive size, and if that is also the expected value (say 7.9 TB instead of 8 TB like it is on the source drive), assign the designated drive letter of V? Would that work?

And your offer to help with the PowerShell scripting is so kind. Thank you! :)

One concern I have with scripting is when and how USBDLM kicks in. After Macrium runs the clone, what is the target drive's drive letter: the one it had before the clone, or the next available one Windows could assign? And does USBDLM kick in automatically after the clone to change the drive letter, or does USBDLM need to be scripted to run after the clone job? Then, of course, after the correct drive letter has been assigned, a script could run to assign the correct volume label based on the drive letter. But I think I'm a little confused about how drive letters are assigned during and after cloning, particularly once USBDLM is added into the mix.

And thank you for the screenshots! Is the unique ID there what the USBDLM manual referred to as a Volume Serial Number or is it a Device ID? I thought the two terms were the same thing, so I was surprised to find them separated out in the manual, which only made the screenshots all the more confusing. Haha.

But that said, I'm starting to wrap my head around a game plan for how to set this up, and that's thanks to you!
By jphughan - 25 December 2020 3:57 PM

Ok, I thought you had two disks here that were being swapped on a monthly basis.  But in fact you have TEN disks, and you want to use CLONING for ALL of them?  It’s unfortunate you’re not using image backups, because that would make all of this a whole lot easier. At my client that has the daily rotation of 9 disks, the entire image backup routine is managed by a single definition file that has a single schedule entry — Incremental backup every day — because they’re using an Incrementals Forever with Synthetic Fulls strategy.  And it works because the definition file just needs to target the Z drive each time, which USBDLM handles easily since image backups don’t overwrite the destination disks the way clones do.

I don’t know if USBDLM allows you to “chain” rules together so that it only assigns a letter if all of the criteria are met. I suppose you could give it a try by combining multiple rules as shown in the documentation examples and seeing whether it only matches when ALL conditions are met or when ANY conditions are met.

If you need to clone USB to USB in some cases, I agree that poses a challenge for USBDLM. In terms of solutions, I’m not entirely sure what happens to volume identifiers in a clone scenario. My guess is that the first time you clone Volume A (with Volume ID A) over to Volume B, you will find that Volume B has new Volume ID B, not the same ID as Volume A.  But going forward, if subsequent clones can use Reflect’s Rapid Delta Clone feature, then new Volume ID B will be retained, rather than you getting a new volume ID on the target every time. On that subject, if you use the partition size method, make sure the target is still at least as large as the source. Rapid Delta Clone can’t be used when the target is smaller than the source, and that capability can be a huge time saver depending on the amount of data you’re cloning.

In terms of post-clone drive letter assignment, when a clone completes the volume gets mounted for general use again (after having been taken offline so that Reflect gets exclusive access to it during cloning), and at that point what happens would be the same as what would happen if you had just connected that disk after having it disconnected. So without any other influences, Windows would assign that volume a drive letter however it would have otherwise. Or if you have USBDLM, then it will see that a new volume has appeared and then determine if it’s supposed to take any action on it based on the characteristics of the device/volume and your USBDLM configuration. In my scripting offer, the assumption was that you would have set up a routine whereby any time you connected a disk or finished a clone to that disk, it would get assigned a specific, predictable drive letter. And in that case, then the PowerShell script can just say, “Assign drive Z this volume label.”

As for my screenshot, the Volume Serial Number, and the Device ID: those are all different things. The Disk ID shown in Reflect applies to the entire disk. The Volume Serial Number is randomly generated when a volume is created and can be found using the “vol” command shown in the USBDLM manual. But that’s actually different from the volume unique identifier that I referred to above and that Reflect can use for identifying destinations for image backup jobs (so that drive letters aren’t a factor); you can see that one in Reflect by clicking a specific partition and checking immediately underneath the “Details” heading in the navigation column along the left edge. The Device ID is a hardware-level identifier that does NOT change, but it’s also unique per PRODUCT, not per unique device — so if you had two identical WD USB 3.0 external drives, they’d have the same Device ID. And Device IDs are found on every type of product, not just storage devices.
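
If it helps to see all three side by side, here’s a quick way to pull each one up in PowerShell (the drive letter Z is a placeholder; Get-PnpDevice and the Storage cmdlets ship with Windows):

    # Volume Serial Number (the short hex value the "vol" command shows)
    Get-CimInstance Win32_LogicalDisk -Filter "DeviceID='Z:'" | Select-Object DeviceID, VolumeSerialNumber

    # Volume unique identifier (the \\?\Volume{GUID}\ path)
    Get-Volume -DriveLetter Z | Select-Object FileSystemLabel, UniqueId

    # PnP instance paths for disk drives (these contain the vendor/product portion that Device ID matching keys on)
    Get-PnpDevice -Class DiskDrive | Select-Object FriendlyName, InstanceId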

I think you may need to experiment a bit here to see what happens with volume identifiers and Volume Serial Numbers after an initial clone and then after subsequent clones that use Rapid Delta Clone. If you find that either of them does NOT change when RDC was used, that may give you a good solution here, because if RDC can be used, then for the most part it will always be possible to use it. Swapping to a new source disk would likely throw a wrench into the works there, but those occasions are unlikely and presumably infrequent enough that it could still make sense to perform drive letter assignment based on something that you’ve found “survives” RDC cloning. If you ever encounter a situation where RDC couldn’t be used and you therefore get a new Volume Serial Number and/or unique identifier, then the only thing that breaks is drive letter assignment. That wouldn’t be the end of the world, and typically after the first clone that couldn’t use RDC, subsequent clones would be able to again — unless there’s a condition that will always prevent RDC from being used, such as cloning to a partition that’s smaller than the source (regardless of how much space is being used) or cloning non-NTFS partitions.
By tgwilkinson - 28 December 2020 8:38 PM

Sounds good! I'll play around with it this week to see how I can make drive letters stick and let you know how it goes.
By tgwilkinson - 29 December 2020 5:47 AM

I set up USBDLM today. I have it using DeviceID to assign drive letters. I read a cybersecurity company's article that said the DeviceID is written into the firmware of the HDD, which should mean it will stick around after cloning. Now I just need to test that theory.

I went to test it today with Macrium, and I was reminded of the other problem I kept running into with cloning using Macrium. One of my backup target drives is 14 TB and, using partitions on that 14 TB drive, it's supposed to serve as the backup drive for four separate hard drives -- two are 2 TB M.2 internal SSDs and two are 3-6 TB partitions on a single USB drive (so the source is three physical drives total going to a single physical target drive with four partitions).

But when I go to clone the source partitions, Macrium wants to make all new partitions on the target drive (as shown in the first and second images below). I cannot figure out how to make Macrium clone the source partitions and drives onto the already existing partitions I made for them. The sizes match perfectly down to the byte, but the program doesn't want to play nice.

In the example from the pictures below, how do I make E: Music 1 clone onto U: Music 1 (one of the middle partitions)? At the moment, it seems like Macrium will only allow me to clone E: Music 1 to the first partition (S:), which it will resize to make that work. That doesn't make any sense to me, because U: Music 1 is the perfect size for it and I can't figure out why Macrium wouldn't default to the perfectly sized available option.

Since that wasn't working, I decided to try another option and clone both partitions on the source drive (E: Music 1 & H: Video 1) to see what would happen. As you can see in the third image, Macrium would then erase all three pre-existing partitions on my target drive and start from scratch with a partition labelled S: and a second partition that would be auto-assigned a drive letter.

How can I set up my clone job/definition file so Macrium is cloning to the correct partition with E: Music 1 going to U: Music 1? And how can I set it up so that I'll later be able to clone all four partitions/drives to this one physical target drive partitioned into four?

(Note: the gap, the unassigned drive space of 1.82 TB in that first image, is to back up a drive that doesn't exist yet. It'll be labelled T9 SSD 2, so that's why it's positioned there, but again, since the source drive doesn't exist yet, I didn't see the point of creating the extra backup partition (T:) just yet. Just trying to futureproof wherever possible.)

[screenshots attached]
By jphughan - 29 December 2020 5:55 AM

I helped someone else set up this routine of cloning multiple source disks onto a single target.  The easiest method is to manually partition the target disk as needed to accommodate all of the intended source partitions.  Just create the necessary number of partitions of the required size and leave them empty.  Then open Reflect.  Select the first disk that will be one of the sources and click "Clone this disk".  Then in that wizard, drag each source partition you want to clone onto the desired empty destination partition. Do NOT use the "Copy selected partitions" function, since that will cause the source partitions to be copied to the destination in a way that the destination partitions will match the size and "offset" (position on disk) of the source partitions. That is not what you want here.  By comparison, the drag and drop method tells Reflect, "Clone this source partition into this existing destination partition, even if the destination partition is a different size and/or is located in a different portion of the target disk."  Then click Finish and choose to save this clone job as an XML file.  Finally, repeat this process for each of the other disks that need to be a source disk for this clone setup.  Obviously make sure you don't end up with different clone jobs targeting the same partition on the destination disk.
By tgwilkinson - 3 January 2021 6:09 AM

Ah ha! Drag and drop! I see where it says those words at the top of the dialogue box now, but I had skipped over it because it's not obvious it would have a different effect than the Copy Selected Partitions method. Good catch! I have my definition files set up, but I ran into two additional problems when I ran the clone (I haven't had this many problems setting up a piece of software in years, so I appreciate your help!).

[screenshots attached]

The first is that I set up the first definition file to clone the partitions on one drive to two pre-formatted partitions on my backup destination drive (as shown in the first picture). The backup ran for two days, which seemed slow, but Macrium is still new to me and perhaps that's normal? After two days, when the clone completed, my destination drive had lost one of the partitions it had just backed up. The partition X: Video 1 was gone (I unfortunately forgot to take a screenshot with the missing partition), while the other partition had successfully cloned and was still in the same section of the disk.

So I set up the X: partition again and ran the same clone definition again. I expected it to go slightly faster this time because there was already a copy of the Music 1 data on the backup destination and I have Rapid Delta Clone turned on, so the Music 1 portion of the clone operation should have flown by and most of the cloning time should have gone to the Video 1 partition—but the backup took another two days. At the end of cloning, I had another missing partition, but reversed: this time the U: Music 1 partition was missing and the X: Video 1 partition was the one that stuck around (as shown in the second picture).

In contrast, the previous clone operation from the SAME definition file went S: T9 SSD, empty partition, U: Music 1, and then an empty partition in place of the X: Video 1.

So that's the first big mystery. The partitions are the same size as their source, which is what you recommended as necessary (which makes sense!), so I don't know why this is happening. Hopefully this problem isn't too hard to solve.

The bigger issue is that even testing out a clone job right now is PAINFUL because of how long it takes. I opened the Task Manager during the second attempt at cloning and discovered that there's a bottleneck somewhere, but I don't know how to troubleshoot it. The fastest transfer rate I saw when using Macrium capped out at 29.6 MB/s, and it sometimes dipped as low as 0.6 MB/s sustained (see the third screenshot below), even though I'm transferring from a USB 3.0 drive to another USB 3.0 drive using the original WD USB 3.0 cables, and both are plugged directly into the motherboard's USB 3.0 ports and NOT into a hub (though I do have hubs attached for other peripherals). Here are four sample screenshots from over the course of the two days that show how low the transfer rate stayed:

[four Task Manager screenshots attached]

That fourth screenshot especially hurts because I'm getting faster backup to the cloud with Code42's CrashPlan (36.4 MB/s) than I am from a local disk transfer (1.5 MB/s)!

The other thing I noticed is that in every screenshot except the fourth one, Macrium's disk transfer speed and the System disk transfer speed are almost identical. I don't know if that's how Macrium is set up to work, or if the bottleneck is filtering down to other applications as well. In the last twelve hours of the second clone job, the taskbar, Google Chrome, and the Task Manager became unresponsive. However, during that same time I was still able to click play on a game in Steam (I already had a Steam window open when everything else became unresponsive), and I was able to play the newest game in 1440p with ray tracing and everything without a hitch despite the I/O errors. The game is on the M.2 internal SSD, so that probably plays a role, but it's interesting nonetheless in trying to diagnose this.

The slow transfer rate also doesn't make sense because for most other applications the drive works fine. I ran a couple CrystalDiskMark tests and I'm getting the 110-150 MB/s data transfer rates I'm expecting.

Source: [CrystalDiskMark screenshot attached]

Destination: [CrystalDiskMark screenshot attached]

So the drives seem to be connected correctly, but something is wrong somewhere.

The only other time I have trouble like this with I/O is when using Adobe Lightroom to edit photos. The program will work relatively fine for the first half of a workday, but a few hours into rating, labelling, and editing images (all of which involve tiny read/write operations to a database file, as well as the initial read of the larger original image file), all those changes add up somewhere in the system and the program bogs down. Every operation then takes 1000x longer than it should, especially when I'm pulling metadata from multiple image files at once, until the program either becomes unresponsive or shifts into a "redraw the screen" state where the Lightroom window toggles between a partial full-screen mode and true full-screen mode in an infinite loop, even though I didn't trigger a change in full-screen modes, with no way to stop it except a hard restart.

It seems like another example of a bottleneck in my system that eventually grinds everything to a halt and often makes the taskbar and File Explorer unresponsive, similar to what I'm seeing happen with Macrium. I only bring it up because perhaps the problems are linked, perhaps not. But I thought one other example of a time I see similar system behavior might help in diagnosing the problem.

I'm at a real loss here for how to troubleshoot this, so any insight or help you can offer would be hugely appreciated. Let me know if sharing the specs of my system or some other piece of information would be helpful.

Thanks again,
Tom
By jphughan - 3 January 2021 7:56 AM

Hey Tom,

Well that's a rather mixed bag of findings!  I'll address what I can here.

First, looking at the first screenshot in your post above, it does appear that you've staged the clone perfectly correctly for what you're trying to do.  So you're saying that when you actually proceeded to run that clone you had staged, exactly as shown in that screenshot, the job cloned two partitions but only one of them actually existed on the destination when all was said and done?  And then the second time around you ran the same job and the OPPOSITE partition disappeared?  Unfortunately I can't account for what happened there at all.  That might be worth contacting Macrium Support about, because I can't imagine why it would even be POSSIBLE to stage a clone that would result in 2 partitions being cloned but one of them not actually existing at the end.  My GUESS as to what MAY be happening here is that somehow the partitions are being staged with some sector overlap, in which case whichever partition gets cloned second ends up surviving because it got laid down such that it partially encroached on the sector space of the adjacent partition, thereby rendering that other partition effectively gone.  And if so, this may be some bug that only gets triggered when dealing with large disks or something.  I don't have disks of the size you're working with to test with.  And again, that's just a guess.  But if you want to look into it, check the logs of the two clone jobs you ran to see the sequence of cloning for each one.  Given that a different partition survived on your two different attempts, do the logs indicate that the partitions were cloned in a different sequence on each of those two attempts?  If so, was the partition that ended up surviving after each attempt the first partition to be cloned, or the second?

As to your transfer rate, I've never cloned anything close to 4.76 TB before, but using 48 hours based on your note that it took 2 days, that would suggest an average transfer rate of just 27.54 MB/s.  That's roughly USB 2.0 speeds.  I can think of two possibilities right off the bat:
  • Third-party anti-virus can sometimes interfere with Reflect's operations, severely degrading its performance.  Are you running any such AV, other than Windows Defender?
  • HDDs that use SMR (shingled magnetic recording) can have very poor performance under workloads that involve sustained write activity, as would be the case for the target disk of a clone operation.  Does your target disk use SMR?
By tgwilkinson - 5 January 2021 9:38 PM

Good news! I got half of my backups working.

I'm using Western Digital drives and only their 2-6 TB drives seem to use SMR, and I'm using 8 TB and 14 TB drives. I also checked the Macrium logs like you suggested and discovered that, while my USB 3.0 to USB 3.0 backups were going super slow, my internal Samsung SSD to USB 3.0 backup was working correctly at blazing fast speeds. So the kind of destination drive I'm using doesn't appear to be the source of the issue.

For what it's worth, I'm also not running any third-party anti-virus software. The closest I get to that is Code42's CrashPlan cloud backup service which runs constantly in the background logging which files have been changed so it can back those up to the cloud. But now that I've gotten a few of the backups working, I've tested running Macrium clones with CrashPlan running as well as paused and it doesn't seem to make an appreciable difference since the fastest transfer rates I'm seeing with CrashPlan cap out at 30 MB/s due to my cable internet speeds. I could see potentially pausing the CrashPlan backups for an hour or two when Macrium is set to run, but I'm not sure it'll make a difference in the long run.

So on to the solution! After reading an old post on this forum about Logitech wireless mouse dongles causing slowdowns in Macrium backups, and another article related to my problems with slowness/crashing in Adobe Lightroom that suggested keeping my mouse and tablet drivers up to date, I decided to investigate the state of my mouse.

I made a few changes, and I'm not sure which of them solved the problem, so I'll list them all. First I tried unplugging the wireless dongle, as suggested in the Macrium forum thread linked above, but that did nothing for my transfer speeds. Next I tried to open Logitech's G Hub application (which manages the wireless mouse settings like DPI and button assignments) to check on my drivers, and the program hung on the opening screen. I'd seen this behavior before, where the program hangs indefinitely, but didn't pay much attention to it in the past because I keep all my mouse settings loaded onto my mouse's internal memory. So I did two things next: first I repaired/reinstalled G Hub to get the most up-to-date mouse drivers installed and working on my system, and then, since I have my mouse in onboard memory mode and don't actually need G Hub managing my mouse settings in real time, I went into the G Hub settings and unchecked the box for "always start after logging in." Between those things, the next time I ran a USB 3.0 to USB 3.0 backup it worked perfectly. Transfer rates were 150-200 MB/s for the entire backup process, and the transfers completed in a few hours.

I also set up all new backup definitions and, rather than creating partitions in advance (using MiniTool Partition Wizard), I dragged and dropped the source drives onto unallocated space and let Macrium make the partitions so there were no overlapping sectors (because I think you're right, the sectors were overlapping, and the order in which the backups happened determined which drive stuck around). I also separated out the Music 1 and Video 1 clones into two separate clone definitions rather than doing them back to back in the same definition in case that helped in some way.

I share all that (a) in case it helps someone else in the future and (b) because I still have two drives that back up at painful speeds of 30 MB/s or less (again, sometimes as low as 0.1 MB/s!). But at least the problem has been narrowed down to something more specific ;)

So here's what I know about the backups that don't work, and maybe this rings some bells for you: My source drive is an OWC Thunderbay 8—it has eight drive bays (though I'm only currently using four) and it connects to my PC using a Thunderbolt 3 cable into a Gigabyte Z390 Designare motherboard, which supports Thunderbolt (one of the few PC motherboards that do). The Thunderbay currently has four drives in it, all 4TB HGST UltraStar 7K4000 (HUS724040ALE640). The drives operate at 7200 RPM, use 4096-byte sectors, and have a 64MB drive cache (they're a few years old!). The drives are joined together into a Dynamic Disk using RAID 0. I created the Dynamic Disk/RAID using standard Windows Disk Management because I'm only a year into being a full-time PC user and wasn't sure what else to use (Is there a better piece of software for this? Should I have set up anything special in my BIOS I might have missed?). I can read and write to the drives fine in everyday use, and I'm getting 400 MB/s read/write speeds on them in CrystalDiskMark even now that the drives are 50% full, so as sources they should be fine. But when it comes to Macrium, they're the only disks I can't back up at this point without running into super slow transfer speeds again.

So with that setup it feels like the problem is something related to either the RAID/Dynamic Disk or it's an issue with Thunderbolt since I've got other backups working fine now...and this is where my knowledge runs out. So any suggestions on things to try would be hugely appreciated!
By jphughan - 5 January 2021 10:13 PM

Glad to hear you've made some progress!  As I was reading through your post and saw your findings that internal to USB 3 worked as expected but USB 3 to USB 3 didn't, my next theory was going to be a bad or just poorly designed USB host controller chipset that just crumpled when heavy I/O was occurring on both reads and writes simultaneously.  But I'm glad to hear that you found a fix, although I never would have expected a Logitech receiver/driver to wreak that sort of havoc!  That rather makes me worry about what sorts of things Logitech is hooking into, since that side effect really shouldn't be possible.  The dongle itself though can be an issue from a HARDWARE standpoint because USB 3.0 traffic and 2.4 GHz radio frequencies do not play well together.  That's why some routers that have USB 3.0 ports have those ports located in places that are unintuitive or downright inconvenient, such as on the front edge or on the sides near the front, rather than on the back where everything else is -- because placing the USB 3.0 port in that location minimizes interference with the WiFi antennas.  My own ASUS RT-AC88U router has its USB 3.0 port on the front, and if I had anything plugged into it, having that cable sticking out would rankle even my minimal aesthetic sensibilities, but fortunately I don't use it.  Still, I remember some reviewers criticizing this router's USB 3.0 port placement because they didn't realize that it was a concession to function over form.

Great to hear that your partition issue is resolved too.

In terms of your Thunderbolt storage array, I'm not a fan of dynamic disks or RAID 0.  Dynamic disks can introduce a variety of hassles, especially if you ever need to restore them (link), which I realize you're not doing.  But I mostly don't like them because I tend to avoid software-based RAID setups in favor of hardware-based RAID setups -- although the needle is swinging back to software in this era of software-defined everything (storage, networking, firewalls, etc.)  Windows has more recently introduced Storage Spaces, which I believe is meant to be a replacement for dynamic disks, which have been around since at least the Windows XP days, but I admit I haven't looked into Storage Spaces a great deal.  You may wish to, however.

But I'm REALLY not a fan of RAID 0 in any form, for the simple reason that the failure of any disk causes the entire virtual disk to be lost, which means your risk of data loss is multiplied with every disk that you add, i.e. your risk of losing all of your data in a 4-disk RAID 0 is 4x greater than your risk of losing all of your data when you only have a single drive.  Especially if you have drive bays available in your enclosure, I would really encourage you to consider adding disks so that you can switch to a RAID level that adds redundancy, such as RAID 6 or 10, while still maintaining the amount of usable storage you require.

In a 4-disk setup, both RAID 6 and 10 would give you the same amount of usable storage.  But RAID 6 offers more "guaranteed" redundancy in that ANY two disks can fail and the array will still be fine, whereas with RAID 10, certain combinations of two disks can fail while keeping the array intact, but you're only guaranteed to be able to survive a single disk failure.  RAID 6 does suffer on write speeds, though, due to the required double parity calculations.  In a 6-disk setup, everything I just said still applies, but the additional consideration is that RAID 6 will give you more usable storage: with RAID 6, the usable storage is the total number of disks minus 2, whereas with RAID 10 the usable storage is 50% of all disks.

All that said, I can't immediately account for why you'd be able to get solid read speeds from the array in CrystalDiskMark but have poor performance when making a backup of that array in Reflect.  But what is the destination of that backup job?  And if thus far you've only benchmarked the storage array (the source) and the destination separately, then you haven't looked at the whole picture yet.  In order to get a point of comparison to the Reflect scenario, you'd need to test performance when copying a large file from the storage array to your Reflect backup destination in Windows.  What sorts of transfer rates do you see for that test?
By tgwilkinson - 13 January 2021 9:01 PM

Ha! Funny. We're using the same router. The good news is my wireless dongles (for the mouse and Wacom tablet) are plugged into my monitor rather than my motherboard so they don't cause as much interference. And I have a 10' extension cable for the computer's WiFi antennas, so they're nowhere near the back of the case.

I copied and pasted a 10GB folder of medium-sized image files (2k files total) and it copied at about 140-170 MB/s. I also tried a copy and paste of a 100GB folder of tiny cache files (the folder contained over 250k files) and it kept a 45-60 MB/s transfer speed. Both of those are much faster than Macrium could manage, maxed out as it was at < 30 MB/s. The transfer rate of the first copy-and-paste job was likely limited by the source drive being a single USB 3.0 drive. So the slow transfer rate seems to be limited to Macrium (and maybe other programs I'm unaware of yet).

I tried to follow your advice about Dynamic Disks to see if that fixed the problem. I backed up all the files on the RAID enclosure and deleted the dynamic disk. But when I went to replace it with a Storage Space I got the following error:

[error screenshot attached]

I've updated to the most recent OS and Thunderbolt drivers. I'm using the Thunderbolt 3 cable that came with the enclosure. All of the disks are showing up in Disk Management and I can read/write to them. So I don't know what "check the drive connections" could mean in this context. I'm at a loss and stuck again.

As far as RAID goes, I've been running RAID 0 for six years without a serious issue. There's a reason I keep three backup copies! And it helps that I'm using enterprise hard drives rated for two million hours MTBF. Sure, drives have failed, but they have a five-year warranty and get RMAed quickly, and I'm back up and running in no time. I wouldn't keep any fewer than my current three backup copies operating even if I were running a form of RAID with built-in redundancy, so moving to another form of RAID feels like spending unnecessary money for a fourth backup copy I don't necessarily need. But I appreciate the explanation of RAID 6, which I was less familiar with. RAID 6 sounds appealing, but I'd need to read more about the performance hit to decide if it's worth it. At this point the biggest issue with changing anything about my RAID setup is that the drives are so old I could only get refurbished drives to match, and I wouldn't trust those. So I'd have to lay out over a thousand dollars to put in 6-8 new drives to get RAID 6 or 10 working, which is out of the question for this year at least.
By jphughan - 13 January 2021 9:24 PM

Ok, so you’ve validated the data path outside of Reflect and it’s performing within expectations. Macrium has said here that Reflect uses standard Windows APIs for reading and writing backup files, which are also commonly used by other applications. So my guess is that something else on your system doesn’t like Reflect itself. Normally that’s third-party anti-virus, and the fix is to whitelist Reflect’s various processes as well as excluding backup destination folders from scans. But if that’s not it, this might be a case for Macrium Support.

As for Storage Spaces, as I said I haven’t looked into it in detail, but if you haven’t already, I’d Google that hex error code. I suspect there may be some hardware requirement that isn’t being met here. Maybe Thunderbolt/PCIe storage isn’t allowed, for example?
By tgwilkinson - 13 January 2021 9:34 PM

Where should I start with the whitelisting process? That's something I'm unfamiliar with.

And when you say to exclude backup destination folders from scans, which type of scans do you mean? And are those scans in Reflect or in Windows?

Searching the error code gave me back nothing useful, unfortunately, so I reached out to the enclosure manufacturer (OWC) to see if they could diagnose it.
By jphughan - 13 January 2021 9:52 PM

Whitelisting and folder exclusions would be accomplished in the interface of any third-party anti-malware solution you may be running, and the exact steps would vary from one solution to the next. But if you’re only running Windows Defender, then you shouldn’t have to do that.

I kind of doubt OWC will be able to help with that issue, but can’t hurt to ask. You might be better off looking for Microsoft TechNet documentation about Storage Spaces, in particular its hardware and software requirements.
By tgwilkinson - 21 January 2021 9:44 PM

All right. It's fixed. Wow was that stupid. I'll document what I did here in case it's useful to someone down the road:

I occasionally heard some dings indicating that USB devices were randomly disconnecting and reconnecting. So I installed NirSoft's USBLogView to track what was happening. It turns out the USB ports/hub built into the back of my NEC MultiSync PA271W monitor don't want to play nice with Windows. Even with nothing plugged into them, the hub still randomly disconnects and reconnects itself. The monitor is seven and a half years old, so it is what it is.

I had two wireless dongles plugged into those ports: one for my Wacom tablet and another for my Logitech G502 Lightspeed mouse. I unplugged both of those and, inspired by this forum post, I moved the Logitech wireless dongle off the USB 3 bus and plugged it into the dedicated USB 2.0 mouse port on the back of my PC. And voilà, Macrium is copying at full speed on both my USB and my Thunderbolt 3 RAID enclosure backups now. I'm not going to test it any further because it's working, but either moving the wireless dongle off of the monitor port OR moving it to the USB 2.0 bus fixed the problem. And not only is Macrium getting better I/O speeds, but the entire computer feels more responsive. Huzzah!

So back to the original question of working with multiple clones. How do I go about setting up a script to rename drives (with dedicated letters, thanks to USBDLM) after a clone operation has completed? I can't believe it took a month to get back to this, hehe.
By jphughan - 21 January 2021 10:14 PM

Congrats!  Although I know firsthand how frustrating it is to end up having had to chase down a problem cause that shouldn't have been causing a problem in the first place.

In terms of the clone setup, it just now occurred to me that I'm not entirely sure that USBDLM will work in this specific scenario.  The reason is that I think -- but am not certain -- that USBDLM looks at drives when they are physically attached, e.g. plugged into a USB port (or whatever).  In the Reflect clone scenario, the target volume is taken offline (so that Reflect has exclusive access while it clones onto it) and then brought back online -- but that of course isn't technically the same as a device becoming completely disconnected and reconnected.  However, it's possible that USBDLM will look at the volume when Windows mounts it again, even if the underlying device remained connected the entire time, in which case this will work fine.  Again, this will just be something you'll have to test.

On that note, the first order of business is in fact to make sure that USBDLM works as desired before modifying a Reflect clone script, because if USBDLM isn't reliably assigning the cloned partition a specific drive letter, then the script won't reliably be able to find it.  So here's what I would suggest:
  1. Set up USBDLM to assign whatever drive letter you want based on whatever condition(s) you want, per our earlier discussion.
  2. Test that configuration by manually changing the cloned partition to some OTHER drive letter in Windows Disk Management, then running another clone to that disk.  Does USBDLM switch the cloned partition back to the desired drive letter when the Reflect clone finishes and the volume reappears in Windows?  You may also want to ensure that USBDLM does NOT alter the source partition's drive letter unexpectedly when the device hosting that partition is disconnected and reconnected.
If you've got USBDLM working as desired outside of Reflect, then here's what you'd do for the clone:
  1. In Reflect, go to the Backup Definition Files tab, right-click the definition file for your clone job, and select "Generate PowerShell script".
  2. Click OK in the wizard that appears since you don't need to make any customizations there.
  3. You'll now have a script file under the PowerShell Files tab in Reflect.  Right-click that and select Edit.  That should open the script in PowerShell ISE.
  4. Search for a line that reads "# Handle backup success..."
  5. Directly below that line, add these two lines:

    Start-Sleep -Seconds 5
    Set-Volume -DriveLetter "X" -NewFileSystemLabel "MyClone"
  6. Change the "X" above to the drive letter you're using for your clone destination, and change "MyClone" to whatever volume label you want to use.  The 5-second delay beforehand is intended to give Windows time to remount the volume and for USBDLM to look at it first.  It may or may not be necessary, but it should be more than enough time.
  7. To test this all out, deliberately change your clone target's drive letter to something else AND change its volume label to something else.  Then in Reflect, right-click the clone script -- under the PowerShell Files tab -- and select Run Now > Full (the backup type submenu is pointless for a clone job, but it works).  See what happens within a few seconds after the clone job completes; there's a quick check below this list you can use to confirm the result.
  8. If all goes well and you want to schedule this, then make sure you associate your schedules with the script, not the definition file -- although you will still need to keep that definition file under the Backup Definition Files tab because the script refers to it.
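
One optional sanity check after a test run, using the same built-in Get-Volume cmdlet (substitute the letter and label from Step 6):

    # Confirm the clone target came back with the expected letter and label
    Get-Volume -DriveLetter X | Select-Object DriveLetter, FileSystemLabel, Size
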
Good luck!