By segnohh - 21 October 2019 9:29 PM
Hi, I use Macrium Reflect 7 on Windows 7 to clone my SSD to an HDD with the rapid delta clone option as my regular backup method. Recently, a couple of times, I got read errors on my SSD during the cloning process; that stopped the job, and since the clone had already overwritten the last backup, all the data on the HDD was inaccessible and I lost it. Fortunately, I was able to use Linux and its commands to fully clone the SSD to the HDD; those routines did not stop when read errors were detected, so I could clone the disk and then repair the errors. I think this kind of issue could be avoided with a read check before the cloning process starts, so currently the software is not reliable for this. I hope you correct this issue.
My software: Edition: Workstation Edition (64-bit), Software 7.2, Build No.: 4473, Operating System: Windows 7 Home Premium - Service Pack 1
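For reference, this is not how Reflect works internally; it is just a minimal sketch of the general "keep going on read errors" behaviour that Linux tools such as GNU ddrescue or dd with conv=noerror,sync provide: read the source block by block and, if a block cannot be read, zero-fill it on the destination instead of aborting the whole job. The device names /dev/sdX and /dev/sdY below are placeholders, and it assumes you run it as root from a rescue environment with both disks unmounted.
```python
#!/usr/bin/env python3
"""Minimal sketch of an error-tolerant block copy (ddrescue-style).
Placeholder device paths; run only against disks you are sure about."""

SRC = "/dev/sdX"      # hypothetical source disk (e.g. the failing SSD)
DST = "/dev/sdY"      # hypothetical destination disk (the backup HDD)
BLOCK = 1024 * 1024   # copy in 1 MiB chunks

bad_blocks = 0
# "r+b" avoids truncating/creating anything; both opens are unbuffered.
with open(SRC, "rb", buffering=0) as src, open(DST, "r+b", buffering=0) as dst:
    offset = 0
    while True:
        try:
            chunk = src.read(BLOCK)
        except OSError:
            # Unreadable region: write zeros, skip past it, keep going
            # (a real tool would handle the disk tail and retries more carefully).
            bad_blocks += 1
            chunk = b"\x00" * BLOCK
            src.seek(offset + BLOCK)
        if not chunk:
            break  # end of source device
        dst.write(chunk)
        offset += len(chunk)

print(f"Done. Unreadable blocks zero-filled: {bad_blocks}")
```
The point of the sketch is only to show why the Linux route finishes despite bad sectors: the error is handled per block, so one unreadable sector costs you that block rather than the whole destination.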
|
By jphughan - 21 October 2019 9:53 PM
I believe there's a way to tell Reflect to ignore bad sectors during a clone job, but even with that capability (and ignoring the fact that having to do this would suggest an underlying problem with your source), a strategy of performing a clone on a regular basis should absolutely not be considered a legitimate backup strategy. Suppose your source disk completely died in the middle of a clone job. At that point you would be left with no source data and no valid destination data either, because a clone that fails partway through does not leave you with a destination in a usable state. And neither an option to ignore bad sectors during the clone nor an option to read the entire clone source before writing to the destination would protect you from this risk. Clones are certainly convenient to have, but they are not especially resilient precisely BECAUSE you have to put the destination into an unusable state for the duration of the clone, which means that during that time, you've only got one good copy of your data and you just have to hope that the clone completes so that you have a "backup" again. That's not a great position to be in for crucial data. By comparison, an actual backup strategy does not require destroying your existing backup in order to create a new one, so if the new backup fails, you'll still have previous backups available.
I suppose Macrium could still offer an option to perform a test read of all data that will be cloned over before actually writing anything to the destination. However, I haven't personally seen that implemented in any clone tools I've used, I don't see an option like that being very popular at all due to the amount of time it would add to clone jobs, and you would still be vulnerable to clone job failures and a resulting unusable destination.
|
By segnohh - 21 October 2019 11:22 PM
Hi jphughan,
Thank you for your reply. I agree that cloning a disk is a risky backup strategy because of possible failures during the cloning job, but I prefer this method because it gives me a disk that is an exact copy of the original, and if there is a problem with the original I only have to swap in the clone to keep working. Usually I keep two disks as 'backups' and clone to both of them, so if something goes wrong during one cloning job I still have the other copy, and a cloning job with the delta option takes around 25 minutes (for a disk of around 500 GB). On the other hand, a read error should not have to be a catastrophic issue: using Linux commands you do not have that problem in a cloning job, you still obtain the cloned disk and the bad sector or other issue is skipped, but you have to clone the whole disk and it takes a lot of time. By the way, the problem is the same if you run a full cloning job with Macrium Reflect.
Regards.
|
By dbminter - 21 October 2019 11:47 PM
Wait, so how does the clone operation work in Reflect? I've never used it before. Does it copy the source sector by sector and delete each sector after it's "copied" over? I don't get how it works given the descriptions here. Sounds to me like a better option is to image what you want to clone so it's on some kind of safely removable media like a USB HDD and then restore those images over to the target. That way, you've got untouched source material to fall back on. But, I'm probably missing something vital in how Reflect's clone operation works.
|
By segnohh - 22 October 2019 12:10 AM
Well, I don't know how the cloning process works internally in Reflect. I'm reporting an issue that I ran into a couple of times; I did not have similar problems using the cloning routines in Linux, for example.
|
By jphughan - 22 October 2019 1:53 AM
Nothing gets deleted from the source during a clone job. But if Reflect encounters an error reading from source during the clone operation, then the job fails -- and at that point your destination is only partially cloned, which means it's unusable. It's not like a file copy operation where any files that were copied over before the failure will be available on the destination. The clone job needs to finish in order for the destination to be in a valid state.
|
By dbminter - 22 October 2019 2:07 AM
Okay, I misread. I was getting the impression the source was unusable if a clone operation failed.
|