Testing disk write speed on thumb drives - a good idea, or not?


Author
Message
Al Nonymous
New Member (12 reputation)
Group: Forum Members
Posts: 6, Visits: 15

Every time I run a File/Folder backup to my USB 3.0 thumb drive, Macrium Reflect first quickly creates "Volume Snapshots", but then it spends about five long minutes "Testing disk write speed".

I believe that the number of sector writes available to a thumb drive (and SSDs) is finite. That's one reason some cite for discouraging disk defragmentation on those drives.

If that's correct, is Reflect needlessly eating away at the finite lifespan of the drive as it "tests disk write speed"?

Can the user make Reflect skip this (life-shortening?) step for these types of drives, and just have it start backing up as soon as the "Volume Snapshots" are completed?


jphughan
Macrium Evangelist (6.6K reputation)
Group: Forum Members
Posts: 4.4K, Visits: 33K
The disk write performance test should only occur once per device, although if the drive letter changes, that might trigger another test.  But if you want to manage this yourself, go to the Backup tab (up in the menu bar, not the one in the upper-left corner of the interface) and select "Disk Write Performance".  You'll see any results from known devices, and if you want to choose a specific method for any other device, you can add it and choose your desired method, in which case Reflect will just use that rather than running a test to see which is faster.  One of the reasons the test exists is because some third-party anti-virus solutions have been found to interfere with one method or the other, i.e. causing very slow performance with one mechanism while perhaps allowing the other to proceed as normal.

To the larger question, it's true that flash media has a finite lifespan and that it's based primarily on writes, but that's unlikely to be encountered in the actual usage period of consumer products.  You might find this article interesting -- granted it was written about SSDs that are presumably designed to a much higher standard than your typical flash drive, but the takeaway is that you are unlikely to have a flash drive failure due to excessive writes.  And even if you did, it would be the result of all the writes from the backups themselves.  The amount of data written during the occasional write performance test is negligible by comparison.

Edited 17 November 2018 12:52 AM by jphughan
Al Nonymous
New Member (12 reputation)
Group: Forum Members
Posts: 6, Visits: 15
Good information! Many thanks for the clarifications.
Froggie
Master (1.7K reputation)
Group: Forum Members
Posts: 956, Visits: 8.3K
A slight disagreement with JP here.  Indeed, SSDs last a loooong time.  The reason is the internal GARBAGE COLLECTION management built into their controllers AND full support (TRIM) built into the operating systems that use them.  Neither of these features is built into USB flash versions (UFDs) of this technology, except possibly garbage collection in some of the newer SSD-based USB devices... and I wouldn't count on that unless specified.  TRIM, as far as I know, is NOT SUPPORTED over USB connections.  This may change (maybe with USB-C) over time, but it is currently not specced in the USB transfer protocol.  Once it is, newer devices will have to include controller functions to handle TRIM and eventually garbage collection.

As a result, the main problem with UFDs is not so much failure (their rate is probably higher than SSDs' due to lower-quality NAND), it's SPEED... which drops dramatically over time.  Without proper garbage collection, lots of old DATA is carried around within the device, eventually forcing it to use its READ/MODIFY/WRITE operations (very slow) to update DATA rather than the much quicker CLEAR/WRITE ops.

I have some UFDs that have slowed down so much I can't get more than 3-4 MB/s out of them (originally approx. 35 MB/s), which makes them, basically, 3-1/2" floppies :) .  Their READ speeds should remain fairly constant, but the WRITE speeds will drop dramatically over time if they're used for a lot of DATA transfers.
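For anyone curious whether their own UFD has degraded like this, a rough write-speed check can be scripted. This is a generic timing sketch, not Reflect's actual (undocumented) test method; the os.fsync() call matters, since without it you would mostly be timing the OS write cache rather than the device:

```python
import os
import time

def measure_write_speed(path: str, size_mb: int = 64,
                        chunk_mb: int = 4) -> float:
    """Write `size_mb` MB of random data to `path` and return MB/s.
    os.fsync() forces the data out to the device before we stop the
    clock, so the result reflects the drive, not RAM."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return size_mb / elapsed
```

Point `path` at a file on the drive under test (e.g. `E:\speedtest.bin`); results will vary run to run, so treat a single number as a rough indicator only.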

Edited 17 November 2018 1:51 PM by Froggie
jphughan
Macrium Evangelist (6.6K reputation)
Group: Forum Members
Posts: 4.4K, Visits: 33K
I don’t see that we’re disagreeing here, Froggie. The OP asked whether the write speed test was a problem from a wear perspective. Your post pointed out that flash drives are slow because they don’t always employ the techniques SSDs use to clear unused cells. Wear concerns vs. performance is a different topic rather than a disagreement, especially since I agree with almost everything you posted. :) The only things I’d point out are:

- If all else were equal, a flash memory device that did NOT employ garbage collection or TRIM would last LONGER, since the erasure cycle itself creates wear. But it would of course also be slower over that extended lifespan.

- USB-C wouldn’t affect the availability of TRIM because USB-C still uses regular USB protocols; it’s just a different physical connector that can optionally also support some other technologies like USB-PD and DisplayPort. However, TRIM might be available if the host and device support UASP, which can be used on regular USB-A connectors. I haven’t checked.

UPDATE: StarTech has a blog post about UASP and they specifically address TRIM support at the bottom, including a link to their products that support it, so it’s definitely possible: https://blog.startech.com/post/all-you-need-to-know-about-uasp/
Edited 17 November 2018 3:00 PM by jphughan
Froggie
Master (1.7K reputation)
Group: Forum Members
Posts: 956, Visits: 8.3K
jphughan - 17 November 2018 2:43 PM
However, TRIM might be available if the host and device support UASP, which can be used on regular USB-A connectors. I haven’t checked.

What you mention is considerable.  The host requires UASP to allow for the necessary SCSI function to leave the System (in this case, the SCSI UNMAP command... SCSI counterpart to TRIM), and either the bridge device (Dock) or the SSD/UFD's on-board controller needs to support that SCSI command translation (UFD with direct UASP/SCSI support or SSD with SATA controller's ability to translate UASP/SCSI to ATA/TRIM).

As you mention with StarTech (thanks for the link), options are slim at the moment... this whole high-speed external FLASH environment will be a bit slow to develop, methinks.

Edited 17 November 2018 3:25 PM by Froggie
dbminter
Expert (866 reputation)
Group: Forum Members
Posts: 652, Visits: 5.3K
One thing the OP might consider is to do what I do when I want to use flash drives for storing Reflect backups: I write the backups to HD (SSD, SATA mechanical, or USB HDD) first and then copy the image files over to the flash drive.  Of course, this makes the backup take longer, but given the OP's original concerns, if the backup fails for whatever reason, you won't have wasted any of the flash drive's finite writes on a backup that didn't succeed.  You may also want to manually run Verifies on the files copied over to the flash drive; that's a decent indicator of whether the files copied correctly.  Again, it increases the whole backup time, but it's what I do.  I don't mind the extra time to account for possible backup failures and to make sure the flash contents were copied properly.
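This copy-then-check workflow can be sketched with Python's standard library. filecmp.cmp with shallow=False re-reads both files and compares their content, not just size and timestamps; the file names here are placeholders:

```python
import filecmp
import shutil

def copy_and_verify(src: str, dst: str) -> bool:
    """Copy `src` to `dst` (preserving timestamps), then re-read both
    files and compare them byte for byte. shallow=False forces a real
    content comparison instead of a metadata-only check."""
    shutil.copy2(src, dst)
    return filecmp.cmp(src, dst, shallow=False)

# e.g. copy_and_verify(r"D:\Backups\image.mrimg", r"E:\image.mrimg")
```

Note this only confirms the copy matches the source on the flash drive right now; it says nothing about whether the backup itself is intact, which is what Reflect's own Verify checks.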

jphughan
Macrium Evangelist (6.6K reputation)
Group: Forum Members
Posts: 4.4K, Visits: 33K
If you want to verify an accurate file copy, running a Reflect verify on the new copy isn’t the best way. It is theoretically possible, albeit hugely unlikely, for a file copy error (or combination of errors) to result in a copy that doesn’t match the original but still verifies “internally”, e.g. by having both a data block and its checksum changed in the “right” way. The better way to check a copy is to compare hashes of the original and new copies, or better yet use a file copy application that can do this automatically as part of the copy operation itself. Hashing the entire file may also be faster than Reflect’s verification routine, although I haven’t confirmed this.
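A minimal sketch of the hash-comparison approach, using Python's standard library (file names are placeholders, and any strong hash would do in place of SHA-256):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so large backup images don't
    have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_match(original: str, copy: str) -> bool:
    """True if the two files hash identically. A match proves an
    accurate copy; it does NOT prove the original backup was itself
    intact, which is what Reflect's own verification checks."""
    return file_sha256(original) == file_sha256(copy)
```

Running both checks (a hash comparison after the copy, plus Reflect's Verify on the backup) covers both failure modes described above.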

The purpose of Reflect’s verification is to confirm that all data in the backup is readable and not corrupt, and it is superior to a hash comparison for establishing the latter, because although a hash mismatch would indicate that your new copy is bad, a MATCH could simply mean that you have made an accurate copy of a backup that was already corrupt.
Edited 21 November 2018 6:17 PM by jphughan
dbminter
Expert (866 reputation)
Group: Forum Members
Posts: 652, Visits: 5.3K
I suppose one could also open a Command Prompt and run COMP against the source and copied files.  That would take longer than a Verify, but it would be a better indicator of properly copied contents.  And, as you say, a hash comparison is better than a Verify.  I've only once had a copy that passed Verify fail to be readable, and it was weird: Verify passed under the Windows instance of Reflect, even when performed manually, but failed under the WinPE instance.  The backup had been copied to a BD-RE, and the burner was starting to die off, claiming it had written contents that weren't actually there when you reloaded the disc.


Do you have a recommendation for a program that does such copies while also creating and performing a hash comparison?

jphughan
Macrium Evangelist (6.6K reputation)
Group: Forum Members
Posts: 4.4K, Visits: 33K
I don’t have a recommendation from personal experience. I have hash calculator tools, folder comparison tools, and file and folder copy/replication tools, but not a file copy application with “built-in” verification; I’ve just never felt the need to do that on any sort of a regular basis. However, a quick Google search turns up several recommendations for a tool called XXcopy, an expanded version of Microsoft’s own (now deprecated?) Xcopy tool. It looks like the company behind XXcopy is no longer operating, but the application itself is still available for download: http://www.xxcopy.com/xcpydnld.htm