SSD suitable for image storage
PB2017
Junior Member
Group: Forum Members
Posts: 35, Visits: 228
Hi

I use MR on a PC with an SSD.

Currently I back up to a 3.5" 1TB drive.

I'm toying with getting a second SSD for the MR images in an attempt to speed up backup and restore.

I know this would work, but is it recommended? I've been told before that SSDs are best avoided for long-term storage.

I usually make a full backup once a month, then Incrementals Forever every day. Sometimes 4-5 a day.

Regards
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3K, Visits: 21K
Even setting aside the cost per GB, which may limit the number of backups you want to retain, SSDs suffer from write fatigue, so they're not ideal for write-intensive workloads. At the enterprise level there are models rated for write-intensive workloads (as opposed to models rated for mixed use or read-intensive workloads), but those are quite expensive. That said, modern SSDs can write 100TB+ worth of data before failing, so part of your risk factor will depend on the size of your backups; you should double that figure if you'll be using Incrementals Forever, though, because the consolidation operation generates write I/O as well.

An SSD will definitely make your backups faster, but it may fail sooner than you were expecting, so I would definitely consider either a disk rotation or periodic replication of your backup destination to some other device. In fairness, I would recommend that no matter what, especially with an Incrementals Forever strategy: that strategy means you only ever have one backup set, which in turn means that if there's ever a problem in that one Full, all of your backups become useless unless you have another copy elsewhere, or better yet a completely independent set, which avoids the risk of copying one corrupted backup to another location.
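To put rough numbers on that write load, here's a back-of-the-envelope sketch. The image sizes are hypothetical placeholders, not figures from anyone's actual setup; plug in your own:

```python
# Rough yearly write volume on a backup SSD under a schedule like the
# one described above: one Full per month, several Incrementals per day.
# All sizes below are hypothetical examples.

FULL_GB = 200        # assumed size of the monthly Full image
INC_GB = 2           # assumed average size of one Incremental
INCS_PER_DAY = 4     # "sometimes 4-5 a day"

fulls_per_year = 12 * FULL_GB
incs_per_year = 365 * INCS_PER_DAY * INC_GB

# Consolidation rewrites roughly as much data as it merges, so double
# the incremental write volume as a pessimistic bound.
total_gb_per_year = fulls_per_year + 2 * incs_per_year

print(f"Estimated writes: ~{total_gb_per_year / 1000:.1f} TB/year")
```

Even this pessimistic bound lands well under 10 TB/year for these sizes, which puts the endurance figures discussed below into perspective.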

Edited 23 March 2018 4:46 PM by jphughan
Froggie
Guru
Group: Forum Members
Posts: 826, Visits: 6.9K
jphughan - 23 March 2018 2:07 PM
That said, modern SSDs can write 100TB+ worth of data before failing, so part of your risk factor will depend on the size of your backups...

A little perspective on JP's statement above... having followed THIS EXPERIMENT since its inception, JP's number above will be more like 750TB for the worst commercial SSD available; most others are over a petabyte.
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3K, Visits: 21K
I don’t know if total failure is the ideal sizing metric when talking about a disk that stores your backups; the onset of irrecoverable errors would be cause for concern (read: replacement) in my book. But I grant that even that figure is higher than 100TB in those tests. I do wonder whether SSD longevity has been going up or down since those tests, however. Accurately reading the value of a cell gets more difficult as wear sets in, and the more bits you store in a cell, the more possible values it can have that need to be reliably differentiated from each other. The first SSDs were SLC with a single bit per cell, but the relentless push for higher capacity and lower cost per GB has given rise to MLC, then TLC, and I seem to remember reading some SSDs are now even storing four bits per cell.
PB2017
Junior Member
Group: Forum Members
Posts: 35, Visits: 228
That's an interesting-looking article, which I will read in detail another time.

My Samsung 850 EVO has done 13 TB of writes in 2.5 years, so that's about 5.2 TB per year. At that rate it'll probably last a good few years yet.
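That arithmetic can be checked quickly. Note the 150 TBW rated endurance below is an illustrative assumption, not a figure from this thread; check the actual TBW rating for your specific model and capacity:

```python
# Quick lifetime estimate: observed write rate vs. rated endurance.
# The 150 TBW endurance rating is an assumed example value.

written_tb = 13.0        # total host writes so far
years_in_service = 2.5
rated_tbw = 150.0        # hypothetical manufacturer endurance rating

tb_per_year = written_tb / years_in_service
years_remaining = (rated_tbw - written_tb) / tb_per_year

print(f"{tb_per_year:.1f} TB/year; ~{years_remaining:.0f} years to rated TBW")
```

At this write rate, endurance runs out decades from now, so as the posts above suggest, other failure modes are likely to matter first.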

I have documents backed up to the "cloud" and an offline drive and verify is on after all backups in MR. 

With 20+ years of computer experience I've learnt not to rely on anything. Dead HDDs, USB drives, graphics cards, RAM, etc. have all caused me many issues over the years.
So I'm not expecting any hardware to be flawless; my 1TB WD HDD backup drive died just out of warranty and WD still replaced the drive. Fortunately I recovered everything from other backups.

I'm happy reinstalling Windows if and when I need to, and looking at many of the disadvantages listed above, the same can be said for spinning HDDs (my SSD has lasted longer than the traditional HDD installed at the same time), so really the only consideration I have is the cost per GB.

I was under the impression that there was a major no-no with storing data long term on an SSD, but if you think about it, many Windows files are stored long term on an SSD when it's used as a boot device, so I'm not sure if there is a real disadvantage or I've really missed something.
jphughan
Most Valuable Professional
Group: Forum Members
Posts: 3K, Visits: 21K
If memory serves (and it may not here!), the danger with long-term storage on SSDs is long-term offline storage, because the stored charge in a cell can fade over time if it is not periodically reinforced by the drive having power. The magnetic charge on a spinning disk can also fade, but I believe that has a much longer shelf life. Similar to the write fatigue issue described above, though, this shelf life difference may be purely academic unless you're planning for very long-term archival storage.

At that point, you'd have to consider that the SSD's SATA/M.2 connector itself may be obsolete by the time you want to read it again (reading IDE, and especially pre-IDE, hard drives now in 2018 would be a bit challenging, for example), and the even greater reality that digital files themselves are not a great long-term storage solution, regardless of the physical media. File formats change, applications that read them get abandoned, and eventually PCs don't support the last version of that application. Even if you found a way to read your archival hard drive on a PC far off in the future, will Reflect still be around then, or will whatever PC you're using at that point be able to run a 2018 version of Reflect?

This is all why film studios, for example, still prefer saving content to actual physical film rather than storing only digital files on hard drives somewhere. Even if we'd had PCs back in the early days of film, can you imagine trying to read a file generated by a 1930s-era PC and software now in 2018 if the studio wanted to restore it? That's much less of an issue if you have an actual print. I remember some special feature mentioning that Pixar has already endured major headaches resurrecting some of their very early projects on their current systems when they wanted to release their Short Films Collection, retrieve older assets for new projects, etc., and their oldest projects are still relatively new on an archival scale!

Edited 23 March 2018 6:49 PM by jphughan
PB2017
Junior Member
Group: Forum Members
Posts: 35, Visits: 228
Well, I have to admit you raise some good points there, but if I consider all that, I think I'm looking too far ahead.

I have long term storage sorted and multiple backups for data.

TBH I use MR for system stability.
Make an image, install updates or software or make a change. Don't like it? Restore.

I'm not using MR as an archive system.

Really the issue now comes down to £ per GB.

Thanks for all the input.