I've used a Seagate Momentus XT for some time. Throughput didn't change much (that was a hardware bottleneck on my laptop, ymmv), but access times in daily use went down noticeably, which was really nice. Are they worth it? Well, depends on how much you've got to spend. I'd rather go full SSD, but if you're low on cash or need the extra space, go for it.
If you value your data, use a traditional hard drive. If you want performance, use an SSD. Ultimately, you should use an SSD as your Windows drive and a traditional drive for storing important documents and other data. Hybrids cache your most-used files, so they'll bottleneck on new or infrequently used files. They're a complete waste of money.
They'll be as slow as a normal hard drive when accessing new files - but only for the first few times. They'll always be slow for infrequently used files, yes. Waste of money - depends. I got mine used, for not much more than a non-hybrid HDD of comparable size, and I wanted to upgrade that HDD anyway. I didn't feel like the small price hike was a waste.

edit: Data security might be a point. If the flash controller doesn't reliably recognize bad sectors, the drive will hand you garbage for the cached sectors. But you can still transplant the platters (or rather, pay someone to do so) into a non-hybrid drive and read everything back just fine. Not much of a difference to normal HDDs imho. (Or you could try to be clever and read back the seldom-used files first - which are uncached, therefore fine - which pushes the frequently-used ones out of the (bad) cache, so you can then read those from the platters too.)
Well, I'm using it for an emulation arcade machine which is going to stay as-is, so no files will be moved around once everything is on the SSHD. It's not a work computer, so the caching will help. I should have mentioned that earlier, but at least I was able to get helpful info for future reference.
I have a 500GB Momentus XT in my MacBook Pro, along with a 120GB Intel SSD. I bought the Momentus XT first, ended up being unsatisfied with it, and sprang for the SSD. The hybrid just wasn't noticeably faster.
With respect, I have to disagree with this statement. If you value SPACE, get an HDD; if you value your DATA, get an SSD. When an SSD dies, you will still have read access, just not write access. If a physical disc or read head wears down, your data becomes inaccessible. Even if the controller dies in an SSD, you could still dump the data off the flash chips. A common misconception about SSDs is that they wear out quickly, like they did two generations ago. Luckily that isn't the case anymore, and I've estimated about 7 years of drive life left for my 120GB SSD. They don't die instantly either; write speed slowly decreases over time, so you know when the end is near. Compared to HDDs, which just up and die on you, that's much more reliable behavior.
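For what it's worth, that 7-year figure is just back-of-the-envelope math from the drive's rated write endurance and how much I actually write per day. A minimal sketch of the calculation (the numbers below are illustrative placeholders, not my drive's real SMART data):

```python
# Rough SSD remaining-life estimate from rated endurance and observed write rate.
# All values are illustrative placeholders, not real SMART readings.
rated_endurance_tb = 70.0     # total terabytes-written the vendor rates the drive for (assumed)
written_so_far_tb = 20.0      # host writes accumulated so far (assumed)
writes_gb_per_day = 20.0      # observed average daily write volume (assumed)

remaining_tb = rated_endurance_tb - written_so_far_tb
remaining_days = remaining_tb * 1024 / writes_gb_per_day
print(f"~{remaining_days / 365:.1f} years of writes left at the current rate")  # ~7 years with these numbers
```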
Only 7 years? I have normal hard drives almost 15 years old that still work. I would have thought an SSD would last much longer since there are no moving parts.
Yeah, but there's that whole maximum write-cycle thing. I know they've improved the issue with various software techniques (TRIM, for example), but I don't know how much they've managed to improve it.
One of the newer 1TB drives can apparently last about 30 years even with 4GB or more written to it every hour, 24/7. Something like that, anyway. I just know that under average use, the newer drives' lifespans are measured in decades now.
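Roughly, that claim works out like this (taking the 4GB-per-hour figure at face value; actual rated endurance varies a lot between models):

```python
# How much data "4GB written every hour, 24/7, for 30 years" actually adds up to.
gb_per_hour = 4
years = 30
total_gb = gb_per_hour * 24 * 365 * years
print(f"{total_gb / 1e6:.2f} PB written over {years} years")  # ~1.05 PB
```

So the figure being thrown around is on the order of a petabyte of total writes, which is why average desktop use barely dents it.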
I'll just say that hybrids are good for one thing - selling a used laptop and making it sound more impressive than it actually is. Dual drive it. Or, if it's an option, dual drive it and add a caching mSATA SSD for the hard drive. That's my setup for my laptop and I love it. Looking at that write limit realistically, you'll have gotten tons of hours out of a good SSD well before the drive is kaput. I really hate how people get scared of it. It takes a massive amount of data written daily to hit that limit - far more than what a normal person ever writes.
I would suggest getting a smaller SSD for your OS and important programs and a large HDD for your files. This should provide a nice performance boost while keeping costs low.
As it happens, I was talking to one of the infrastructure guys at work about this a week or so ago - his experience might not map directly to desktops, since these are 24/7 enterprise deployments, but his observations were interesting. Most of the mechanical drives they are using are HGST Ultrastars (largely 2TB ones), and the SSDs are Intel enterprise drives - both considered highly reliable products. His comment was that overall the SSDs were about twice as reliable as the mechanical drives, but that they suffered from a far higher rate of surprise failure. With the mechanical drives, you got advance warning about 95% of the time - either because the error rate started increasing, or SMART tripped, or both. With the SSDs, they just abruptly stopped working at all - and most of the time wouldn't even identify.
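If you want to watch for that kind of advance warning on your own machines, smartmontools is the usual tool. A minimal sketch, assuming smartctl is installed and the drive you care about is at /dev/sda (adjust for your system):

```python
import subprocess

# Query the overall health verdict (-H) and the SMART attribute table (-A)
# via smartctl from the smartmontools package. Assumes smartctl is on PATH
# and that /dev/sda is the drive being monitored.
for flag in ("-H", "-A"):
    result = subprocess.run(["smartctl", flag, "/dev/sda"],
                            capture_output=True, text=True)
    print(result.stdout)
```

Rising reallocated-sector or pending-sector counts in that attribute table are the kind of early warning he's describing for mechanical drives.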
I worked in the same industry. The surprise failure is usually the controller dying; technically, the data is still on the NAND chips. NAND fails gracefully, controllers not so much. But they are improving all the time.
Good point. With an HDD you'd simply swap the controller board if it dies. Does that work for SSDs? How tightly integrated are those controllers into the whole drive? Are they usually SMD or BGA nowadays? I.e. is it possible (for normal people with good, but not "data recovery lab"-level, soldering skills & equipment) to replace a broken controller, or not? (Either way, while all this data recovery talk is interesting, it kinda misses the point tbh. EVERYONE: MAKE REGULAR BACKUPS! It's simple, and it makes all this a moot point.)
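On the backups point, even a dated rsync copy covers the basics; a minimal sketch, assuming rsync is available and with placeholder paths (point them at whatever actually matters, ideally on a different physical disk):

```python
import subprocess
from datetime import date

# Dead-simple dated backup using rsync in archive mode.
# Source and destination paths are placeholders.
src = "/home/me/documents/"
dest = f"/mnt/backup/documents-{date.today().isoformat()}/"
subprocess.run(["rsync", "-a", src, dest], check=True)
```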
I think the main reason they don't like the surprise failures is that they generally tried to schedule drive rebuilds during periods of low system load - which is obviously something you can't do if the drive just suddenly dies.

Most of the SSDs are being used on database servers - the data array is on mechanical drives (typically in RAID 60) and the log is on the SSDs (in RAID 10). The problem is that if a drive drops out of the log array while the system is heavily loaded, the extra I/O load from the rebuild slows everything down - which normally means the log starts to grow since transactions are not getting written out, and that slows everything down even more. They ended up tuning it so that they could reserve 25% of the I/O bandwidth for a possible rebuild without the log growth getting out of hand, but this obviously has a performance cost. Overall, there's still a significant performance win, but it's somewhat eroded by the need to reserve headroom for rebuilds.

He also said that so far, out of all the failures he has seen on the SSDs, not a single one looked like wear-out - either the drive just stopped responding (i.e. controller failure) or sections of it became inaccessible (i.e. NAND failure). They do have a 3-year replacement schedule, though, so perhaps they just never got to that part of the life curve.
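For anyone trying the same trick on Linux software RAID rather than a hardware controller, the rough equivalent of reserving rebuild bandwidth is capping the md resync rate; a minimal sketch, assuming md and root access (the limits are just example values, in KB/s per device):

```python
# Cap Linux md (software RAID) resync/rebuild speed so a rebuild can't eat all
# the I/O on a loaded box. Values are KB/s per device and purely illustrative;
# requires root. Hardware RAID controllers expose similar knobs via vendor tools.
with open("/proc/sys/dev/raid/speed_limit_max", "w") as f:
    f.write("50000\n")   # rebuild at most ~50 MB/s per device
with open("/proc/sys/dev/raid/speed_limit_min", "w") as f:
    f.write("10000\n")   # but always allow at least ~10 MB/s so it finishes
```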