Myths about SSDs that need to be dispelled. How long does an SSD drive last?

Is it safe to store files on an SSD?

Let's start with the background. SSDs came onto the scene when Intel introduced the new Nehalem processor architecture and announced that the bottleneck in new PCs was no longer the processor but the hard drive, whose performance had practically stopped progressing. At the 2008 IDF (Intel Developer Forum) in San Francisco, Intel showed its first solid-state drives and pointed out why conventional hard drives hold back system performance with the new Core i7 processor. Three years later, numerous tests of commercial SSDs have confirmed that solid-state drives really do unlock the potential of new processors, significantly increasing system performance.

But performance is far from the only indicator for a storage device. When it comes to your data, even the fastest drive in the world is worthless if you can't trust that it can store your information reliably.

This topic is all the more relevant now, with the massive transition to the 25 nm process. A finer process means cheaper NAND production, so the trend is natural, and the shrinking will not stop at 25 nm.

Over the past two years, Intel has twice moved to a thinner NAND memory process for SSD drives: from 34 nm to 25 nm and from 25 nm to 20 nm

Meanwhile, it is becoming ever harder for engineers to overcome the problems of memory produced at 25 nm. Even so, buyers can still expect better performance and reliability from the new SSDs than from the previous generation; the reduction in cell rewrite cycles that comes with the finer process has to be compensated somehow.

SSD type | Guaranteed rewrite cycles | Total TB written (JEDEC formula) | Endurance at 10 GB/day, WA = 1.75
25 nm, 80 GB | 3000 | 68.5 TB | 18.7 years
25 nm, 160 GB | 3000 | 137.1 TB | 37.5 years
34 nm, 80 GB | 5000 | 114.2 TB | 31.3 years
34 nm, 160 GB | 5000 | 228.5 TB | 62.6 years

For the previous generation of SSDs, built on 34nm NAND, the guaranteed number of rewrite cycles was 5000. In other words, a NAND cell can be written and erased 5000 times before it begins to lose its ability to store data. Assuming the average user writes at most 10 GB per day, it would take roughly 31 years for the drive to wear out, so you do not need to worry about how many write cycles your SSD can withstand.

For the new generation of SSDs with 25nm memory, the expected drive life is about 18 years. Of course, this greatly simplifies the real state of affairs: SSD-specific factors such as write amplification, data compression, and garbage collection affect the actual result. Still, it is clear there is no good reason to start counting down the hours to your SSD's demise the moment you buy it.
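For the curious, here is a minimal sketch of the arithmetic behind the table above. The exact JEDEC formula is not quoted in the article, so the overall write-overhead factor of 3.5 used here is simply inferred from the table's own numbers (its stated write amplification of 1.75 times an apparent additional 2x derating); treat it as an assumption, not a published constant.

```python
# Rough SSD endurance estimate reproducing the table above.
# The overhead factor is an assumption inferred from the table itself.

def endurance(capacity_gb, pe_cycles, daily_writes_gb=10, overhead=3.5):
    total_host_tb = capacity_gb * pe_cycles / overhead / 1000
    years = total_host_tb * 1000 / daily_writes_gb / 365
    return total_host_tb, years

for capacity, cycles in [(80, 3000), (160, 3000), (80, 5000), (160, 5000)]:
    tb, years = endurance(capacity, cycles)
    print(f"{capacity} GB, {cycles} cycles: ~{tb:.1f} TB, ~{years:.1f} years")
```

Small differences from the table (68.6 TB versus 68.5 TB, and so on) are just rounding.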

On the other hand, we know for certain that some SSDs have already failed; you can easily verify this by browsing forums or online store reviews. But the problem in those cases is not cell-resource exhaustion. As a rule, a firmware bug is what kills the drive. We know of cases where manufacturers strongly recommend updating a new drive's firmware, which improves reliability and sometimes significantly improves performance.

Another cause of SSD failure lies in the electronics: a capacitor or a memory chip may fail, taking the drive with it. Of course, we expect fewer such problems than with conventional HDDs, whose moving parts inevitably wear out over time.

But is it true that the absence of moving parts makes a solid-state drive more reliable than a platter drive? This question worries a growing number of computer enthusiasts and IT specialists, and it is what prompted us to analyze the real reliability of SSDs and separate fact from fiction.

What do we know about storage devices?

SSDs are a relatively new technology (at least compared to hard drives, which are approaching 60 years old). Thus, we have to compare a new type of drive with time-tested technology.

But what do we really know about the reliability of conventional hard drives? Two important academic studies shed light on this question.

In 2007, Google released a reliability study of 100,000 consumer-grade PATA and SATA drives used in Google data centers.

Around the same time, Dr. Bianca Schroeder, together with expert Dr. Garth Gibson, calculated the replacement rate of more than 100,000 drives that were used in one of the largest national laboratories in the United States.

The main difference between the two studies is that the second one also covered drives with SCSI and Fibre Channel interfaces, not just PATA and SATA.

For those who want a detailed look at the results, we recommend reading at least the second study: in 2007 this paper was recognized as the best at the File and Storage Technologies (FAST '07) conference in the USA. If reading such sources is not in your plans, we present the key points that bear directly on our question.

Mean Time to Failure (MTTF)

When it comes to measuring drive reliability, two metrics come to mind: MTBF (Mean Time Between Failures) and MTTF (Mean Time To Failure). The key difference is that MTTF assumes the system cannot be restored after a failure.

Here's what Wikipedia writes about this:

In English, the terms MTBF (Mean Time Between Failures) and MTTF (Mean Time To Failure) are used. It should be noted, however, that published MTBF/MTTF values are often based on accelerated testing over a limited time, which mostly reveals the share of manufacturing defects. In that case the declared MTBF says less about reliability itself, let alone durability, than about the percentage of rejected units. For example, an MTBF on the order of 1 million hours for a hard drive obviously does not mean 114 years of trouble-free operation: not only because an experiment of that duration could not be run, but also because the manufacturer itself assigns a service life of no more than 5-10 years and a warranty of 1-5 years.

Let's take as an example the Seagate Barracuda 7200.7 drive, which has a stated MTBF of 600,000 hours.

In a large population of such drives, half will fail within the first 600,000 hours of operation. Since failures in a large population occur at a relatively steady rate, a population of 600,000 drives should produce roughly one failure every hour. From this MTBF value you can calculate the annualized failure rate (AFR), which comes to 1.44%.
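As a sanity check, here is a small sketch of how a 600,000-hour MTBF translates into that AFR figure, assuming a constant failure rate (the standard exponential model); the small gap between the computed ~1.45% and the quoted 1.44% comes down to rounding conventions.

```python
import math

mtbf_hours = 600_000          # Seagate Barracuda 7200.7 datasheet value
hours_per_year = 24 * 365

# Under a constant failure rate, AFR = 1 - exp(-hours_per_year / MTBF).
afr = 1 - math.exp(-hours_per_year / mtbf_hours)
print(f"AFR = {afr:.2%}")     # ~1.45%

# In a population of 600,000 such drives, the expected rate works out
# to population / MTBF = one failure per hour of operation.
print(600_000 / mtbf_hours, "expected failure(s) per hour")
```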

But the research by Google and Dr. Bianca Schroeder revealed completely different numbers. The point is that the number of failed drives does not always match the number of drives that had to be replaced. That is why Schroeder measured not the annualized failure rate (AFR) but the annualized replacement rate (ARR), based on the actual number of drives replaced according to service logs:

While datasheet AFR values range from 0.58% to 0.88%, observed ARRs range from 0.5% to 13.5%. That is, depending on drive configuration and type, the observed ARR can be up to 15 times higher than the datasheet AFR.

Hard drive manufacturers define failure differently than we do, so it is not surprising that their figures do not match the actual reliability of the drives. The MTBF rating is usually derived from accelerated testing, drive-return data, or testing of selected drives, and return data is dubious information. As Google puts it: "We've encountered... situations where drive testing green-lit drives that inevitably failed in practice."

HDD failure statistics over time

Most users assume the HDD failure-rate curve is shaped like a bathtub. At first, many drives are expected to fail from so-called infant mortality: factory defects of various kinds and the break-in process itself. After this initial period, the failure rate should be minimal. Finally, toward the end of the expected service life, the curve inevitably creeps upward, since drive components have a finite resource. This seemingly logical reasoning is reflected in the following graph.

But this graph does not match reality. The research by Google and Dr. Bianca Schroeder showed that HDD failure rates rise steadily with drive age.

Reliability of Enterprise-class disks

Comparing the two studies, one can surmise that the actual reliability of a Cheetah drive with a datasheet MTBF of 1,000,000 hours is much closer to an MTBF of 300,000 hours. This means consumer and enterprise-class drives have roughly the same annual failure rate, especially when comparing drives of similar capacity. According to NetApp technical planning director Val Bercovici, "...the way disk arrays handle hard drive failures continues to feed the consumer perception that more expensive drives should be more reliable. One of the industry's dirty secrets is that most enterprise drives are built from the same components as consumer-class drives. However, their external interfaces (FC, SCSI, SAS and SATA) and, more importantly, specific firmware features have the greatest impact on how consumer and enterprise-class drives behave in real-world conditions."

Data Security and RAID

Schroeder's research covers Enterprise-class drives used in large RAID arrays of one of the largest high-performance computing laboratories. Typically, we expect that storing data in RAID configurations provides a higher level of security, but the Schroeder report found something surprising.

The distribution of time between disk replacements exhibits decreasing hazard rates: the expected remaining time until the next disk replacement grows with the time since the last replacement.

Thus, the failure of one drive in an array increases the likelihood that another will fail. And the more time has passed since the last replacement, the longer the expected wait until the next one. This, of course, has implications for RAID reconstruction: after the first failure, another drive is four times more likely to fail within the same hour, while within 10 hours the probability of a further failure is only doubled.

Temperature


Another unexpected conclusion follows from the Google report. The researchers collected temperature readings via SMART (Self-Monitoring, Analysis and Reporting Technology), which most hard drives support, and found that higher drive temperatures showed no correlation with higher failure rates. Temperature appears to affect the reliability of older drives, but even there the effect is not significant.

Is SMART technology really smart?

SMART promises to be "smart", but does this drive-health monitoring technology really do its job? The short answer is no. SMART was designed to report disk errors early enough for you to back up your data. However, according to the Google report, more than a third of the failed drives never raised a SMART alert.

This is not particularly surprising: many experts have suspected as much for years. SMART is optimized for detecting mechanical problems, while much of a hard drive's functionality is provided by its electronics. That is why improper HDD operation and unexpected events, such as sudden power outages, remain invisible to SMART until data-integrity errors appear. If you rely on SMART to warn you of impending drive failure, you still need an additional layer of protection to keep your data safe.

Now let's see how SSDs stack up against hard drives.

Briefly about SSD reliability

Unfortunately, hard drive manufacturers do not publish return statistics, and the same goes for SSD manufacturers. However, in December 2010 the website Hardware.fr published a report on drive return rates obtained from its parent company LDLC, one of the leaders in French computer retail. The site explained how the figures were calculated:

The return rates cover drives sold between October 1, 2009 and April 1, 2010 that were returned before October 2010; that is, the time in service ranged from six months to a year. Statistics per manufacturer are based on a minimum sample of 500 units, and per model on a minimum sample of 100 units.

As you can see, this is not a failure rate but a return rate. Perhaps the language barrier explains how English-language IT publications interpreted these figures: sites like Mac Observer and ZDNet failed to properly label the data as return rates, likely relying on Google's automatic translation.

1 TB hard drives
Drive model | Return rate
Hitachi Deskstar 7K1000.B | 5.76%
Hitachi Deskstar 7K1000.C | 5.20%
Seagate Barracuda 7200.11 | 3.68%
Samsung SpinPoint F1 | 3.37%
Seagate Barracuda 7200.12 | 2.51%
WD Caviar Green WD10EARS | 2.37%
Seagate Barracuda LP | 2.10%
Samsung SpinPoint F3 | 1.57%
WD Caviar Green WD10EADS | 1.55%
WD Caviar Black WD1001FALS | 1.35%
Maxtor DiamondMax 23 | 1.24%

2 TB hard drives
WD Caviar Black WD2001FASS | 9.71%
Hitachi Deskstar 7K2000 | 6.87%
WD Caviar Green WD20EARS | 4.83%
Seagate Barracuda LP | 4.35%
Samsung EcoGreen F3 | 4.17%
WD Caviar Green WD20EADS | 2.90%

SSD drives
Intel | 0.59%
Corsair | 2.17%
Crucial | 2.25%
Kingston | 2.39%
OCZ | 2.93%

1 TB hard drives
Drive model | Return rate
Samsung SpinPoint F1 | 5.20%
WD Caviar Green WD10EADS | 4.80%
Hitachi Deskstar 7K1000.C | 4.40%
Seagate Barracuda LP | 4.10%
WD Caviar RE3 WD1002FBYS | 2.90%
Seagate Barracuda 7200.12 | 2.20%
WD Caviar Black WD1002FAEX | 1.50%
Samsung SpinPoint F3 | 1.40%
WD Caviar Black WD1001FALS | 1.30%
WD Caviar Blue WD10EALS | 1.30%
WD Caviar Green WD10EARS | 1.20%

2 TB hard drives
Hitachi Deskstar 7K2000 | 5.70%
WD Caviar Green WD20EADS | 3.70%
Seagate Barracuda LP | 3.70%
WD Caviar Black WD2001FALS | 3.00%
WD Caviar Green WD20EARS | 2.60%
WD Caviar RE4-GP WD2002FYPS | 1.60%
Samsung EcoGreen F3 | 1.40%

SSD drives
Intel | 0.30%
Kingston | 1.20%
Crucial | 1.90%
Corsair | 2.70%
OCZ | 3.50%

A failure means the drive is no longer functional, but a return can have many causes. That is a problem, because we have no information about why drives were returned: they could have been dead on arrival, failed in service, or simply suffered some hardware incompatibility that kept the buyer from using them.

Sales between October 1, 2009 and April 1, 2010; returns through October 1, 2010
Top 3 SSDs by returns | Return rate | Top 3 HDDs by returns | Return rate
OCZ Vertex 2 90 GB | 2.80% | | 8.62%
OCZ Agility 2 120 GB | 2.66% | Samsung SpinPoint F1 1 TB | 4.48%
OCZ Agility 2 90 GB | 1.83% | Hitachi Deskstar 7K2000 | 3.41%

Sales between April 1, 2010 and October 1, 2010; returns through April 1, 2011
OCZ Agility 2 120 GB | 6.70% | Seagate Barracuda 7200.11 160 GB | 16.00%
OCZ Agility 2 60 GB | 3.70% | Hitachi Deskstar 7K2000 2 TB | 4.20%
OCZ Agility 2 40 GB | 3.60% | WD Caviar Black WD2001FASS | 4.00%

This information only multiplies the questions. If the bulk of sales went through an online store, poor packaging or shipping damage could significantly skew the statistics. Moreover, we have no way of knowing how customers used these drives. The large swings in return rates only underline the problem: for example, the return rate for the Seagate Barracuda LP rose from 2.1% to 4.1%, while that of the Western Digital Caviar Green WD10EARS fell from 2.4% to 1.2%.

Either way, this data tells us little about reliability. Why mention it at all, then? The only conclusion is that in France most buyers were more than satisfied with their Intel SSDs and did not return them, unlike drives from other brands. Customer satisfaction is an interesting topic, but far less interesting than actual failure rates. So let's continue our analysis.

Data center reviews

Cost per gigabyte remains a barrier that keeps even large organizations from deploying thousands of SSDs at once. But even though we have no access to full-scale arrays of solid-state drives, we can still examine SSD reliability in real-world conditions through the experience of smaller organizations. We reached out to friends working in IT and received quite interesting feedback from several data centers.

NoSupportLinuxHosting.com: fewer than 100 SSDs


Mirroring a boot partition based on two Intel X25-V SSDs

No Support Linux Hosting does not disclose the exact number of drives installed, but the company says it uses a "considerable number" of SSDs. We know they run fewer than a hundred solid-state drives, deployed as follows:

  • Intel X25-V 40 GB drives serve as mirrored boot drives for thin servers and ZFS storage servers;
  • Intel X25-M 160 GB drives serve as L2ARC cache in ZFS servers;
  • Intel X25-E 32 GB drives serve as mirrored ZIL devices in ZFS servers.

All of these drives have been in use for at least a year, and some recently turned two. With that in mind, it is worth noting that the company has not seen a single SSD failure.

When we asked what the benefits of using solid state drives in servers are, we received the following answer:

In combination with ZFS and hybrid storage systems, the use of SSD drives allows for significant performance gains compared to traditional magnetic platter drives. We still use HDDs as our primary storage, so we can maintain their cost advantage while reaping the speed advantage of SSDs. Sooner or later, we plan to completely migrate our SAN servers to SSD drives. But throughout 2011 we will stick with a hybrid storage system using ZFS.

InterServer.net

InterServer uses SSDs only in database servers. Specifically, its Xeon-based servers use Intel X25-E drives (SSDSA2SH032G1GN) to get the most out of the drives' high throughput. What kind of performance are we talking about? InterServer reports 4,514 queries per second on a MySQL server; an older Xeon server with IDE hard drives manages 200-300 MySQL queries per second. InterServer has been using solid-state drives since 2009, and in that time there has not been a single drive failure.

Here is what InterServer told us about its experience with SSDs:

Intel SSDs are night and day compared with some other drives in terms of reliability. For example, SuperTalent SSDs have a very high failure rate, including the FTM32GL25H, FTM32G225H and FTM32GX25H models. We estimate that about two-thirds of those drives have failed since entering service. Moreover, after a failure the information on them was practically impossible to recover: the drive simply disappeared from the system and could no longer be read. Hard drives "die" more gracefully, and in most cases the information on them is easy to recover. But we cannot compare them with Intel SSDs, since we have not yet seen a single one fail.

Steadfast Networks: over 100 SSDs

Steadfast Networks uses about 150 Intel SSDs, making it a somewhat larger SSD user than the previous two companies. It runs models from the X25-E (32 GB and 64 GB) and X25-M (80 GB and 160 GB) lines. Intel X25-V 40 GB drives are present in smaller numbers, alongside solid-state drives from other brands installed by the company's customers, such as OCZ Vertex 2, SuperTalent, and MTron Pro. Regardless of brand, all of these SSDs are used only in database servers or as cache.


Steadfast Networks - almost 150 SSDs in operation

In two years of using SSDs, Steadfast Networks has had only two drive failures requiring replacement, and in both cases data had to be recovered from the SSD. Whether data can be recovered from a failed solid-state drive depends on the interplay between the controller and the firmware. The scenario InterServer described with the SuperTalent drives is the worst case, where the data could not be recovered at all; that case is not the general rule for SSDs.

With a larger sample, we finally found SSD failures, though their share is still quite low compared to magnetic-platter drives. Steadfast Networks president Karl Zimmerman believes even this understates the benefits of SSDs:

We simply get noticeably better I/O performance [with an SSD] at a lower cost than we could with conventional hard drives. We have many customers who need more I/O performance than four 15,000 RPM SAS drives in RAID 10 can deliver, not to mention that such an upgrade itself means moving to a large-chassis server supporting more than four drives, a bigger RAID card, and so on. Other configurations require more than sixteen 15,000 RPM drives to reach the required I/O performance. Switching to one SSD (or a pair in a RAID configuration) greatly simplifies the server configuration and generally makes it significantly cheaper. Suffice it to say that one SSD is usually enough to replace at least four hard drives, and the AFR for four HDDs is about 20%, versus 1.6% for a single SSD.

Softlayer: about 5000 SSDs!



The people at Softlayer are longtime friends of ours, and they have also built the world's largest hosting company, so they know a thing or two about data storage. With nearly 5,000 SSDs in service, they provided an impressive amount of data to analyze. Here is the report Softlayer shared.

Storage device | Drives in service | AFR | Years in service
Intel 64 GB X25-E (SLC) | 3,586 | 2.19% | 2
Intel 32 GB X25-E (SLC) | 1,340 | 1.28% | 2
Intel 160 GB X25-M (MLC) | 11 | 0% | less than 1
Hard drives | 117,989 | see the Schroeder report |

Softlayer's experience with SAS and SATA drive failure rates matches the Google report discussed at the beginning of this article. Simply put, hard drive failure rates grow with drive age, and in practice the results are very close to what the Google and Schroeder studies found: an AFR of 0.5-1% in the first year of service, gradually rising to 5-7% by the fifth.

The hard drive failure rates are no surprise; what stands out is that the solid-state failure rates turned out to be quite close to the HDD figures. Of course, these SSDs have only been in service for two years; we will have to wait until drives reach three or four years of age to learn whether the rising failure-rate trend characteristic of magnetic drives also applies to SSDs.

Softlayer uses almost exclusively SLC-based SSDs to avoid cell wear from repeated writes, and we know from the company's experience that none of the drives failed due to cell wear. But many of the failed SSDs died without a proper SMART warning, which is exactly what we have heard repeatedly from data center staff. As InterServer noted, hard drives tend to fail more gracefully; SSDs often die suddenly, whatever the cause, as many end users around the world have observed. Softlayer's experience is more varied than InterServer's: data on some drives was recoverable, on others not. None of Softlayer's 11 Intel X25-M drives failed, but that sample is far too small to draw conclusions from, and those drives have been in service for less than a year.

Is storage reliability that important?

Although SLC solid-state drives make up only a fraction of the SSD market, we received far more information on this type than on models using cheaper MLC memory. Even allowing for the fact that our SSD sample is about 1/20 the number of hard drives in the earlier studies, the available information suggests that SLC-based solid-state drives cannot be called more reliable than hard drives with SAS and SATA interfaces.

If you are a consumer, this fact allows you to draw important conclusions. SSD manufacturers are trying to focus on two main advantages of this technology: better performance and reliability. However, if storing data on an SSD is no more secure than storing data on a regular hard drive, then performance becomes the only real reason to purchase an SSD.

We're not saying here that SSD performance isn't important (or impressive). However, SSD technology today still occupies a narrow niche. If you pit SSDs against HDDs on speed, you find an interesting fact: a budget-class SSD outperforms an HDD by about 85%, while a high-end one provides an 88% advantage, a surprisingly small gap between the cheapest and most expensive models.

This rather subtle difference explains why companies like Intel emphasize SSD reliability. At the recent launch of the new SSD 320 line, Intel again played up this theme, citing drive-return data from the Hardware.fr website as proof of its products' reliability. The excellent reputation of Intel SSDs undoubtedly explains why we have so much information on this brand. But the Hardware.fr data Intel cited does not appear to reflect the real state of affairs.

SSD performance will only increase, while the leading manufacturers drive prices down. That means manufacturers will have to look for other ways to differentiate their products.

As long as new SSDs, even high-end ones, keep revealing obvious firmware bugs and other shortcomings, consumers who care first and foremost about reliable data storage will see the technology as not mature enough. We therefore believe that reliability should become the main goal of SSD evolution today.

A few months ago Intel gave consumers a serious confidence boost by extending the warranty on the new SSD 320 line from three years to five. Competing mainstream SSDs based on first- and second-generation SandForce controllers, as well as on Marvell's SATA 6 Gb/s controller, still ship with a three-year warranty; enterprise-class drives, meanwhile, generally carry five years. Clearly this pushes vendors to ship systems with more reliable drives in order to keep warranty costs down over those three or five years. But it is hard to turn a blind eye to the "childhood diseases" of SSD technology, such as the need for firmware updates, which by and large also affect solid-state drive performance.

Explanations on the issue of reliability

Hard drives and NAND-based storage devices fail for different reasons, owing to their different architectures and designs. When we think about hard drive reliability, what comes to mind is that they are built around mechanical parts, some of which move while the drive operates. And although hard drives are engineered to very tight tolerances, every part has a finite service life.

We also know that SSD drives are free from such problems. Their "solid state" nature essentially eliminates the risk of damage to the read head or spindle failure.

But data storage on an SSD is inherently virtualized: unlike on a hard drive, a logical block address (LBA) cannot be pinned to a fixed physical location. This brings other factors into play that determine the drive's reliability. Firmware is the most significant of them; we see its influence whenever we hear about SSD problems.
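To make the "virtualization" point concrete, here is a toy sketch of the idea behind a flash translation layer. The class and its structure are hypothetical, not any vendor's actual firmware: the point is only that a logical block address is never tied to a fixed physical page, so the firmware's mapping logic sits between your data and the flash.

```python
# Hypothetical toy flash translation layer (FTL). Each write of a
# logical block address (LBA) goes to a fresh physical page; the old
# page is merely marked invalid until garbage collection erases it.

class ToyFTL:
    def __init__(self, num_pages):
        self.mapping = {}            # LBA -> physical page number
        self.free_pages = list(range(num_pages))
        self.invalid = set()         # stale pages awaiting erasure

    def write(self, lba):
        old = self.mapping.get(lba)
        if old is not None:
            self.invalid.add(old)    # the previous copy becomes garbage
        self.mapping[lba] = self.free_pages.pop(0)

ftl = ToyFTL(num_pages=8)
ftl.write(lba=0)
ftl.write(lba=0)                     # same LBA, new physical page
print(ftl.mapping, ftl.invalid)      # {0: 1} {0}
```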

Over the past three years, every bug in Intel SSDs has been fixed through firmware updates. Crucial's power-management problems with the m4 were resolved with new firmware. And we saw SandForce's best-known partner, OCZ, respond to numerous consumer complaints by releasing several firmware revisions at once. In fact, the SandForce case is the most telling: since SSD makers may use custom firmware to differentiate their models, SandForce-based drives from different manufacturers can carry different bugs specific to a particular firmware. That, undoubtedly, only complicates the task of making solid-state drives more reliable.

Leaving SSD specifics aside, we now need to assess the reliability of drives from different manufacturers. The problem is that every vendor, reseller, and consumer measures this metric slightly differently, which makes objective comparison nearly impossible.

In particular, we were very impressed by Intel's reliability-focused SSD presentation at IDF 2011. But in a discussion with ZT Systems, whose data Intel had cited, we found out that the quoted 0.26% AFR is not broken down by drive age and covers only "confirmed" failures. If you are an IT manager, the frequency of "unconfirmed" failures matters to you as well: situations where you send a faulty drive back to the seller and are told the drive is fine. That does not mean the drive is problem-free, since the cause could be a specific configuration or other environmental factors. There are many real-world examples of this kind.

"Unconfirmed" failures tend to occur two to three times more often than "confirmed" ones. Indeed, ZT Systems quotes a different figure for "unconfirmed" failures: 0.43% across 155,000 Intel X25-M drives. But again, this data is not broken down by drive age, since the drives are counted in groups. According to ZT Systems CTO Casey Cerretani, the final value is still being calculated, but we can roughly speak of an AFR of 0.7% in the first year of operation. Of course, this number still says nothing about long-term reliability, which is one of the main difficulties in assessing SSD reliability against HDDs.

The main takeaway is that we now see how strongly the choice of reliability-assessment method shapes the final result. Beyond that, only time will tell whether SSD reliability really exceeds that of HDDs. For now, no unambiguous conclusion can be drawn, since much of the source data is in doubt.

In conclusion

Our data center survey covers only the failure rates of Intel SSDs, since that manufacturer's drives currently enjoy the most trust among large enterprises. Given the problems with measuring SSD reliability, we deliberately did not set out to find the most reliable manufacturer; still, Intel's marketing department seems to earn its salary for a reason.

The Google study notes the following: "It is known that the failure rate largely depends on the model, manufacturer and age of the drive. Our data does not contradict this fact. But the majority of failures observed over time are related to the age of the drive."

The lessons we learned from data centers apply to all SSDs. The manager of one company told us he thinks the OCZ Vertex 2 is priced wonderfully, but its reliability is terrible: late last year his company was bringing up a new system and bought about 200 Vertex 2 drives for it, 20 of which were dead on arrival. And he is not the first person to say something like this.

What does this mean for SSDs in practice?

Let's look at everything presented here from a rational perspective. Here is what we learned about hard drive reliability from the Google and Schroeder studies:

  1. MTBF says nothing about reliability;
  2. The annualized failure rate (AFR) is higher than manufacturers claim;
  3. Drives do not tend to fail in the first year of operation; the failure rate rises steadily with drive age;
  4. SMART is not a reliable predictor of imminent drive failure;
  5. The failure rates of consumer and enterprise-class drives are very close;
  6. The failure of one drive in an array increases the risk that other drives will fail;
  7. Temperature has almost no effect on drive reliability.

Thanks to Softlayer and its fleet of nearly 5,000 SSDs, we know that the first four statements also apply to SSDs. As both HDD studies showed, the controller, firmware and interface (SAS vs. SATA) significantly affect reliability; for SSDs, the controller and firmware are also the main factors, and their role is even greater. If it is true that cell wear from repeated rewrites plays no role in SSD failure statistics, and that the MLC memory used in consumer drives is comparable in quality to SLC, the conclusion suggests itself that enterprise-class SSDs are, on the whole, no more reliable than consumer ones.

Fewer disks - higher reliability

Of course, enterprise-class storage systems need performance as well as reliability. To reach high I/O rates, IT professionals build RAID arrays of 15,000 RPM hard drives, and an upgrade for more I/O operations often means buying a new server with a more powerful RAID card and room for more drives. Given the excellent I/O characteristics of solid-state drives, using them would allow a much more modest server configuration, not to mention the energy savings and lower temperatures.

There is another interesting point here.

For a large array, individual drive failures will be more frequent. According to Schroeder's research, after one drive in an array fails, the likelihood of further failures rises. On top of that, the probability that at least one disk in the array fails is significantly higher to begin with, simply because of the math.
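A quick sketch of that mathematical factor, with illustrative AFR values assumed for the sake of the example. Note that it deliberately assumes independent failures; Schroeder's data says failures in an array are correlated, so the real risk is even higher than these numbers suggest.

```python
# Probability that at least one of N drives fails within a year,
# assuming independent failures and illustrative AFR values.

def p_any_failure(afr, drives):
    return 1 - (1 - afr) ** drives

print(f"{p_any_failure(0.03, 1):.1%}")    # one HDD at 3% AFR:   3.0%
print(f"{p_any_failure(0.03, 16):.1%}")   # 16-drive HDD array: ~38.6%
print(f"{p_any_failure(0.015, 4):.1%}")   # four SSDs at 1.5%:   ~5.9%
```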

We are leaving aside the question of data safety, which depends on the RAID level and other factors. Clearly, from a data-safety standpoint one SSD will not replace two mirrored HDDs, even though its probability of failure is lower than that of either disk in the mirror. But for a large RAID system, it is quite obviously more reliable to run a configuration of four SSDs than a system of 16 HDDs of comparable speed.

The very fact of using an SSD does not eliminate the need for data redundancy for RAID or backup. But instead of creating cumbersome RAID configurations on HDDs, you can limit yourself to a much simpler solution based on solid-state drives. As Robin Harris writes on StorageMojo: "Forget RAID, just copy your data three times".

Redundant storage on SSD does not result in high costs. If you work in a medium to large business, you only need to copy information from a high-performance SSD drive to an HDD, which is used for backup.

The idea of getting more performance while spending less money is not new. SSDs really do offer extremely high I/O performance, high reliability, and data redundancy at a fraction of the cost of a bulky RAID configuration. At the same time, an HDD array can beat its SSD counterpart on sheer capacity. The price per gigabyte of solid-state drives is still too high today, so data placement on SSDs should be approached wisely: storing everything on them is unlikely to be an option.

Roughly the same goes for desktops

All of the above applies to servers; let's leave the decision of whether to switch to SSDs to data center staff.

When it comes to desktop systems, we have no reason to assume SSDs are more reliable than hard drives. One way or another, the recent SSD recalls and firmware bugs have made it clear that the limited number of NAND rewrite cycles is currently far from the technology's main weakness.

After all, any drive is an electronic device, whether or not it has moving parts. The absence of such parts in solid-state drives does not by itself prove their reliability.

We put the question to specialists at CMRR (Center for Magnetic Recording Research), a research center with comprehensive knowledge of magnetic storage systems.

Dr. Gordon Hughes, one of the main developers of SMART and Secure Erase technologies, notes that both HDDs and SSDs are pushing the boundaries of their respective technologies in their evolution. And when this happens, the goal is not to create the most reliable drives in the world.

As NAND memory researcher Dr. Steve Swanson notes: "It's not like manufacturers make their drives as reliable as they can make them. They make drives as reliable as it makes sense financially." The market determines the cost of drive components, and it cannot exceed a certain value.

For example, NAND memory manufacturers continue to produce 50 nm chips, which have a higher write cycle life than 34 nm and 25 nm chips. But the cost of $7-8 per gigabyte will not allow the use of such modules in drives aimed at the mass market.

Perhaps the biggest irritation is that every vendor sells hard drives and SSDs without providing objective reliability data, although they all clearly know the true state of things: they sell millions of devices a year (11 million SSDs in 2009, according to IDC) and record every return.

Undoubtedly, failure rates depend on many factors, some beyond the manufacturer's control (shipping quality, how the drive is used). But even under favorable circumstances, HDDs reach a 3% AFR by the fifth year of operation, which is quite comparable to the corresponding figure for SSDs. Small wonder that CMRR's experts say SSDs today offer no better reliability than hard drives.

SSD reliability is a sensitive topic, and we've spent a lot of time talking to vendors and retailers to conduct our own research into mass-market SSDs. And the only conclusion that can be drawn right now is that any information from the SSD manufacturer must be treated with a certain degree of skepticism.

It is worth noting that Intel SSDs currently enjoy the highest level of consumer trust, and the information we received from data centers invariably treats this brand's SLC drives as the "gold standard" for SSDs. Yet according to Dr. Hughes, there is no reason to believe Intel products are any more reliable than the best HDD models. We have no way to study the failure rates of SSDs that have been in service for more than two years, so these statistics may well shift in either direction.

Should you refrain from buying an SSD now? If you protect your data by regularly backing up your files, then there is no reason to avoid using SSDs. For example, we use SSDs on all of our test platforms and most workstations.

The purpose of this review was to determine whether SSDs are really so reliable that backing up the information stored on them can be forgotten as a relic of the past. Now we know the answer to this question.

The reliability of hard drives has been well studied in massive studies and this is not surprising since this type of drive has been in use for a very long time. Over time, we will undoubtedly learn much more about SSD reliability.

Probably the most popular question our technical support gets is: "Why is my disk health 80%?" Or, in other words: how do we calculate disk health?
The second most popular is where the disk lifetime estimate comes from: "Why does it say the lifetime is 'not yet defined'?" Here you will find answers to all of these questions.

Brief and simple.

1. We do not calculate the disk health percentage ourselves. It is calculated and reported by the SSD itself; in other words, it is the drive manufacturer's own data.

2. The expected service life is calculated from the dynamics of the drive's declining health, which in turn depends indirectly on write activity. If it is not defined yet, not much data has been written so far; just wait, usually a week at most after the first launch. (Why indirectly is explained in the details below.)

More details, for those who want to understand, and for technical specialists.

Disk health.

The S.M.A.R.T. self-diagnosis system has been known since the days of hard drives (HDDs) and is built into all modern drives. It constantly monitors various parameters of the drive's technical condition and reports them as relative values. As soon as a parameter falls below a critical level, the drive is considered unreliable and the manufacturer recommends replacing it. In practice, however, such a drive may keep working normally, and manufacturers themselves say SMART is an advisory service, not an absolutely accurate forecaster.

Unlike hard drives, in the world of SSDs things are more definite. The flash memory that SSDs are built on has a precisely known usage resource, say 10,000 rewrites (to simplify; the exact number depends on the type of memory used in the SSD). Every drive carries firmware that ensures even use of all memory cells and tracks how many rewrites have been performed and how much of the SSD's resource remains. It is this figure that the drive's firmware reports in one of the S.M.A.R.T. parameters, eloquently named SSD Life Left or Media Wearout Indicator, and it is this parameter that the SSDLife program displays in a convenient, user-friendly form.

Of course, the user's first question is: what happens when disk wear reaches 100% (health drops to 0%)? See the answer at the end of this page.

Estimated service life (lifetime)

So, we know the SSD's technical resource precisely and can track how it changes. By analyzing the rate at which health decreases (wear increases), a mathematical calculation can predict the date when health will reach 0% (wear will reach 100%). That is exactly what SSDLife does.
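A minimal sketch of that kind of extrapolation, assuming hypothetical health readings; this shows the general idea only, not SSDLife's actual algorithm.

```python
from datetime import date, timedelta

# Hypothetical health samples (date, health %) of the sort a monitoring
# tool accumulates over time.
samples = [(date(2013, 1, 1), 99.0), (date(2013, 3, 1), 98.4)]

(d0, h0), (d1, h1) = samples[0], samples[-1]
wear_per_day = (h0 - h1) / (d1 - d0).days      # health % lost per day

if wear_per_day > 0:
    days_left = h1 / wear_per_day              # days until health hits 0%
    print("Estimated end of life:", d1 + timedelta(days=round(days_left)))
else:
    print("Not yet defined: too little write activity so far")
```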

note: by the way, some manufacturers cite the total volume written to the drive as one indicator of its service life. For example, for the X25-M, Intel guarantees a total write volume of about 37 TB ("The drive will have a minimum of 5 years of useful life under typical client workloads with up to 20GB host writes per day"). But why can't we rely on this information alone to analyze the drive's state?

Why can't you calculate the date right away?

The reason is simple mathematics: to compute the date right away, we would at least need to know when the first write to the disk occurred, and unfortunately the drive does not provide that information. So after SSDLife's first launch, we have to monitor the SSD's usage intensity for a while to determine its average load. Naturally, the predicted date will shift as disk usage activity changes.

Why does the date change suddenly?

In some cases the service life can change dramatically; this happens when the volume written to the disk spikes. For example, you installed some large game. There is no need to worry: within a few days SSDLife will recognize that this was a temporary surge, the disk will return to its usual write volumes, and the end-of-life date will be adjusted.

Separately, I would like to talk about the service life of an SSD. After all, people are often afraid to swap their traditional hard drives for new-generation drives, on the grounds that SSDs have a rather limited service life. Is that really so? Let's try to sort this question out. So, how long does an SSD last?

By the way, many people call SSDs "SSD disks". That is fundamentally wrong. For what is a disk? Something round and flat. An SSD has no such parts; it contains only chips and circuitry. So let's call things by their proper names.

Types of SSD drives: SLC, MLC, TLC

Perhaps, when speaking about SSD service life, we should first mention the types of drives. In general there are only three: SLC, MLC, and TLC.

  • SLC is the very first type of SSD memory, distinguished by high reliability and high cost. It has the largest rewrite-cycle resource: 10,000.
  • MLC is a more modern option. Manufacturers lowered drive cost by increasing each memory cell's capacity to 2 bits, but along with the price the rewrite-cycle resource also fell, to about 3,000.
  • TLC stores 3 bits per cell. The price became even more affordable, and the rewrite-cycle resource dropped to about 1,000.

SLC, MLC, TLC (bits)

It is worth noting that MLC and TLC drives are what you will most often find on sale today. Their service life has been extended by new auxiliary technologies. For example, drives now spread the rewrite load evenly across all cells, which keeps any one group of cells from wearing out prematurely. MLC and TLC drives also make active use of an SLC-type auxiliary cache, which further extends SSD service life.

How to calculate the service life of an SSD drive (formula)?

There is an approximate formula for calculating the service life of SSD drives. You need to know about it, but you shouldn’t rely on it entirely.

You need to take the number of rewrite cycles, multiply by the volume of the SSD drive and divide by the amount of information recorded per day.

So, we take an MLC type SSD drive, say, 120GB. Let's say we record about 20GB per day (which, by the way, I very much doubt). What happens?

3000 cycles * 120GB / 20GB = 18000 days (49 years)

At first glance the calculation may seem completely meaningless. But do not lose sight of the even distribution of load across all memory cells. How to explain it in plain terms? Suppose, purely theoretically, that half your disk is filled with music you are not going to delete, while the other half is worked hard by the writing and deleting of new files, temporary files, the swap file, and so on. To keep the drive from failing prematurely, the stored data is constantly moved around, freeing up less-worn cells for frequently rewritten files.

Thus, again theoretically, the daily recording volume can increase up to 10 times (this is the maximum). Therefore, our formula becomes:

3000 cycles * 120GB / (20GB * 10) = 1800 days (4.9 years)

Again, I repeat: this is the worst case. First, you will most likely not be writing 20GB a day. Second, the daily write-multiplication factor can be less than 10, even far less.
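Here is the same rule of thumb as a couple of lines of code, reproducing both calculations above; the overhead parameter models the extra writes caused by wear leveling shuffling data around (the text's worst-case factor of 10).

```python
# The article's rule-of-thumb lifespan formula.

def ssd_lifetime_days(pe_cycles, capacity_gb, daily_write_gb, overhead=1):
    return pe_cycles * capacity_gb / (daily_write_gb * overhead)

print(ssd_lifetime_days(3000, 120, 20))               # 18000 days, ~49 years
print(ssd_lifetime_days(3000, 120, 20, overhead=10))  # 1800 days, ~4.9 years
```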

Conclusion:

The lifespan of your SSD largely depends on how you use it. For the average user, its service life will be practically no different from that of a regular HDD. But if your work involves constantly rewriting data on the drive, say more than 60-80GB per day, the difference in service life between an SSD and an HDD will be very noticeable; keep that in mind.



Are you already the proud owner of an SSD, or are you considering buying one? Then I'm sure you're wondering how long it will last. I'll open this series of articles about SSDs with a story about rewrite cycles.

Everyone knows that SSD flash memory has a limited number of rewrite cycles. Often the next conclusion is that you need to do your best to reduce the amount of data written to the disk.

And if you also know that the amount of data physically written to an SSD is actually larger than what the system sends it, unlike an HDD, it gets downright scary to take the drive out of the box :)

In practice, the finite number of rewrite cycles does not matter for the vast majority of users. The resource of modern SSDs and the logic of their controllers allow them to withstand huge volumes of recorded data.

Today on the program

How does an SSD work?

Let's quickly go over some of the operating principles of SSDs.

Garbage collection

SSD flash memory is organized into blocks, which in turn consist of pages. Data is written to individual pages of a block, and existing data cannot be updated simply by overwriting it in place. Moreover, you can only erase a whole block at once!

Therefore, first, the necessary data is moved from the pages of one block to another, and only then the entire block with the remaining unnecessary data is erased, thereby freeing it up for a new entry. This process is called garbage collection.

TRIM

TRIM is an operating system function that marks no-longer-needed data in a special way, so the controller does not have to move that data around by copying it to other blocks. This increases write speed and, most importantly, significantly reduces the number of rewrite cycles consumed.

In modern Windows operating systems this function is enabled by default (you can check with the fsutil behavior query DisableDeleteNotify command), but that alone does not guarantee it is actually working.

Wear leveling

The resource of a solid-state drive depends directly on the number of rewrite cycles its memory blocks endure. If data is regularly written to the same block, that block will quickly die, reducing the drive's capacity. The controller's task, therefore, is to distribute data evenly across all the SSD's blocks.

Increased write volume

Obviously, garbage collection and wear leveling increase the actual amount of data written to the SSD (write amplification). Unlike on an HDD, this volume is considerably larger than what programs and the system request.

There is no fixed multiplier, since the increase in volume depends on a number of factors, including the type of data being recorded.

Sequential writes (for example, copying files) do not inflate the volume much, since the blocks can be filled evenly. Random writes (for example, OS activity) involve much more active shuffling of data across the solid-state drive's blocks.

One way or another, the controller is tasked with efficiently distributing data on the disk, ensuring maximum service life of all memory blocks.
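All four mechanisms above can be watched in a toy simulation. The model below is a deliberate simplification with made-up geometry (32 blocks of 16 pages, with a spare area left by exposing fewer logical pages than physical ones): pages are written once, erasure happens per whole block, garbage collection copies still-valid pages before an erase, and wear leveling picks the least-worn clean block. Running it shows write amplification directly: NAND writes exceed host writes.

```python
import random

PAGES_PER_BLOCK, NUM_BLOCKS = 16, 32
LOGICAL_PAGES = 400                    # < 512 physical pages: spare area

valid = [set() for _ in range(NUM_BLOCKS)]  # live logical pages per block
used = [0] * NUM_BLOCKS                     # programmed pages per block
erases = [0] * NUM_BLOCKS
where = {}                                  # logical page -> block index
host = nand = 0
active = 0                                  # block currently being filled

def fresh_block():
    """Return a writable block; garbage-collect if no clean block exists."""
    global nand
    clean = [i for i in range(NUM_BLOCKS) if used[i] == 0 and i != active]
    if clean:
        return min(clean, key=lambda i: erases[i])     # wear leveling
    # Garbage collection: erase the block with the fewest valid pages
    # and copy those pages back; every copy is an extra NAND write.
    victim = min((i for i in range(NUM_BLOCKS) if i != active),
                 key=lambda i: len(valid[i]))
    erases[victim] += 1                # the whole block is erased at once
    used[victim] = len(valid[victim])  # survivors rewritten after the erase
    nand += used[victim]
    return victim

def write(lpn):
    global host, nand, active
    host += 1
    if used[active] == PAGES_PER_BLOCK:
        active = fresh_block()
    if lpn in where:                   # the old copy simply becomes invalid
        valid[where[lpn]].discard(lpn)
    valid[active].add(lpn)
    where[lpn] = active
    used[active] += 1
    nand += 1

random.seed(1)
for _ in range(20_000):
    write(random.randrange(LOGICAL_PAGES))    # random small rewrites

print(f"host writes: {host}, NAND writes: {nand}, WA = {nand / host:.2f}")
print("block erase counts:", sorted(erases))
```

With purely random rewrites the measured WA comes out well above 1, while the erase counts stay fairly even across blocks, which is exactly the behavior described in the sections above.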

Drive life estimate

My main system drive is now a Kingston HyperX 3K. "HyperX" is just the marketing name of the line, but "3K" reveals one of the drive's key specifications: its endurance in terms of written data.

3K, or 3,000, is the number of rewrite cycles that the 25nm Intel MLC NAND flash at the heart of this drive can withstand. The Kingston HyperX model without the "3K" suffix is also based on 25nm memory, but withstands 5,000 cycles.

Let's do the math for a hypothetical 120GB disk that takes 12GB of writes per day (a lot, as you'll see below). Let's also assume that under your load the controller amplifies the write volume 10 times, which is likewise a generous margin.

In that situation you go through one full rewrite cycle per day. Dividing the number of cycles by 365 gives 8.219 years for 3,000 cycles and 13.698 years for 5,000 cycles (the table rounds these values). After that, in theory, your data should remain intact for another 12 months, though possibly in read-only form.

What manufacturers say

Unfortunately, manufacturers are partly to blame for some users not using their devices' full potential. In official specifications, drive endurance may not be listed for every model, may be tucked into the margins of the documentation, or may be missing altogether.

But there is always a mean time between failures (MTBF). It could be 1 or 2 million hours, but who cares?

An example of a disk for ordinary consumers

Things were exactly this uninformative with my first solid-state drives, the Kingston V100 in 64 and 128GB versions. In 2010 these were typical consumer SSDs: not the fastest, and relatively inexpensive.

However, at the time the company's website carried this statement (the page no longer exists, but Google remembers):

Recommended workloads for the SSDNow series M, V+ and V is up to 20GB writes per day for three years. For the “E” Series we recommend writes up to 900GB per day for the 32GB and 1.8TB per day for the 64GB SSD.

So the resource of the 64GB disk is 20GB per day for three years, or about 22TB. Note that for the higher-end series it is significantly greater.

That was long ago, and those drives are no longer in production. For the series that replaced them, the Kingston V200 and V300 with the same three-year warranty, the endurance is already stated explicitly:

  • 60GB: 32TB
  • 120GB: 64TB
  • 240GB: 128TB

The 64GB drive lives in my netbook, and the 128GB drive has been working as a system drive in my main PC for exactly a year.

Now it has become auxiliary, giving way to Hyper-X.

An example drive for enthusiasts

Did you read reviews, comparisons, and testimonials before buying an SSD? Me too! Based on personal experience and a good price/quality ratio at the time, I took the above-mentioned Kingston HyperX 3K, positioned precisely for those who want to go faster.
Please do not take the mention of this or any other drive as a purchase recommendation. These are just examples.

In addition to higher operating speed, it has a deeper endurance resource (at the time of publication, this link carried the data below):

  • 90GB: 57.6TB
  • 120GB: 76.8TB
  • 240GB: 153.6TB

In other words, for the 120GB drive the company guarantees an average of about 70GB of writes per day over the three years of SSD support.

Let's compare this drive with other SSDs that use the same synchronous 25nm Intel MLC NAND. The Intel 330 (with exactly the same memory and controller as the HyperX 3K) appeared in the summer of 2012, and its service life is formulated as follows:

The SSD will have a minimum of three years of useful life under typical client workloads with up to 20 GB of host writes per day.

20GB per day is about 22TB over the three-year warranty period, although it is unclear whether this depends on the storage capacity. It's interesting that Kingston is more optimistic in its assessment of Intel flash memory than the NAND manufacturer itself :)

The only problem is that not every SSD lets you extract this information. For example, Kingston exposes it only in its newer models, while Samsung drives hide these numbers altogether.

This is the information about my HyperX after three months of use. Attribute ID 241, Lifetime Writes From Host, holds the cumulative amount of data written, in gigabytes. It turns out I write about 7GB per day to the disk. By the way, ID 231 reports the drive's remaining resource as a percentage.

I hibernate my PC, which has 8GB of RAM, at least once a day. Not to mention that, besides my daily work, my main virtual machine runs on this disk.

If you believe the declared resource of 76.8TB, at this rate the drive will last me 30 years. Hmm... do you remember that 30 years ago Windows was installed from five 5.25" floppy disks? :) SSDLife is less optimistic: "only" 9 years.

Where are your disks from ten years ago now?

What will happen in 10 years

Out of curiosity, I dug up an old receipt that survived. Here is a 40GB WD, bought in passing somewhere between nondescript fish, chicken and orange juice (apple, according to updated data ;)

I'm sure that drive never failed, but I have no idea where it is now! By the way, for the same money today you are offered mid-range SSDs with a capacity of 128GB.

In the coming years, SSD shipments will increase, and drive capacities will grow.

According to Gartner, by 2016 the average capacity of a client SSD will reach 319GB. And in another 7 years? I think you will no longer need your old 64 or 128GB drive, even if it is still alive.

Don't fall victim to a stereotype

An SSD can fail for any number of reasons, regardless of the manufacturer. And, as a rule, the cause of death is not an exhausted rewrite-cycle resource.

At one time, Kingston regularly offered discounts on the V100 series, because its reputation had been tarnished by the first batches: they often turned into bricks due to firmware problems, and a firmware update made the problem disappear. The same approach worked for my brother, who had previously returned two identical OCZ drives in a row.

With this post I want to convince you that the limited number of rewrite cycles is not a significant factor in the lifespan of a modern SSD in a home PC.

Of course, if you regularly write terabytes of torrents to it, it will not last as long, but for such tasks it makes more sense to use hard drives with their lower cost per gigabyte. The system, programs, games and personal files are where the SSD's full potential belongs!

Discussion and poll

In the next blog post, I'll break down the common mistakes people make when "optimizing" their SSDs. All the main points are ready, but I also hope that your comments will help me supplement them :)

Therefore, I ask you:

  1. Tell us what kind of SSD you have, how long you have owned it, and share your impressions of use.
  2. Provide screenshots:
  • performance (CrystalDiskMark)
  • S.M.A.R.T. attributes and life expectancy (CrystalDiskInfo or SSDLife)
  3. List the steps you have taken to optimize your SSD.

The survey that used to be here has been removed because the web survey service has ceased to exist.

    Do you want your desktop or laptop to run fast and do without a constantly noisy hard drive? Then install an SSD in it. This device works much faster than a conventional hard drive and, just as importantly, operates completely silently.

    The high speed of an SSD is noticeable not only when copying files, but also during normal work with the computer: the operating system loads twice as fast, programs start almost instantly, and switching between tasks happens without the delays that usually arise from reading data off an HDD. But how do SSDs work? Why are they so much faster than HDDs? What do you need to know when using them? Let's figure this out together.

    What is an SSD drive?

    This is an abbreviation of the English phrase Solid State Drive, i.e. a solid-state storage medium. It is a computer drive that stores information in flash memory chips, the same principle used in ordinary USB flash drives. As with a flash drive, when an SSD is disconnected from its power source, the recorded data is fully retained for later reading. Unlike standard hard drives, however, SSDs have no moving parts, which is why they operate absolutely silently and are insensitive to shocks and vibrations.

    How fast do SSD drives work?

    SSDs can transfer data three or even four times faster than regular hard drives. In normal operation they achieve read speeds of over 400 MB/s and write at around 250 MB/s. For comparison: traditional 3.5-inch HDDs read and write at approximately 115 MB/s, and 2.5-inch laptop hard drives manage only 75 MB/s.

    SSDs bring the greatest benefit when accessing short blocks of data scattered randomly across the medium. The SSD samples we tested averaged nearly 22,000 reads and writes per second, about 100 times more than regular hard drives. This parameter is very important, for example, when starting a computer, when the operating system must read many different drivers from the disk. Here standard hard drives prove slower, since they constantly have to move their mechanical read/write heads back and forth.
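    If you want to get a feel for this figure yourself, below is a rough toy benchmark in Python. It is only an illustration: the test file path is an assumption, and unlike CrystalDiskMark it does not bypass the operating system's cache, so treat its results as optimistic.

    ```python
    import os, random, time

    # Toy random-read benchmark: 4KB reads at random offsets in a file.
    # Not a substitute for CrystalDiskMark; the OS page cache will
    # inflate the numbers, especially on repeated runs.
    PATH = "testfile.bin"   # hypothetical large file on the drive under test
    BLOCK = 4096            # 4KB, the typical block size for random I/O tests
    COUNT = 10_000

    size = os.path.getsize(PATH)
    with open(PATH, "rb") as f:
        start = time.perf_counter()
        for _ in range(COUNT):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)
        elapsed = time.perf_counter() - start

    print(f"{COUNT / elapsed:.0f} random 4KB reads per second")
    ```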

    It is also worth mentioning the uneven speed of standard HDDs: the closer the data sits to the end of the disk, the slower it is read and written. SSDs are completely free of this drawback; their speed graphs are almost horizontal, both when reading from the disk and when writing to it.

    Are all SSD drives equally fast?

    Let's be honest: no. First of all, different models differ in the number of read and write operations they can perform per second. The interface type also directly limits the maximum transfer rate: some drives with a SATA 2 interface achieve write speeds of no more than 250 MB/s, while the vast majority of current models use the newer SATA 3 interface and reach write speeds of up to 500 MB/s.

    Is it possible to change a standard hard drive to an SSD?

    Of course, nothing stops you from swapping the drive. Almost all modern SSDs have the same dimensions as standard 2.5-inch hard drives, so they fit into laptops without any problems. For mounting in a desktop computer, stores sell special 3.5-inch adapter frames.

    SSDs are, of course, still very expensive: 120 GB models start at around 4 thousand rubles in stores, money that also buys a regular 3 TB hard drive. In practice, therefore, SSDs are mostly used as a system drive for the operating system. Programs and games are also kept on them to speed up work, while images, music and videos can live on an ordinary hard drive without any noticeable impact on overall system speed.

    What size SSD drive is optimal?

    60GB SSDs are relatively inexpensive at two to three thousand rubles, but their capacity is small, especially by today's standards. Even if you install only the operating system and a few programs on such a drive, you may soon run out of space. In addition, 60GB SSDs are slower than 120GB and 240GB models. A 120GB SSD (about three to four thousand rubles) has enough room both for the operating system and for various programs, and after installing one you will feel a significant increase in system performance. Music and video files are still better kept on a separate hard drive.

    240 GB SSDs are currently unreasonably expensive for the average user - they cost about nine thousand rubles. They are perfect for users who store large amounts of data on their machines, but do not want to insert additional hard drives into them.

    Is the average lifespan of an SSD shorter than that of a regular hard drive?

    The working resource of the memory cells is, of course, limited: according to official manufacturer data, modern flash chips withstand no more than 5 thousand write/erase operations. That is why the internal controller performs wear leveling, redistributing writes so that all of the drive's cells wear out evenly. In practice, this makes the service life of an SSD almost the same as that of a conventional hard drive.
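    The principle behind this redistribution can be shown with a deliberately simplified toy model. Real controllers are far more sophisticated (spare area, garbage collection, static leveling), so take this purely as an illustration of the idea:

    ```python
    # Toy wear-leveling model: every write goes to the least-worn block,
    # so no single group of cells is exhausted ahead of the others.
    NUM_BLOCKS = 8
    erase_counts = [0] * NUM_BLOCKS

    def write_block() -> int:
        victim = erase_counts.index(min(erase_counts))  # pick least-worn block
        erase_counts[victim] += 1                       # erase before rewrite
        return victim

    for _ in range(100):
        write_block()

    print(erase_counts)   # wear spreads evenly: [13, 13, 13, 13, 12, 12, 12, 12]
    ```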

    In a test of six SSDs, ComputerBild magazine simulated almost ten years of use. Even thousands of write, read and erase operations, as well as repeatedly powering the drives on and off, did not cause them to fail.

    What is important to know when using SSD drives?

    Computer: The effect of an SSD is really noticeable only in desktop computers and laptops with dual-core processors (Core i3 and up) and 4 GB of RAM. In addition, the computer must support the SATA 3 interface.

    Operating system: SSDs work optimally only with Windows 7; manufacturers do not recommend using them with older versions. The reason is that only Windows 7 automatically disables defragmentation for SSDs, which is harmful to them.

    In addition, Windows 7 supports the important TRIM function. With it, the operating system tells the SSD controller which data blocks no longer contain valid information and can be reused. This really does extend the longevity of an SSD.
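    On Windows, by the way, you can check whether TRIM is active using the standard fsutil utility. Here is a small sketch that wraps it (Windows-only; DisableDeleteNotify = 0 means TRIM is enabled):

    ```python
    import subprocess

    # Ask Windows whether delete notifications (TRIM) are suppressed.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True).stdout

    print(out.strip())                 # e.g. "DisableDeleteNotify = 0"
    print("TRIM is enabled" if "= 0" in out else "TRIM is disabled")
    ```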

    Memory filling: If an SSD is almost completely full, the speed of accessing files drops noticeably. The drive's service life also suffers, since the remaining free areas are overwritten more and more often and begin to fail sooner. That is why manufacturers recommend keeping at least 15% of the drive free.