Programs for working with Kingston SSDs. Checking an SSD for errors

There is an opinion that one of the most significant disadvantages of solid-state drives is their finite and, moreover, relatively low reliability. Indeed, because of the limited endurance of flash memory, caused by the gradual degradation of its semiconductor structure, any SSD sooner or later loses its ability to store information. The question is when, and for many users it remains the key one: many buyers choose drives based not so much on performance as on reliability figures. Manufacturers themselves add fuel to the fire: for marketing reasons, the warranty terms of their consumer products stipulate comparatively low permitted write volumes.

However, in practice mass-produced solid-state drives demonstrate more than sufficient reliability to be trusted with user data. An experiment showing that there is no real reason to worry about their finite endurance was conducted some time ago by the website TechReport. Their test showed that, all doubts notwithstanding, SSD endurance has grown to the point where it hardly needs to be thought about at all. The experiment confirmed in practice that most consumer drive models can write about 1 PB of information before failing, and particularly successful models, like the Samsung 840 Pro, stay alive even after digesting 2 PB of data. Such write volumes are practically unattainable in an ordinary personal computer, so a solid-state drive's lifespan simply cannot run out before the drive becomes hopelessly obsolete and is replaced by a newer model.

However, that testing failed to convince the skeptics. The point is that it was carried out in 2013-2014, when solid-state drives were built on planar MLC NAND manufactured on a 25-nm process. Such memory can withstand about 3000-5000 program-erase cycles before degrading, but entirely different technologies are in use now. Today, flash memory with three-bit cells has arrived in mass-market SSD models, and modern planar processes have moved to 15-16 nm geometries. At the same time, flash memory with a fundamentally new three-dimensional structure is becoming widespread. Any of these factors can radically change the reliability picture, and taken together, modern flash memory promises a resource of only 500-1500 rewrite cycles. Is drive endurance deteriorating along with the memory's, and do we need to start worrying about reliability again?

Most likely not. The point is that alongside the changes in semiconductor technology, the controllers that manage flash memory are continuously improving. They introduce ever more advanced algorithms meant to compensate for what is happening in NAND. And, as manufacturers promise, current SSD models are at least as reliable as their predecessors. Yet objective grounds for doubt remain. On a purely psychological level, drives based on the old 25-nm MLC NAND with its 3000 rewrite cycles look far more solid than modern SSD models with 15/16-nm TLC NAND, which, all else being equal, can guarantee only 500 cycles. Nor is the increasingly popular TLC 3D NAND especially encouraging: although it is produced on coarser process nodes, it is also subject to stronger mutual interference between cells.

Taking all this into account, we decided to conduct our own experiment, which would allow us to determine what kind of endurance can be guaranteed by current drive models based on the currently most popular types of flash memory.

Controllers decide

The finite lifespan of drives built on flash memory has long surprised no one. Everyone is used to the fact that one of the characteristics of NAND memory is a guaranteed number of rewrite cycles, beyond which cells may begin to corrupt data or simply fail. This follows from the very principle of such memory's operation, which is based on trapping electrons and holding a charge inside a floating gate. A cell changes state when a relatively high voltage is applied to the floating gate, forcing electrons to cross a thin dielectric layer in one direction or the other and remain held in the cell.

Semiconductor structure of a NAND cell

However, this movement of electrons is not harmless: it gradually wears out the insulating material and ultimately leads to a breakdown of the entire semiconductor structure. There is also a second problem behind the gradual deterioration of cells: during tunneling, electrons can get stuck in the dielectric layer, preventing correct recognition of the charge stored in the floating gate. All this means that the moment when flash memory cells stop working normally is inevitable. New process technologies only aggravate the problem: as production geometry shrinks, the dielectric layer gets thinner, reducing its resistance to these harmful effects.

However, to say that there is a direct relationship between the resource of flash memory cells and the life expectancy of modern SSDs would not be entirely correct. The operation of a solid state drive is not a straightforward process of writing and reading to flash memory cells. The fact is that NAND memory has a rather complex organization and special approaches are required to interact with it. Cells are organized into pages, and pages are organized into blocks. Writing data is only possible to blank pages, but in order to clear a page, the entire block must be reset. This means that writing, or even worse, changing data, turns into a complex multi-step process, including reading the page, changing it and re-writing it to free space, which must first be cleared. Moreover, preparing free space is a separate headache, requiring “garbage collection” - the formation and cleaning of blocks from pages that have already been used, but have become irrelevant.

Scheme of operation of flash memory of a solid-state drive

As a result, the actual volume of writes to flash memory may differ significantly from the volume of operations initiated by the user. For example, changing even one byte can entail not only writing an entire page, but even the need to rewrite several pages at once to first free a clean block.

The ratio between the volume of writes requested by the user and the actual load on the flash memory is called write amplification. This coefficient is almost always higher than one, and in some cases much higher. However, modern controllers, through buffering of operations and other intelligent approaches, have learned to reduce write amplification effectively. Technologies that help extend cell life, such as SLC caching and wear leveling, have become widespread. On the one hand, they switch a small part of the memory into a gentler SLC mode and use it to consolidate small scattered operations. On the other hand, they spread the load across the memory array more evenly, preventing needless repeated rewrites of the same area. As a result, storing the same amount of user data on two different drives can, from the flash memory array's point of view, create completely different loads - it all depends on the algorithms used by the controller and firmware in each specific case.
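
On many drives, write amplification can even be estimated at home: vendor-specific SMART attributes often expose both host writes and actual NAND writes, and their ratio is the current coefficient. A rough sketch follows - the attribute names, device path, and figures are illustrative, since every vendor reports these counters differently:

    # dump the SMART attribute table and look for vendor-specific counters
    # such as "host writes" and "NAND/flash writes" (names and IDs vary by vendor)
    sudo smartctl -A /dev/sda

    # suppose the drive reports 2.0 TB of host writes and 2.6 TB of NAND writes;
    # the write amplification factor is then simply their ratio:
    echo "scale=2; 2.6 / 2.0" | bc    # -> 1.30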

There is another side to this: garbage collection and TRIM, which prepare clean blocks of flash memory pages in advance for the sake of performance and can therefore move data from place to place without any user involvement, make an additional and significant contribution to the wear of the NAND array. But the specific implementation of these technologies also depends heavily on the controller, so here too the differences in how SSDs manage their own flash memory resource can be substantial.

As a result, all this means that the practical reliability of two different drives with identical flash memory can differ quite noticeably purely because of different internal algorithms and optimizations. So when talking about the endurance of a modern SSD, one has to understand that this parameter is determined not so much by the endurance of the memory cells as by how carefully the controller handles them.

The operating algorithms of SSD controllers are constantly being refined. Developers not only try to optimize the volume of writes to flash memory, but also introduce more efficient methods of digital signal processing and read error correction. Some of them additionally set aside a large reserve area on the SSD, further reducing the load on the NAND cells. All of this affects endurance as well. SSD manufacturers thus hold plenty of levers for influencing the final endurance of their products, and flash memory endurance is only one parameter in that equation. This is precisely what makes endurance testing of modern SSDs so interesting: despite the broad adoption of NAND with relatively low endurance, current models do not necessarily have to be less reliable than their predecessors - progress in controllers and their operating methods may well compensate for the fragility of modern flash memory. Compared to previous generations of SSDs, only one thing is unchanged: the resource of a solid-state drive is finite in any case. How much it has changed in recent years is exactly what our testing should show.

Testing methodology

The essence of SSD endurance testing is very simple: keep rewriting data on the drives continuously, trying to establish their endurance limit in practice. However, a simple linear write does not quite suit the purpose. As discussed in the previous section, modern drives carry a whole set of technologies for reducing the write amplification factor; moreover, they perform garbage collection and wear leveling differently and react differently to the operating system's TRIM command. That is why the most correct approach is to interact with the SSD through the file system, approximately reproducing the profile of real-world operations. Only then do we get a result that ordinary users can treat as a guide.

Therefore, in our endurance test we use drives formatted with NTFS, on which two types of files are continuously and alternately created: small ones with a random size from 1 to 128 KB, and large ones with a random size from 128 KB to 10 MB. During the test, these randomly filled files multiply until less than 12 GB of free space remains on the drive; when this threshold is reached, all created files are deleted, a short pause is made, and the process repeats. In addition, the drives under test also hold a third, permanent type of file. These files, 16 GB in total, do not take part in the erase-rewrite cycle but are used to verify that the drive works correctly and the stored information remains stably readable: on each fill cycle we compute the checksum of these files and compare it against a reference value calculated in advance.

The described test scenario is reproduced by the special program Anvil’s Storage Utilities version 1.1.0; the status of drives is monitored using the CrystalDiskInfo utility version 7.0.2. The test system is a computer with an ASUS B150M Pro Gaming motherboard, a Core i5-6600 processor with integrated Intel HD Graphics 530 and 8 GB DDR4-2133 SDRAM. Drives with a SATA interface are connected to the SATA 6 Gb/s controller built into the motherboard chipset and operate in AHCI mode. The driver used is Intel Rapid Storage Technology (RST) 14.8.0.1042.
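
For those who want to reproduce something similar without the dedicated utilities, here is a rough sketch of the same fill/verify/delete cycle as a Linux shell script. It is simplified (one file-size range instead of two; the path, threshold, and reference file are illustrative assumptions), and it deliberately wears the drive out, so run it only on a disk you are willing to sacrifice:

    #!/bin/bash
    # Rough sketch of the fill/verify/delete cycle described above.
    # TARGET, the size range, and the 12 GiB threshold are illustrative;
    # the static reference file must exist before the first pass.
    TARGET=/mnt/testssd
    REF_SUM=$(md5sum "$TARGET/static/reference.bin" | cut -d' ' -f1)
    while true; do
        i=0
        # create randomly sized files until less than 12 GiB remains free
        while [ "$(df --output=avail -B1G "$TARGET" | tail -1)" -gt 12 ]; do
            size_kb=$(( RANDOM % 10240 + 1 ))   # 1 KiB .. 10 MiB
            dd if=/dev/urandom of="$TARGET/fill_$i" bs=1K count="$size_kb" status=none
            i=$((i + 1))
        done
        # verify that the permanent file set is still readable and intact
        if [ "$(md5sum "$TARGET/static/reference.bin" | cut -d' ' -f1)" != "$REF_SUM" ]; then
            echo "Checksum mismatch - the drive no longer stores data reliably"
            exit 1
        fi
        rm -f "$TARGET"/fill_*                  # delete the fill files and repeat
        sleep 10
    done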

The list of SSD models taking part in our experiment currently includes more than five dozen items:

  1. ADATA XPG GAMMIX S11 240 GB (AGAMMIXS11-240GT-C, firmware SVN139B);
  2. ADATA XPG SX950 240 GB (ASX950SS-240GM-C, firmware Q0125A);
  3. ADATA Ultimate SU700 256 GB (ASU700SS-256GT-C, firmware B170428a);
  4. ADATA Ultimate SU800 256 GB (ASU800SS-256GT-C, firmware P0801A);
  5. ADATA Ultimate SU900 512 GB (ASU900SS-512GM-C, firmware P1026A);
  6. Crucial BX500 240 GB (CT240BX500SSD1, firmware M6CR013);
  7. Crucial MX300 275 GB (CT275MX300SSD1, firmware M0CR021);
  8. Crucial MX500 250 GB (CT250MX500SSD1, firmware M3CR010);
  9. GOODRAM CX300 240 GB (SSDPR-CX300-240, firmware SBFM71.0);
  10. GOODRAM Iridium Pro 240 GB (SSDPR-IRIDPRO-240, firmware SAFM22.3);
  11. Intel Optane SSD 900P 280 GB (SSDPED1D280GAX1, firmware E2010325);
  12. Intel SSD 545s 256 GB (SSDSC2KW256G8, firmware LHF002C);

Not long ago I bought myself a solid-state drive on AliExpress - an SSD, in other words. The disk arrived, was installed, and worked great for several months. But lately I began to notice that the disk often "chokes", and sometimes I simply had to power the laptop off the hard way. Doubt crept in: had my Chinese "friend" given up the ghost? How do you check that an SSD is working properly?

At first I blamed Ubuntu - maybe it had started glitching? I install new programs often, and although Linux is much more resilient in this respect than Windows, it can be wrecked too.

And so yesterday I decided to reinstall the system while I had the time. But no such luck! After the first installation, I was unable to access my home folder, which is encrypted.

Although I didn't scan it all the way to the end (perhaps in vain), it was clear that on the whole the cells were working fine. But I didn't stop there: I downloaded another program, HDDScan, and scanned the drive with it.

And this program showed that my first sector was dead! Just one - could it really cause such problems? Or is this program only suitable for regular HDDs? I don't know yet, but I know what I'm going to do.

Since this is the very first sector, when partitioning the disk I will leave an unpartitioned area at the beginning so that this sector is not used. If that doesn't help, then I don't know what to do.

How to check SSD for errors in Linux?

In Linux, as far as I understand, there is only a console program for this purpose (although maybe I didn't look hard enough); the check goes like this:

sudo badblocks -v /dev/sdc > ~/test.list

The badblocks utility will check the disk for bad sectors and write a report to the test.list file, which will appear in the home directory. True, the output isn't very readable, but it does the job. Maybe you know better programs?
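
One answer that usually comes up is smartmontools: instead of reading every sector, it asks the drive itself for its SMART health data. A minimal session looks roughly like this (the device name and the Debian/Ubuntu package manager are assumptions - adjust to your system):

    sudo apt install smartmontools       # package name on Debian/Ubuntu
    sudo smartctl -a /dev/sda            # identity, health verdict and SMART attributes
    sudo smartctl -t short /dev/sda      # start the drive's built-in short self-test
    sudo smartctl -l selftest /dev/sda   # read the self-test results a few minutes later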

I will try to install Ubuntu 15.04 on this SSD, testing both the new Ubuntu (I haven't tried installing it yet) and the disk at the same time. I'll write in the comments how it all turned out...


10 comments

    Today I tried to install Ubuntu 15.04 on this SSD disk, leaving the initial area of the disk unallocated - the installation froze.

    I didn’t continue and decided to install Windows 7 - everything was installed, although I don’t know yet whether it will work without failures.

    I decided to do this: I put the SSD with Windows 7 into the laptop and installed Linux on an external HDD. Now I can work in Windows when necessary, and I can boot into Linux from the external drive, so everything stays simple on the Linux side.

    I had a strange problem with a disk too (only it was a regular disk): in Linux it kept writing that something was being restarted, although the disk worked. In Windows it just became terribly slow and thrashed across the disk furiously. Yet everywhere it showed that the disk was fine.
    It turned out that I needed to switch the controller to IDE mode in the BIOS, instead of the strange new one, and everything was okay again.

    That happens, yes - but SSD drives are still very different from regular ones, and the technology is not yet mature, hence the problems, especially with the Chinese ones. But everyone wants something cheaper!

    I personally have been using an SSD as a system drive for about six months now. I had a Sony VAIO laptop with a regular hard drive, and I installed my SSD there. I'm running Ubuntu 14.04.3; a reboot, when needed, takes 11-12 seconds - I timed it myself with a stopwatch :-) In place of the laptop's optical drive there is an additional 1TB hard drive (mounted as the home directory).
    I use BTRFS everywhere. Previously I used Ext4. I haven't noticed any glitches.
    Oh yes! Mine is a 120-gig drive from Kingston. Mounted as root.

Solid-state drives have become a real innovation in the world of computer technology. The operating speed of an SSD is many times higher than that of the average hard drive (HDD).

As a result, a large number of personal computers and laptops are now shipped with the new drives, and users often replace the old type of drive with a new one themselves.

Unfortunately, the standard capabilities of the Windows operating system do not provide the ability to fine-tune and optimize SSD drives. A large amount of software has been created specifically for these tasks. Among them, the free SSD Tweaker program is especially popular.

You can download SSD Tweaker for free from our website. It takes the form of a compact manager with a simple and intuitive interface. With its help, even far-from-advanced users can flexibly tune an SSD for their OS, which is undoubtedly a point in its favor.

Using the power of SSD Tweaker, you can centrally configure your drive in just a few simple steps. All actions will be carried out semi-automatically with minimal participation from you.

So, after a few simple operations, SSD Tweaker will configure the Windows indexing mechanism and the system defragmentation service, adjust the system cache area, and tune the file system parameters.

The program's arsenal also includes control over the Superfetch technology, which helps use drive memory optimally. In addition, there are special tools for managing the page file settings, and the DIPM option (device-initiated power management) helps keep an eye on the device's power consumption.


Unlike other programs for configuring and optimizing the operation of solid-state drives, the utility can be used completely free of charge. In addition, the program interface can be presented in Russian.

SSD drives are an increasingly common standard for storing information these days. They stand out for their high speed compared to hard drives, which held a monopoly in this area only 5-7 years ago. But speed has its price: an SSD's endurance is seriously limited. How to check the health of an SSD drive is the subject of this article.

It is worth noting that SSD drives are built on semiconductor memory chips, somewhat like RAM, while their direct competitors, hard drives, use a magnetic surface and read heads. Using chips greatly improves read speeds but limits the drive's lifespan. This resource is measured in write cycles: how many times information in a given memory sector can be written and erased before the sector fails. Typically this resource lasts 3-4 years - that is the warranty the drive manufacturers themselves usually give.
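
To get a feel for the scale, here is a back-of-the-envelope estimate with purely illustrative numbers - a 240 GB drive whose cells endure 1000 program-erase cycles, at a write amplification factor of 1.5:

    # total host writes ≈ capacity × P/E cycles ÷ write amplification
    echo "scale=0; 240 * 1000 / 1.5" | bc    # -> 160000 GB, i.e. roughly 160 TB

At, say, 25 GB of writes per day, that budget lasts well over a decade - which is why the time-based warranty usually runs out first.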

This does not mean that once the warranty period expires the disk will immediately die and show no signs of life. The process is individual for each drive. Some will simply slow down at first; others will stop writing data altogether and work in read-only mode. But that is not the point. The point is that the disk needs to be checked regularly, so that if problems appear you have time to save the important data.

So, let's figure it out.

CrystalDiskInfo

This is a free utility for Microsoft Windows users. It is a very simple program whose main advantage is a completely intuitive interface.


Note! Based on the other SMART attributes presented in the list, you can also draw conclusions about the disk's health: compare the worst and current values with the figure in the "Threshold" column and analyze the result. Important! Also pay attention to any glitches and read/write errors.

These are also important warning signs; if they are present, you should seriously consider replacing the drive, or at the very least making a backup copy of the information stored on it.

DriveDx

An excellent program for monitoring the status of your drive if you use the macOS platform.

Note! The program is paid, but you can use it in trial mode. Whether the full version is worth paying for is yours to decide - try it first.

  1. Download the program from the official website of the developer. When you launch it, you will see a system warning that this program was downloaded from the Internet and may be dangerous. Allow the system to open the program.

  2. The interface of the program itself will open. At the top of the window you will see bars that indicate the status of the drive, and at the bottom you will see more detailed tests and SMART scan results.

  3. Study the contents of the window - there may be a colossal amount of useful information there. For example, it would be useful to know the total operating time of the drive or the number of power-on cycles.

  4. The “Problems Summary” section contains reports on disk errors. If everything here is zero, you don't have to worry: everything is in perfect order.

  5. What you should pay attention to is the “Health Indicators” line, located on the left side of the window immediately below the name of the drive that we are scanning. You will see a list of a large number of sensors that will show you the life of your drive from all angles.

So, in our case it is worth paying attention to the drive's temperature, and to the fact that the rewrite-cycle counter has already reached rather large numbers. Otherwise everything is fine.
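
By the way, if you would rather not pay at all, smartmontools also runs on macOS - a free command-line alternative to DriveDx, not part of it (Homebrew and the device name here are assumptions):

    brew install smartmontools
    sudo smartctl -a /dev/disk0    # internal drives appear as /dev/disk0, /dev/disk1, ...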

Let's sum it up

It is a pity that, at the moment, no operating system has a full built-in function for monitoring drive health - it would be very convenient to keep an eye on your disk without extra steps. But we can always download a monitoring application and see all the necessary information.

What recommendations can we give so that your SSD serves you as long as possible? If the operating system lives on it, try to move heavy file operations to a hard drive, if you have one. Apply the system tweaks recommended for SSDs; there are plenty of guides on the Internet on this topic.

Important! Monitor the temperature - overheating can seriously damage the microcircuits. And most importantly, make backup copies of your data.

You can check the status as much as you like, but complete confidence that your disk will not die today is, in principle, impossible. Remember this. Good luck!

Video - How to check the performance of an SSD drive

June 19, 2010 at 1:03 pm

How I ruined an SSD in two months


Epigraph

“Never trust a computer that you can’t throw out the window.”
Steve Wozniak

Two months ago I installed an SSD drive in my laptop. It worked great, but last week it suddenly died due to cell depletion (I believe). This article is about how it happened and what I did wrong.

Description of the environment

  • User: Web developer. That is, such things as virtual machines, eclipse, and frequent updates of repositories are in use.
  • OS: Gentoo. That is, the world is often “reassembled.”
  • FS: ext4. That is, a journal is being written.

So, the story begins in April, when I finally got around to copying my partitions onto the 64GB SSD I had bought back in September. I am deliberately not naming the manufacturer and model, because I still haven't fully figured out what happened, and it doesn't really matter.

What did I do to make it work longer?

Of course, I studied the numerous publications on how to take care of SSD drives. And this is what I did:
  • Set noatime for the partitions, so that the last-access time is not updated every time a file is read.
  • Increased the RAM to the maximum and disabled swap.
I did nothing else, because I believe the computer should serve the user and not the other way around, and needless ritual fiddling is wrong.

S.M.A.R.T.

Three days before the crash, I started wondering: how do I find out how much happiness I have left? I tried smartmontools, but it displayed incorrect information. I had to download the datasheet and write a patch for it.
After writing the patch, I dug up one interesting parameter: average_number_of_erases/maximum_number_of_erases = 35000/45000. But having read that MLC cells can withstand only 10,000 cycles, I decided these parameters didn't mean what I thought they meant, and gave up on them.

Chronicle of the crash

Suddenly, inexplicable things started happening during work - for example, new programs would not start. Out of curiosity I looked at that same S.M.A.R.T. parameter: it was already at 37000/50000 (+2000/5000 in three days). Rebooting was no longer possible; the file system of the main partition could not be read.
I booted from a live CD and started checking. The check showed a lot of broken inodes. During the repair, the utility moved on to testing for bad sectors and marking them. It all ended the next day with the following result: 60GB out of 64GB were marked as bad.
Note: in an SSD, a cell is considered dead when new information can no longer be written to it; reading from such a cell is usually still possible. Because of this, running the badblocks utility in its default read-only mode is unlikely to find anything.
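
In other words, on a worn-out SSD only a write test tells the truth. badblocks has a non-destructive read-write mode for exactly this case (the device name is an example; unmount the filesystem first, and have a backup anyway):

    sudo badblocks -nsv /dev/sdc    # -n: non-destructive read-write test,
                                    # -s: show progress, -v: verbose
    # a destructive, data-erasing write test also exists: badblocks -wsv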

I decided to run the firmware-flashing utility, since it not only reflashes but also reformats the disk. The utility started formatting, groaned, and reported that the permissible number of bad sectors had been exceeded and there were failures, so formatting could not be completed.
After that, the disk began identifying itself under a very strange name and model number, with a size of 4GB. And from then on, nothing could see it except specialized utilities.
I wrote to the manufacturer's support. They recommended reflashing it and, if that didn't work, returning it to the seller. There are still 2 years of warranty left, so I'll give it a try.
I conclude this section with thanks to Steve Wozniak, who taught me how to make periodic backups.

What happened

To be honest, I don't know myself. My assumption is this: S.M.A.R.T. did not lie, and the cells really were worn out (this is indirectly confirmed by the backup I made two days before the crash - when unpacking it, the creation dates of some files turned out to have been reset). And during the bad-sector check, the disk controller simply allowed every cell that had exceeded its permissible number of write cycles to be marked as bad.

What to do if you have an SSD

Windows
Install Windows 7; it is optimized for such disks as much as possible. And install plenty of RAM.
MacOS
Most likely, only those Macs that ship with an SSD from the factory are optimized for it.
FreeBSD
Install 9.0. Read the tips for Linux and think about which of them you can apply.
Linux
  • Install kernel 2.6.33, which gained an optimization for such disks in the form of the TRIM command (the modern equivalents of these settings are sketched right after this list).
  • Add memory so that you can safely disable swap.
  • Set noatime on the mounted partitions.
  • Use a copy-on-write file system or a non-journaling one (such as ext2).
    At the moment, copy-on-write filesystems are rather hard to use: ZFS currently works only through FUSE, while nilfs and btrfs complain on mount that their on-disk format has not been finalized yet.
  • Turn on the NOOP I/O scheduler; it avoids request-reordering work that is useless for an SSD.
  • Conceptually correct, though it will not help the disk much: move temporary files to tmpfs.
  • Systems that write logs intensively should keep them somewhere else. This mainly concerns servers, where a separate log server can be set up without any problem.
  • Get S.M.A.R.T. utilities that display the state of an SSD correctly, so that you can monitor the disk periodically.
  • Just go easy on the disk. For Gentoo users this additionally means not "rebuilding world" too often.
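
On a modern distribution the same settings look roughly like this (device names and fstab lines are examples; "none" is the multi-queue successor of the old noop scheduler):

    # periodic TRIM - most systemd distributions ship a weekly timer:
    sudo systemctl enable --now fstrim.timer
    sudo fstrim -av                          # or trim all mounted filesystems once by hand

    # noatime - add it to the options column in /etc/fstab, e.g.:
    #   UUID=...  /  ext4  noatime,errors=remount-ro  0  1

    # I/O scheduler - check and, if desired, switch it:
    cat /sys/block/sda/queue/scheduler
    echo none | sudo tee /sys/block/sda/queue/scheduler

    # temporary files in RAM - one line in /etc/fstab:
    #   tmpfs  /tmp  tmpfs  defaults,noatime  0  0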

Questions for the Habr community

  • Is it really possible to kill MLC cells in 2 months? I understand, of course, that I didn't baby the disk, but I didn't do anything supernatural either - I just worked as usual.
  • Is this a warranty case?

UPD: The disk I had was Transcend TS64GSSD25S-M.
UPD2: The comments contain very good feedback on Intel and Samsung SSDs. People are also surprised that an SSD could be killed so quickly. Believe me, I was just as perplexed. Still, it may be that this particular SSD series was thrown together in a hurry and really can be killed quickly.
UPD3: In the comments and