Hard disk cache memory: what the disk buffer is, why it is needed, and how it works

Let me remind you that Seagate's SeaTools Enterprise utility lets the user manage the caching policy and, in particular, switch the latest Seagate SCSI drives between two caching models - Desktop Mode and Server Mode. This item in the SeaTools menu is called Performance Mode (PM) and takes two values: On (Desktop Mode) and Off (Server Mode). The difference between the modes is purely a matter of firmware: in Desktop Mode the hard drive cache is divided into a fixed number of segments of equal, constant size, which are then used to cache read and write accesses. In a separate menu item the user can even set the number of segments manually (manage cache segmentation): for example, instead of the default 32 segments, enter a different value (the size of each segment then changes in inverse proportion).

In the case of Server Mode, buffer (disk cache) segments can be dynamically (re)assigned, changing their size and number. The microprocessor (and firmware) of the disk itself dynamically optimizes the number (and capacity) of cache memory segments depending on the commands received for execution on the disk.

We then found that running the new Seagate Cheetah drives in Desktop Mode (with the default fixed segmentation into 32 segments) instead of the default Server Mode with dynamic segmentation can noticeably improve disk performance in a number of tasks more typical of a desktop computer or media server. This gain can sometimes reach 30-100% (!) depending on the task type and drive model, though on average it is around 30%, which, you must agree, is not bad either. Such tasks include routine desktop PC work (the WinBench, PCMark and H2bench tests), reading and copying files, and defragmentation. At the same time, in purely server applications drive performance hardly drops at all (and where it does, the loss is insignificant). However, we observed a noticeable gain from Desktop Mode only on the Cheetah 10K.7, while its higher-end sibling, the Cheetah 15K.4, proved almost indifferent to the caching mode in desktop applications.

To better understand how cache segmentation affects the performance of these drives in various applications, and which segmentation settings (how many segments) are more beneficial for particular tasks, I investigated the effect of the number of cache segments on the performance of the Seagate Cheetah 15K.4 over a wide range of values - from 4 to 128 segments (4, 8, 16, 32, 64 and 128). The results of that study are presented in this part of the review. I want to stress that these results are of interest beyond this particular model (or Seagate SCSI drives in general): cache segmentation and the choice of segment count are among the main areas of firmware optimization, including for desktop ATA drives, which are now also commonly equipped with an 8 MB buffer. The segmentation-dependent performance results described here are therefore also relevant to the desktop ATA drive industry. The test methodology was described in the first part, so let's move directly to the results.

Before discussing the results, however, let's take a closer look at how the cache segments of the Seagate Cheetah 15K.4 are organized, to better understand what we are dealing with. Of the eight megabytes of cache, 7077 KB are available for actual caching (the rest is reserved as a service area). This area is divided into logical segments (Mode Select Page 08h, byte 13), which are used for reading and writing data (for read-ahead from the platters and lazy writes to the disk surface). To access data on the platters, the segments use logical block addressing. Drives in this series support a maximum of 64 cache segments, and the length of each segment is a whole number of disk sectors. The available cache memory appears to be divided equally between the segments, so with, say, 32 segments the size of each is approximately 220 KB. With dynamic segmentation (PM=off), the drive can change the number of segments automatically depending on the command flow from the host.
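To make that arithmetic concrete, here is a minimal sketch (in Python, assuming the equal split described above and the 7077 KB usable figure) of how segment size shrinks as the segment count grows:

```python
# Sketch: equal split of the usable cache into N segments.
# Assumption: the drive divides its 7077 KB of usable cache evenly.
USABLE_CACHE_KB = 7077

def segment_size_kb(n_segments: int) -> float:
    """Approximate size of one cache segment, in KB."""
    return USABLE_CACHE_KB / n_segments

for n in (4, 8, 16, 32, 64):
    print(n, "segments ->", round(segment_size_kb(n), 1), "KB each")
```

With 32 segments this gives about 221 KB per segment, matching the "approximately 220 KB" above, and with 64 segments about 110.6 KB, matching the "less than 111 KB" figure cited later for the file-server pattern.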

Server and desktop applications demand different caching behavior from a drive, so it is difficult to find a single configuration that suits both. According to Seagate, desktop applications need the cache configured to respond quickly to repeated requests for many small blocks of data, without the delay of reading ahead into adjacent segments. Server applications, by contrast, need the cache configured for large volumes of sequential data in non-repeating requests; here the cache's ability to hold more data from contiguous regions during read-ahead matters more. The manufacturer therefore recommends 32 segments for Desktop Mode (early Cheetahs used 16), while in Server Mode the adaptive segment count starts at only three for the entire cache, though it can grow during operation. In our experiments on how the number of segments affects performance in various applications we will limit ourselves to the range from 4 to 64 segments, and as an extra test we will also "run" the disk with 128 segments set in SeaTools Enterprise (the program gives no warning that this segment count is not supported on this drive).

Physical parameters test results

There is no point in showing linear read speed graphs for different segment counts - they are identical. The measured Ultra320 SCSI interface speed, however, reveals an interesting picture: with 64 segments some programs begin to misreport the interface speed, understating it by more than an order of magnitude.

The measured average access time makes the differences between segment counts more noticeable: as segmentation decreases, the average read access time measured under Windows grows slightly, while noticeably better readings are observed in PM=off mode - although from these data it is hard to tell whether Server Mode uses very few segments or, on the contrary, very many. Possibly the drive in this case simply skips read prefetching to avoid extra delays.

The effectiveness of the firmware's lazy-write algorithms and write caching in the drive buffer can be judged by how much the OS-measured average write access time drops relative to reads when the drive's write-back caching is enabled (it was always enabled in our tests). For this we usually use the c't H2BenchW test, but this time we supplement the picture with IOmeter, using read and write patterns with 100% random access in 512-byte blocks at a queue depth of one. (Of course, you should not think that the average write access time in the two diagrams below reflects an actual physical characteristic of the drives! It is merely a parameter measured in software, from which one can judge the effectiveness of write caching in the disk buffer. The actual average write access time claimed by the manufacturer for the Cheetah 15K.4 is 4.0+2.0=6.0 ms.) Anticipating questions, I note that in this case (with lazy writing enabled) the drive reports successful completion of a write command (GOOD status) to the host as soon as the data reaches the cache memory, not the magnetic media. This is why the externally measured average write access time is lower than the corresponding figure for reads.
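The manufacturer's 4.0+2.0=6.0 ms figure is simply the average seek time plus the average rotational latency (half a revolution at 15,000 rpm), which can be checked in a couple of lines:

```python
# Sketch: decomposing the claimed average access time of a 15,000 rpm drive.
RPM = 15_000
avg_seek_ms = 4.0                 # manufacturer's average seek time (read)
rev_ms = 60_000 / RPM             # one full revolution: 4.0 ms
avg_rotational_ms = rev_ms / 2    # on average, half a revolution: 2.0 ms
avg_access_ms = avg_seek_ms + avg_rotational_ms
print(avg_access_ms)              # 6.0
```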

These tests show a clear dependence of random small-block write caching efficiency on the number of cache segments: the more segments, the better. With four segments efficiency drops sharply, and the average write access time rises almost to the read values. In "server mode" the number of segments in this case is evidently close to 32. The cases of 64 and "128" segments are completely identical, which confirms the software cap of 64 segments.

Interestingly, in the simplest random-access patterns with 512-byte blocks, IOmeter gives exactly the same write values as the c't H2BenchW test (to within literally hundredths of a millisecond), while for reads IOmeter shows a slightly inflated result across the entire segmentation range - a difference of 0.1-0.19 ms versus other random access time tests, perhaps due to some internal peculiarity of IOmeter (or to the 512-byte block size, instead of the zero-length transfer that such measurements ideally require). The IOmeter read results, however, practically coincide with those of the AIDA32 disk test.

Application performance

Let's move on to testing drive performance in applications. And first of all, let's try to find out how well the disks are optimized for multi-threaded operation. To do this, I traditionally use tests in the NBench 2.4 program, where 100 MB files are written to disk and read from it by several simultaneous threads.

This diagram lets us judge the effectiveness of the drives' multi-threaded lazy-write algorithms under real conditions of OS file operations (rather than synthetic ones, as in the average access time diagram). The lead of both Maxtor SCSI drives in multi-stream writing is beyond doubt, but for the Cheetah we can already see an optimum in the region between 8 and 16 segments, with the drive slowing down at both higher and lower values. For Server Mode the segment count is evidently 32 (to good accuracy :)), and "128" segments is actually 64.

With multi-threaded reading the situation for the Seagate drives clearly improves relative to Maxtor. As for segmentation, here, as with writing, there is an optimum, now closer to 8 segments (for writing it was closer to 16), and at very high segmentation (64) drive speed again drops significantly. It is gratifying that Server Mode here adapts to the host's workload, shifting its segmentation from 32 when writing to around 8 when reading.

Now let's see how the drives behave in the "old" but still popular Disk WinMark 99 tests from the WinBench 99 package. Let me remind you that we run these tests for two file systems, both at the "beginning" and at the "middle" (by volume) of the physical media, and the diagrams show averaged results. Of course, these tests are not a natural fit for SCSI drives, and by presenting their results we mostly pay tribute to the test itself and to those accustomed to judging disk speed by WinBench 99. As a consolation, these tests do show, with some degree of confidence, how these enterprise drives perform in tasks more typical of a desktop computer.

Clearly there is an optimal segmentation here too: with few segments the disk looks unimpressive, while with 32 segments it looks best (perhaps this is why Seagate's developers shifted the default Desktop Mode setting from 16 to 32 segments). In office (Business) tasks Server Mode's segmentation is not entirely optimal, but in professional (High-End) tasks the adaptive segmentation does very well, noticeably outperforming even the best "constant" segmentation. Apparently it changes during test execution in response to the command flow, and this yields a gain in overall performance.

Unfortunately, no such mid-test optimization is observed in the more recent trace-based comprehensive tests of "desktop" disk performance in the PCMark04 and c't H2BenchW packages.

On both (or rather, on all 10) "activity traces", the intelligence of Server Mode is noticeably inferior to the optimal constant segmentation, which is approximately 8 segments for PCMark04 and 16 segments for H2BenchW.

For both of these tests 4 cache segments turn out to be highly undesirable, as do 64, and it is difficult to say which of the two Server Mode gravitates toward in this case.

In contrast to these admittedly synthetic (though quite lifelike) tests, there is the completely "real" disk speed test with the Adobe Photoshop temporary file. Here the situation is far more transparent: the more segments, the better! And Server Mode almost caught on, settling on 32 segments (although 64 would have been slightly better).

Tests in Intel IOmeter

Let's move on to tasks that are more typical for SCSI drive usage profiles - the operation of various servers (DataBase, File Server, Web Server) and Workstation according to the corresponding patterns in the Intel IOmeter program version 2003.5.10.

Maxtor is the most successful at simulating a database server, while for Seagate it is most profitable to use Server Mode, although in essence the latter behaves very much like 32 constant segments (about 220 KB each). Lower or higher segmentation is worse in this case. This pattern, however, is too simple in its request mix - let's see what happens in more complex patterns.

When simulating a file server, adaptive segmentation again leads, although 16 constant segments trail it only negligibly (32 segments are slightly worse here, though still quite respectable). With low segmentation, performance deteriorates at large command queues, and with segmentation that is too high (64) any queue depth is contraindicated - apparently in that case the segments are too small (under 111 KB, that is, only about 220 blocks on the media) to cache useful amounts of data effectively.

Finally, for the Web server we see an even more interesting picture: with a non-unit command queue, Server Mode is equivalent to any segmentation level except 64, while at a unit queue it is slightly better than all of them.

Geometric averaging of the server loads above across patterns and request queues (without weighting coefficients) shows that adaptive segmentation is best for such tasks, although 32 constant segments lag only slightly and 16 segments also look decent overall. In general, Seagate's choice is quite understandable.
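For reference, the unweighted geometric averaging used to combine these scores can be sketched as follows (the input values below are purely hypothetical, not measured data):

```python
import math

def geometric_mean(values):
    """Unweighted geometric mean, as used to average the server patterns."""
    return math.prod(values) ** (1 / len(values))

# Hypothetical per-pattern scores (illustration only):
print(geometric_mean([2, 8]))          # 4.0
print(geometric_mean([100, 120, 90]))
```

The geometric mean is preferred over the arithmetic mean here because it keeps one unusually high pattern score from dominating the combined result.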

As for the “workstation” pattern, Server Mode is clearly the best here.

And the optimum for constant segmentation is at the level of 16 segments.

Now for our own IOmeter patterns, which are closer in purpose to desktop PCs, though they are indicative for enterprise drives too: even in "deeply professional" systems, hard drives spend the lion's share of their time reading and writing large and small files, and occasionally copying them. And since the access pattern in these IOmeter tests (random addresses across the entire disk volume) is more typical of server-class systems, these patterns are all the more relevant for the drives under study.

Reading large files is again better in Server Mode, with the exception of an incomprehensible failure at QD=4. However, a small number of large segments is clearly preferable for the disk in these operations (which, in principle, is predictable and is in excellent agreement with the results for multi-threaded file reading, see above).

Random writes of large files, on the contrary, are still "too tough" for Server Mode's intelligence; here constant segmentation at 8-16 segments is more profitable, just as with multi-threaded file writing (see above). Note separately that high cache segmentation - 64 segments - is extremely harmful in these operations. It is, however, useful for small-file reads with a deep request queue:

I suspect this is the case Server Mode tunes its adaptive segmentation for - their graphs are very similar.

At the same time, when writing small files to random addresses, 64 segments fail again, and Server Mode here falls behind constant segmentation at 8-16 segments per cache, although its efforts to pick optimal settings are visible (it stumbled only at 32-64 segments with a queue depth of 64 ;)).

Copying large files is an outright failure for Server Mode! Here constant segmentation at 16 is clearly more profitable (it is the optimum, since 8 and 32 are worse at a queue depth of 4).

As for copying small files, 8, 16 and 32 segments are nearly equivalent here, outperforming 64 segments (oddly enough), while Server Mode behaves somewhat erratically.

Geometric averaging of the results for random reading, writing and copying of large and small files shows that the best average result comes from constant segmentation at only 4 segments per cache (that is, segments larger than 1.5 MB!), with 8 and 16 segments roughly equivalent and barely trailing, while 64 segments are clearly contraindicated. Adaptive Server Mode was on average only marginally behind constant segmentation - a one percent loss can hardly be called noticeable.

It remains to note that in the defragmentation simulation we see approximate parity among all constant segmentation levels and a slight advantage (the same 1%) for Server Mode.

And in the pattern of streaming read-writes in large and small blocks, it is slightly more advantageous to use a small number of segments, although again the differences in the performance of cache memory configurations here, oddly enough, are homeopathic.

Conclusions

Having carried out, in this second part of our review, a more detailed study of how cache segmentation affects the performance of the Seagate Cheetah 15K.4 in various tasks, I would note that the developers did not name the caching modes idly: in Server Mode the cache segmentation really is often adapted to the task at hand, and this sometimes yields very good results - especially in "heavy" tasks, including the server patterns in Intel IOmeter, the High-End Disk WinMark 99 test, and random reading of small blocks across the whole disk. However, the segmentation level chosen by Server Mode often turns out to be suboptimal (the criteria for analyzing the host command flow clearly need further work), and then Desktop Mode pulls ahead with fixed segmentation at 8, 16 or 32 segments per cache. Moreover, depending on the task, sometimes 16 or 32 segments are more profitable, and sometimes only 8 or even 4! The latter group includes multi-threaded reads and writes (both random and sequential), trace-based tests like PCMark04, and streaming tasks with simultaneous reading and writing. Yet the random-write "synthetics" clearly show that lazy-write efficiency (to random addresses) falls substantially as the number of segments decreases. Two tendencies are thus pulling in opposite directions - which is why, on average, 16 or 32 segments per 8 MB buffer is the most effective choice. If the buffer size were doubled, one may predict that it would still pay to keep the segment count at 16-32 while proportionally increasing each segment's capacity, which could raise average drive performance appreciably.
Apparently even 64-segment caching, currently ineffective in most tasks, could prove quite useful with a doubled buffer, while 4 or even 8 segments would then become ineffective. These conclusions, however, also depend heavily on the block sizes the operating system and applications use when talking to the drive, and on the file sizes involved; as the environment changes, the optimal segmentation may well shift one way or the other. We wish Seagate success in refining the "intelligence" of Server Mode, which, to some extent, can smooth out this system and workload dependence by learning to select the most suitable segmentation from the host command flow.

Today the most common storage device is the magnetic hard drive. It has a certain amount of memory intended for storing the main data, and also a buffer memory whose purpose is to hold intermediate data. Professionals refer to this hard disk buffer as "cache memory", or simply the "cache". Let's figure out why the HDD buffer is needed, what it affects, and how large it is.

The hard disk buffer helps the operating system temporarily store data that was read from the main memory of the hard drive, but was not transferred for processing. The need for transit storage is due to the fact that the speed of reading information from the HDD drive and the OS throughput vary significantly. Therefore, the computer needs to temporarily store data in a “cache” and only then use it for its intended purpose.

Contrary to what inexperienced users sometimes believe, the hard disk buffer is not a set of special sectors. It is a dedicated memory chip on the drive's internal circuit board. Such chips operate much faster than the drive mechanism itself, and as a result they yield a performance gain of several percent observed during operation.

It is worth noting that the size of “cache memory” depends on the specific disk model. Previously, it was about 8 megabytes, and this figure was considered satisfactory. However, with the development of technology, manufacturers were able to produce chips with larger amounts of memory. Therefore, most modern hard drives have a buffer whose size varies from 32 to 128 megabytes. Of course, the largest “cache” is installed in expensive models.

What impact does a hard drive buffer have on performance?

Now we’ll tell you why the size of the hard drive buffer affects computer performance. Theoretically, the more information is in the “cache memory”, the less often the operating system will access the hard drive. This is especially true for a work scenario where a potential user is processing a large number of small files. They simply move to the hard drive buffer and wait there for their turn.
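As a rough illustration of this reasoning (a toy model with an assumed whole-file caching policy, not the actual drive firmware), small repeated requests hit the buffer, while oversized requests bypass it entirely:

```python
# Toy model: a fixed-capacity buffer caches whole files; files larger
# than the buffer bypass it (an assumption made for illustration).
CACHE_CAPACITY = 8 * 1024 * 1024   # an 8 MB buffer

cache = {}          # file name -> size of the cached copy
cache_used = 0      # bytes currently occupied in the buffer

def read_file(name: str, size: int) -> str:
    """Return 'hit', 'miss' or 'bypass' for a simulated read."""
    global cache_used
    if size > CACHE_CAPACITY:
        return "bypass"                      # large file: cache is useless
    if name in cache:
        return "hit"                         # served from the buffer
    # Evict arbitrary entries until the new file fits (toy policy).
    while cache_used + size > CACHE_CAPACITY:
        _evicted_name, evicted_size = cache.popitem()
        cache_used -= evicted_size
    cache[name] = size
    cache_used += size
    return "miss"

print(read_file("small.txt", 4096))          # miss (first access)
print(read_file("small.txt", 4096))          # hit  (now buffered)
print(read_file("video.avi", 700 * 2**20))   # bypass (larger than buffer)
```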

However, if the PC is used to process large files, the "cache" loses its relevance: such data simply cannot fit into the chip's modest capacity. As a result the user will notice no performance gain, since the buffer goes practically unused. This happens, for example, when the system runs video editing programs and similar software.

Thus, when buying a new hard drive, it is worth paying attention to cache size only if you plan to work constantly with small files - then you really will notice an improvement in your computer's responsiveness. If the PC is used for ordinary everyday tasks or for processing large files, the buffer size need not concern you.

Cache memory is ultra-fast memory that offers higher performance than RAM.

Cache memory complements the functional role of RAM.
When a computer runs, all calculations happen in the processor, while the data for those calculations and their results reside in RAM. The processor's speed is several times higher than the speed of data exchange with RAM; given that between two processor operations one or more accesses to this slower memory may be needed, the processor has to sit idle from time to time, and the overall speed of the computer drops.

The cache is managed by a special controller which, by analyzing the program being executed, tries to predict what data and instructions the processor is most likely to need in the near future and preloads them into the cache: the cache controller loads the necessary data from RAM into the cache memory and writes back, when necessary, data the processor has modified.

The processor's cache memory performs approximately the same function as RAM. Only the cache is memory built into the processor and is therefore faster than RAM, partly due to its position. After all, the communication lines running along the motherboard and the connector have a detrimental effect on speed. The cache of a modern personal computer is located directly on the processor, thanks to which it has been possible to shorten communication lines and improve their parameters.

Cache memory is used by the processor to store information. It buffers the most frequently used data, due to which the time of the next access to it is significantly reduced.

All modern processors have a cache (in English - cache) - an array of ultra-fast RAM, which is a buffer between the relatively slow system memory controller and the processor. This buffer stores blocks of data that the CPU is currently working with, thereby significantly reducing the number of processor calls to the extremely slow (compared to the processor speed) system memory.

This significantly increases the overall performance of the processor.
Moreover, in modern processors the cache is no longer a single memory array, as it once was, but is split into several levels. The fastest, but comparatively small, first-level cache (L1), which the processor core works with directly, is most often divided into two halves: an instruction cache and a data cache. The L1 cache interacts with the second-level cache, L2, which is as a rule much larger and unified, with no split between instructions and data.

Some desktop processors, following the example of server CPUs, also acquire a third-level L3 cache. L3 is usually larger still, though somewhat slower than L2 (because the bus between L2 and L3 is narrower than the one between L1 and L2), but its speed is in any case incomparably higher than that of system memory.

There are two cache organizations: exclusive and inclusive. In an exclusive cache the contents of all levels are strictly disjoint: each level holds only unique data. In an inclusive (non-exclusive) cache, data may be duplicated across levels. It is hard to say which scheme is more correct - each has its pros and cons. The exclusive scheme is used in AMD processors, the inclusive one in Intel processors.

Exclusive cache memory

Exclusive cache memory assumes the uniqueness of the information located in L1 and L2.
When reading information from RAM into the cache, the information is immediately entered into L1. When L1 is full, information is transferred from L1 to L2.
If, when the processor reads information from L1, the necessary information is not found, then it is looked for in L2. If the necessary information is found in L2, then the first and second level caches exchange lines with each other (the “oldest” line from L1 is placed in L2, and the required line from L2 is written in its place). If the necessary information is not found in L2, then the access goes to the RAM.
The exclusive architecture is used in systems where the difference between the volumes of the first and second level caches is relatively small.
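The exchange scheme described above can be sketched as a toy model (capacities and line names are illustrative assumptions; real caches track lines by address and associativity, which is omitted here):

```python
from collections import deque

# Sketch of an exclusive two-level cache: a line lives in L1 *or* L2,
# never in both at once.
class ExclusiveCache:
    def __init__(self, l1_size=2, l2_size=4):
        self.l1 = deque()              # most recently used at the right
        self.l2 = deque()
        self.l1_size, self.l2_size = l1_size, l2_size

    def access(self, line):
        if line in self.l1:            # L1 hit
            return "L1"
        if line in self.l2:            # L2 hit: swap with the oldest L1 line
            self.l2.remove(line)
            if self.l1:
                self.l2.append(self.l1.popleft())
            self.l1.append(line)
            return "L2"
        # Miss: fetch from RAM straight into L1, demoting the oldest line.
        if len(self.l1) >= self.l1_size:
            self.l2.append(self.l1.popleft())
            if len(self.l2) > self.l2_size:
                self.l2.popleft()      # evicted from the hierarchy entirely
        self.l1.append(line)
        return "RAM"

c = ExclusiveCache()
print(c.access("A"))   # RAM
print(c.access("B"))   # RAM
print(c.access("C"))   # RAM ("A" is demoted to L2)
print(c.access("A"))   # L2  ("A" swaps back into L1)
```

Note that after every access each line is present in at most one level, which is exactly what lets the exclusive scheme treat L1+L2 as one combined capacity.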

Inclusive cache

An inclusive architecture involves duplication of information found in L1 and L2.
The scheme of work is as follows. When copying information from RAM to the cache, two copies are made, one copy is stored in L2, the other copy is stored in L1. When L1 is completely full, the information is replaced according to the principle of removing the “oldest data” - LRU (Least-Recently Used). The same thing happens with the second level cache, but since its volume is larger, the information is stored in it longer.

When the processor reads information from the cache, it is taken from L1. If the necessary information is not in the first level cache, then it is looked for in L2. If the necessary information is found in the second level cache, it is duplicated in L1 (using the LRU principle), and then transferred to the processor. If the necessary information is not found in the second level cache, then it is read from RAM.
The inclusive architecture is used in those systems where the difference in the size of the first and second level caches is large.
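A toy model of this scheme can look as follows (capacities are illustrative assumptions; the LRU bookkeeping uses an ordered dictionary):

```python
from collections import OrderedDict

# Sketch of an inclusive two-level cache: every L1 line is duplicated
# in L2; both levels evict their least-recently-used (LRU) line.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()

    def touch(self, line):
        """Record an access; return True if the line was already present."""
        present = line in self.lines
        if present:
            self.lines.move_to_end(line)        # mark most recently used
        else:
            self.lines[line] = True
            if len(self.lines) > self.capacity:
                self.lines.popitem(last=False)  # evict the LRU line
        return present

class InclusiveCache:
    def __init__(self, l1_capacity=2, l2_capacity=8):
        self.l1 = LRUCache(l1_capacity)
        self.l2 = LRUCache(l2_capacity)

    def access(self, line):
        if self.l1.touch(line):        # L1 hit (also refreshes LRU order)
            self.l2.touch(line)        # keep the duplicate in L2 "warm"
            return "L1"
        if self.l2.touch(line):        # L2 hit: line is copied back into L1
            return "L2"
        return "RAM"                   # miss: both levels now hold a copy

c = InclusiveCache()
print(c.access("A"))   # RAM (copied into both L1 and L2)
print(c.access("A"))   # L1
print(c.access("B"))   # RAM
print(c.access("C"))   # RAM (L1 evicts "A"; L2 still holds it)
print(c.access("A"))   # L2  (duplicated back into L1)
```

Because L2 keeps a copy of everything L1 has seen recently, a line evicted from the small L1 can still be recovered from L2 without going to RAM, which is the whole point of the inclusive arrangement.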

However, cache memory is ineffective when working with large amounts of data (video, audio, graphics, archives). Such files simply do not fit in the cache, so the processor must constantly access RAM, or even the HDD, and all the advantages disappear. This is why budget processors with a reduced cache (for example, the Intel Celeron) remain popular: in multimedia tasks involving large data volumes, performance depends little on cache size, even despite the Celeron's reduced bus frequency.

Hard drive cache

As a rule, all modern hard drives have their own RAM, called cache memory or simply cache. Hard drive manufacturers often refer to this memory as buffer memory. The size and structure of the cache differ significantly among manufacturers and for different models of hard drives.

Cache memory acts as a buffer for storing intermediate data that has already been read from the hard drive, but has not yet been transferred for further processing, as well as for storing data that the system accesses quite often. The need for transit storage is caused by the difference between the speed of reading data from the hard drive and the system throughput.

Typically the cache serves both writes and reads, but on SCSI drives write caching is usually disabled by default and sometimes has to be enabled explicitly. Counterintuitive as it may sound given the above, cache size by itself is not decisive for performance.

What matters more is how data exchange with the cache is organized, since that determines disk performance as a whole.
Overall performance is also affected by the control electronics' algorithms, which are meant to avoid inefficiencies in buffer use (holding stale data, poor segmentation, and so on).

In theory, the larger the cache, the higher the chance that the needed data is already in the buffer and the drive mechanism need not be "disturbed". In practice, however, a drive with a large cache is often barely faster than one with a smaller cache - notably when working with large files.

The hard drive cache is temporary data storage.
If you have a modern hard drive, the cache matters less than it used to.
Below you will find more details on the role the cache plays in hard drives and what cache size makes for fast computer operation.

What is cache used for?

The hard drive cache stores frequently used data in a specially designated area, so the cache size determines how much data it can hold. A large cache can raise hard drive performance significantly, because frequently used data loaded into the cache does not require a physical read when requested.
A physical read is a direct access to the sectors of the hard drive. It takes a relatively long time, measured in milliseconds. The cache, by contrast, returns requested data roughly 100 times faster than a physical access would. It also lets the drive keep working even when the host bus is busy.
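A drive's firmware decides which sectors stay in the buffer, and the exact policy is proprietary. Still, the principle of keeping frequently requested data close at hand can be sketched with a toy LRU (least-recently-used) buffer; this is purely illustrative and much simpler than real firmware:

```python
from collections import OrderedDict

class ToyDiskCache:
    """Toy LRU buffer: frequently requested sectors stay cached."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = OrderedDict()  # sector number -> data
        self.hits = self.misses = 0

    def read(self, sector: int) -> str:
        if sector in self.buf:
            self.hits += 1
            self.buf.move_to_end(sector)   # mark as recently used
            return self.buf[sector]
        self.misses += 1                   # would be a slow physical read
        data = f"data@{sector}"
        self.buf[sector] = data
        if len(self.buf) > self.capacity:
            self.buf.popitem(last=False)   # evict least recently used
        return data

cache = ToyDiskCache(capacity=2)
for s in (1, 2, 1, 1, 3, 2):   # sector 1 is requested repeatedly
    cache.read(s)
print(cache.hits, cache.misses)  # prints: 2 4
```

The repeated reads of sector 1 are served from the buffer; only first-time and evicted sectors cost a "physical" access.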

Along with the cache, do not forget the other characteristics of a hard drive; sometimes the cache size can safely be neglected. If you compare two hard drives of the same capacity with different cache sizes, say 8 and 16 MB, the larger cache is worth choosing only if the price difference is small, roughly $7-$12. Otherwise there is no point in overpaying for the larger cache.

The cache is worth paying attention to if you are buying a gaming computer and every detail matters to you; in that case, also look at the spindle speed.

Summarizing all of the above

The advantage of the cache is that data is served quickly, whereas with a physical access to a particular sector, time passes while the disk head finds the required piece of information and starts reading. In addition, hard drives with large caches noticeably relieve the computer's processor: requests served from the cache need no physical access, so the processor's involvement is minimal.

The hard drive cache can fairly be called an accelerator: its buffering really does let the drive work faster and more efficiently. However, with the rapid development of the technology, the cache has lost much of its former importance, since most modern models use an 8 or 16 MB cache, which is quite enough for the drive to operate optimally.

Today there are hard drives with an even larger 32 MB cache, but as we said, it is only worth paying extra if the difference in price matches the difference in performance.

Which hard drive to choose? A hard drive needs to be chosen carefully so that it is fast, quiet and reliable. Unfortunately, before you know it, the disk is already filled to capacity. True, there are users who even after several years still have enough free space to keep working for another ten.

But that is usually the exception. Many people are catastrophically short of hard drive space, and sometimes it seems to vanish into nowhere. A computer today is not just a typewriter: many users run serious projects on it and earn good money with it. And since the hard drive stores all that useful information, you should not buy one carelessly.

It all depends on what you will do on your computer. Ideally your computer should have not one hard drive but two or even three (read how to install an additional disk). The operating system lives on the main drive, and your data is better stored on the others.

Usually there is a catastrophic lack of space on the hard drive. Don't think you're the only one. Now I’m even surprised how 10 GB was ever enough for me. The most annoying thing is that all the files are necessary and expensive, and you don’t want to delete anything at all.

Every device has its parameters and resources, and the computer hard drive is no exception. If you simply walk into a store and ask for a drive, you may well be advised not what you need but what is more expensive. Why overpay if you can get the same thing for less and keep the difference?

Where else can you store your data besides the hard disk?

Previously, you could burn your data to a blank disc (CD or DVD) and sleep peacefully. Nowadays everyone has so much information on their computers that it is no longer feasible to copy it all to discs. At best, you can back up the most important things.

And even that is not very convenient: you are not going to carry around a whole case of CDs or DVDs and feed them into the drive one after another to find the information you need.

You can buy a small external drive of large capacity and carry it with you. But again, there is no guarantee that it will not fail some day, and then it's goodbye to your valuable information. That happened to me recently. But that's a story for another time.

External hard drive 2.5″

Hard drive capacity

The operating system does not need much disk space; since the smallest drives on sale now are 500 GB, that is more than enough for it. But the second drive, if you constantly download things from the Internet, should be as large as you can afford.

Spindle speed

For the operating system you need a disk with a good spindle speed. At low speeds your operating system will be sluggish no matter how much RAM you have or how fast the processor is.

Everything has to work together as a whole; otherwise you are throwing money away. You can't skimp on the hard drive!

Modern 2.5″ and 3.5″ hard drives (HDD) have a spindle speed of 5400 or 7200 RPM. The higher the spindle speed, the faster the disk.
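One reason spindle speed matters is rotational latency: on average the platter must turn half a revolution before the needed sector passes under the head. That average can be computed directly from the RPM figure:

```python
# Average rotational latency: half a revolution at the given spindle speed.
def avg_rotational_latency_ms(rpm: int) -> float:
    seconds_per_rev = 60.0 / rpm
    return (seconds_per_rev / 2) * 1000  # half a revolution, in ms

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>5} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
```

So going from 5400 to 7200 RPM cuts the average rotational wait from about 5.6 ms to about 4.2 ms, before any seek time is even counted.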

For a home computer, the speed of the hard drive on which the operating system, graphics programs and your games will be installed must be at least 7200 rpm.

If you are buying a drive for the office, then 5400 rpm is enough. The same speed is also suitable for data storage, i.e. a second hard drive, especially since it is cheaper.

There are drives with a SAS or SCSI interface, with speeds of 10,000 and 15,000 rpm, but they are used for servers and are not cheap.

SCSI hard drive

But if you have an old computer and an IDE hard drive, then there is not much choice, and you can forget about good disk spindle speed. And finding such a disk is already problematic.

How to determine whether a hard drive is old or not

If your disk is connected with a wide ribbon cable, it has an IDE interface. These are no longer used in new computers, and such disks are slow.

Cable for connecting IDE disk

New computers are equipped with hard drives with SATA, SATA 2 and SATA 3 interfaces.

Cable for connecting a SATA drive

The interface transfer speed of a first-generation SATA drive (150 MB/s) is about 50% higher than that of an IDE (ATA-100) drive.

SATA, SATA 2 and SATA 3 drives are interchangeable, but the data transfer speed of SATA 3 (6 Gbit/s) is far higher than that of the original SATA (1.5 Gbit/s).

Note that a cable made for SATA or SATA2 is not ideal for a SATA3 drive. The connectors are the same and it will still work, but the signal requirements differ, so you may not get full speed. A SATA3 cable is thicker and usually black.

It is also important to know which SATA revision your motherboard supports, otherwise the drive will not run at full speed; this, however, is not critical. But if the motherboard is very old, it may not support SATA drives at all, i.e. it will simply have no connector for them.

Buffer size or cache size

The next thing to look at when choosing a disk is the cache (buffer) memory size. Caches come in 8, 16, 32, 64 and 128 MB. The larger the cache, the better the data processing speed.

For data storage, 16 MB is suitable, and for the system it is better to buy from 32 MB. If you are involved in graphics, then for programs such as Photoshop and AutoCAD it is better to take a hard drive with cache memory - 64 or 128 MB, especially since the difference in price between them is not significant.

Average linear read speed

Linear read speed is the speed of continuous reading of data from the surface of the platters, and it is the main characteristic reflecting the real speed of an HDD. It is measured in megabytes per second (MB/s).

Modern HDD drives with a SATA interface have an average linear read speed of 100 to 140 MB/s.

The linear reading speed of HDD disks depends on the density of data recording on the magnetic surface of the platters and the quality of the disk mechanics.
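To make the 100-140 MB/s range tangible, here is how long a large sequential read takes at each end of it (the 4 GB file size is just an example):

```python
# Time to read a large file sequentially at a given linear read speed.
def read_time_s(file_mb: float, speed_mb_s: float) -> float:
    return file_mb / speed_mb_s

# Example: a 4 GB (4096 MB) file at typical SATA HDD speeds.
for speed in (100, 140):
    print(f"4096 MB file at {speed} MB/s: {read_time_s(4096, speed):.1f} s")
```

A faster-reading drive turns a 41-second copy into a 29-second one, which is exactly the kind of difference you feel when moving video files around.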

Access time

This is the time it takes the disk to find the requested data after the operating system or a program asks for it. It is measured in milliseconds (ms). This parameter strongly affects disk performance when working with small files, but has little effect with large ones.

Hard drives have access times from 12 to 18 ms. A good indicator is an access time of 13-14 ms (depending on the quality (accuracy) of the disk mechanics).
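Why access time dominates small-file work is easy to show: the fixed positioning cost is paid before any data flows, so the effective transfer rate collapses for tiny files. The figures below are illustrative typical values, not a specific drive's specs:

```python
# Effective transfer rate including the fixed access cost per request.
ACCESS_MS = 13.0     # assumed access time (seek + rotational latency)
LINEAR_MB_S = 120.0  # assumed sequential read speed

def effective_mb_s(file_mb: float) -> float:
    transfer_s = file_mb / LINEAR_MB_S          # time actually moving data
    total_s = ACCESS_MS / 1000 + transfer_s     # plus one positioning delay
    return file_mb / total_s

for size in (0.004, 1.0, 1024.0):   # 4 KB, 1 MB, 1 GB
    print(f"{size:>8} MB file: {effective_mb_s(size):.1f} MB/s effective")
```

For a 4 KB file the drive delivers well under 1 MB/s because almost all the time is spent positioning the head, while a 1 GB file gets nearly the full linear speed.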

There are also SSDs on sale now, built entirely from memory chips. They are still very expensive, so they are not meant for bulk data storage; they are best used for the system and programs. SSDs have no spindle or moving parts, so they are completely silent, run cool and are very fast.

And most importantly: try not to install hard drives right next to each other. It is better to leave more space around them, because they get quite hot during operation and can fail from overheating.

Better still, especially in summer, cool them by opening the computer's side panel and pointing a fan at them. Overheating is as destructive for a hard drive as it is for a video card or processor.

Every disk manufacturer has more expensive and cheaper models. That does not mean anyone is cutting corners: one line is aimed at budget buyers, the other at those willing to pay more. Both kinds of drives are built to last, but their parts are made from different materials with different wear lives.

Hard drive manufacturers

The main manufacturers of hard disk drives (HDD) are:

Fujitsu – a Japanese company, previously famous for the high quality of its products; it is now represented by only a few models and is not very popular.

Hitachi – a Japanese company that was and remains distinguished by the consistent quality of its hard drives. Buying a Hitachi you can hardly go wrong: good quality at an affordable price.

Samsung – a Korean company. Today Samsung produces some of the fastest and highest-quality HDDs. The price may be a little above the competition, but it is worth it.

Seagate – an American company, a pioneer in the field. Unfortunately, the quality of its hard drives these days leaves much to be desired.

Toshiba – a Japanese company, currently represented by only a few models on our market, which can make servicing its drives a problem.

Western Digital (WD) – an American company specializing in hard drives. Its recent drives do not stand out for their characteristics and are quite noisy.

It is better to choose between Samsung and Hitachi, as they are the highest-quality, fastest and most stable.

So, the main characteristics of hard drives:

  • Spindle speed
  • HDD capacity
  • Cache size
  • Average linear read speed
  • Noise level
  • Manufacturer

Now you know which hard drive to choose. Unfortunately, stores do not always have a good selection, so I prefer to order online; in big cities the choice is wider. So do not be lazy, and study the main characteristics before you buy.