Technologies for recording data on hard drives. Hard drives - the evolution of recording technology

Progress in hard drives themselves - leaving aside the growing intelligence of their controllers and the transition to object-based solutions at the system level - is taking place in the following directions: drives are becoming more compact, while their capacity and data access speed are increasing. Reducing the geometric dimensions will become possible thanks to the transition to perpendicular recording and to the Serial Attached SCSI interface.

Perpendicular recording

One of the solutions that will ensure the continued annual doubling of recording density and open up opportunities for improving disk physics over the next decade is the transition from longitudinal (parallel) to perpendicular recording. The difference between the two recording methods lies in how the magnetic domains on the disk platter are oriented: horizontally or vertically.

In either case, the carriers of binary digits are magnetic domains grouped into so-called "grains". The main characteristic of a recording method is its areal density, the product of the linear density, defined by the number of bits per inch of track (Bits Per Inch, BPI), and the track density, the number of tracks per inch (Tracks Per Inch, TPI). At a given rotation speed, to quadruple the areal density it is sufficient to double both TPI and BPI. In reality the growth in density is not nearly so linear: the reduction in grain size is affected by the increase in disk rotation speed needed to raise the data exchange rate, by the falling signal-to-noise ratio, and by other physical factors. The observed quasi-linear growth remained possible until the technology came close to the superparamagnetic limit, which makes further increases in density impossible by traditional methods. The essence of this limitation is that miniaturization of the grains sooner or later causes the medium to lose stability and turn into a chaotic multitude of magnetized particles randomly changing their direction. The superparamagnetic effect sets in when the energy required to change a magnetic moment becomes comparable to the thermal energy of the environment.
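
As a rough illustration of this relationship, here is a small Python sketch (the BPI and TPI figures are arbitrary example values, not data for any real drive) showing that areal density is the product of linear density and track density, so doubling both quadruples it.

    def areal_density(bpi, tpi):
        # Areal density (bits per square inch) is the product of linear
        # density (bits per inch of track) and track density (tracks per inch).
        return bpi * tpi

    # Arbitrary example values, not figures for any real drive.
    bpi, tpi = 800_000, 100_000
    base = areal_density(bpi, tpi)
    quadrupled = areal_density(2 * bpi, 2 * tpi)
    print(f"base density:        {base:.1e} bits per square inch")
    print(f"doubled BPI and TPI: {quadrupled:.1e} ({quadrupled / base:.0f}x)")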

Modern longitudinal recording technologies are based on two effects: the giant magnetoresistance effect (GMR) and the tunnel magnetoresistance effect (TMR). Both GMR and TMR are tools for improving longitudinal recording, but their capabilities have approached the so-called superparamagnetic limit. To maintain the current pace of density growth, a transition to perpendicular recording is needed, in which the domains are oriented not along the surface of the disk platter but orthogonally to it (Fig. 1) and can therefore be packed more densely.

The simplicity of this solution is only apparent. In reality, replacing one type of recording with another involves overcoming serious technical problems. In particular, a lower flying height of the head is needed, the heads must be given a special design suited to perpendicular recording, and a magnetically soft material must be used as the underlayer. One way or another, according to forecasts, these difficulties will be overcome as early as 2007, and hard drives with perpendicular recording will begin to enter the market.

SCSI Serialization

The first version of the parallel SCSI interface was introduced by Shugart Associates in 1979 under the name SASI (Shugart Associates System Interface). After development in collaboration with NCR Corporation (which today exists as an independent company called Engenio), it was adopted as an ANSI standard in 1986. As with any parallel bus, SCSI throughput equals the bus clock frequency multiplied by the amount of data transferred per clock cycle. In the first versions the bus was one byte wide and ran at 5 MHz, so the throughput was 5 MB/s. In the most advanced version, Ultra320 SCSI, a 2-byte portion is transferred at a frequency of 80 MHz; taking into account the DDR scheme, which doubles throughput, the data transfer speed reaches 320 MB/s.
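
The arithmetic behind these figures is simple enough to write down; a minimal sketch using only the numbers quoted in the text (1 byte at 5 MHz for the earliest SCSI, 2 bytes at 80 MHz with double-data-rate transfers for Ultra320).

    def parallel_bus_mb_s(clock_mhz, width_bytes, ddr=False):
        # Peak throughput of a parallel bus: clock frequency times bus width,
        # doubled when data is transferred on both clock edges (DDR).
        return clock_mhz * width_bytes * (2 if ddr else 1)

    print(parallel_bus_mb_s(5, 1))             # early SCSI:  5 MB/s
    print(parallel_bus_mb_s(80, 2, ddr=True))  # Ultra320:  320 MB/s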

By 2001, after two decades of constant improvement of the parallel version of SCSI, it became obvious that the resources of this interface were exhausted. Realizing the impending impasse, a group of hard drive manufacturers then organized a brainstorming session with the participation of leading industry experts. The result was a set of proposals for a new interface, Serial Attached SCSI. These proposals were subsequently submitted to the ANSI INCITS T10 technical committee, where by 2003 they were brought to the stage of an ANSI standard. The decision was not unexpected: the ATA interface had previously been serialized in the form of SATA, and recognition of that interface came like an explosion; the USB interface gained acceptance no less quickly.

What caused this wave of rapid replacement of parallel interfaces by serial ones? What lies behind it, given that the replacement of the old serial interfaces by parallel ones is still within living memory? Oddly enough, it took years to realize that parallelism was a temporary solution. At first it seemed that replacing one wire with several increased the speed by a corresponding factor, and the appearance of ribbon cables was perceived as a great technical achievement. However, although the efficiency gain of parallel data transfer seems to lie on the surface, the technology has a serious inherent flaw: it exacerbates the problem of synchronization. The transfer rate can be raised up to a certain limit by increasing the bus frequency, but beyond that limit the cost of synchronization exceeds the benefit of parallelism. In essence, a parallel bus works only during the short moments when the external clock pulses arrive; the rest of the time it simply sits idle. A serial channel, by definition, carries separating marks inside the transmitted data; the data flows in a single stream, so the channel's bandwidth can be used in full. Roughly the same problem afflicts modern processors; the harmfulness of synchronization was discovered when clock frequencies came to be measured in megahertz. Fortunately, the task of replacing parallel interfaces with serial ones is much simpler than replacing synchronous processors with asynchronous ones. As a result, today SATA II drives with capacities of up to 500 GB are capable of transferring data at speeds reaching 3 Gbit/s.
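
For the serial case, the usable throughput is lower than the line rate because of the embedded separating marks; SATA and SAS use 8b/10b line coding, in which every data byte travels as a 10-bit symbol. A small sketch of that conversion (the drive capacity plays no role in it):

    def serial_payload_mb_s(line_rate_gbit_s):
        # With 8b/10b coding each byte is sent as a 10-bit symbol,
        # so payload bandwidth is the line rate divided by 10.
        return line_rate_gbit_s * 1e9 / 10 / 1e6

    print(serial_payload_mb_s(1.5))  # SATA I:  ~150 MB/s
    print(serial_payload_mb_s(3.0))  # SATA II: ~300 MB/s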

The SAS topology is original: it can be thought of as a network, but one without switching, whose operation is supported by disk and host controllers (targets and initiators) as well as by special expander devices (fanout and edge expanders). Together they form a SAS domain; the term SAS Domain refers to a network of devices and a space of World Wide Names (WWN), unique identifiers. A SAS domain can contain up to 16,256 devices.

At the moment the data transfer speed of the SAS interface is 3 Gbit/s; in the near future this figure promises to grow to 6 Gbit/s, and by 2010 to 10 Gbit/s. Another distinctive feature of SAS is that its disk connectors come in several designs: the SATA-compatible SFF-8482, the InfiniBand-compatible SFF-8470, and the 10 Gbit/s SFF-8088. SAS supports the Serial SCSI Protocol (SSP) and Serial ATA Tunneling Protocol (STP) transport protocols; given compatible connectors, this makes it possible to combine disks of different types in one storage system.

Far Frontiers

Further densification of perpendicular recording will be possible with the introduction of HAMR (Heat Assisted Magnetic Recording) technology. As the name implies, this technology involves auxiliary heating, performed by a laser (within about 1 picosecond the recording area is heated to roughly 100 °C). According to various estimates, recording density may grow by one or two orders of magnitude; there is reason to hope that by 2010 it will be possible to reach 5 Tbit per square inch.

A radical increase in density will become possible by shrinking the storage area of one bit of data to a single domain; in this case the particles are arranged into a bit-patterned medium (Bit Patterned Media). Theoretically there are two alternative ways to achieve this: one is based on special lithography methods applied to the disk surface, the other on creating an appropriately structured material.

The first path is being taken by researchers from the IBM laboratory in Almaden together with colleagues from Stanford University. They found a way to apply a magnetic mask to the surface of the disk. To do this, the polymer is printed under very high pressure onto a silicon oxide base and then processed in a complex manner. The second path was chosen by Hitachi and Seagate. At Hitachi (Fig. 2), the new technology is called Patterned Magnetic Media, and at Seagate, a similar technology is called Self-Ordered Magnetic Array (SOMA). In both cases, the idea is to create a medium whose structure would be specified not from the outside, as in the case of lithography, but by the material’s own properties. Seagate works with FePt alloy, which allows for a perfectly smooth cellular structure with a cell size of several nanometers.

Of the "non-spinning" alternatives to hard drives, the closest is non-volatile memory based on the technology used for flash drives, and in the longer term, PST (Probe Storage Technology), which is an array of scanning microscopes (atomic force microscope, AFM).

Perpendicular recording

The promise of perpendicular recording technology has been known for a long time. The method was first proposed in the 19th century by the magnetic recording pioneer Valdemar Poulsen. And in 1955, in parallel with the RAMAC project, which became the progenitor of modern disks, IBM also launched the ADF project, which was to use perpendicular recording. The design was supposed to provide ten times the capacity and a tenth of the access time. The ADF disk was intended for the Sabre airline reservation system as well as for the Stretch defense supercomputer. In August 1959 a prototype ADF disk (IBM 1301) was assembled, but the following year work on the topic was discontinued. It turned out that, at the level of technology available in the 1960s, longitudinal recording provided higher reliability, so it was given preference. Credit for reviving perpendicular recording belongs to the Japanese scientist Shun-ichi Iwasaki; in 1976 he published the results of his research and thereby stimulated a new wave of development.

Unlike parallel buses, serial connections such as Serial Storage Architecture (SSA), Fibre Channel (FC-AL), and Serial Attached SCSI (SAS) transmit data one bit at a time. Speed is therefore measured in Mbit/s or Gbit/s; in addition, such links need not have a fixed clock frequency.

Hard drives, or "winchesters" as they are also called, are among the most important components of a computer system. Everyone knows this. But not every modern user understands, even in principle, how an HDD functions. The principle of operation is, on the whole, simple enough to grasp at a basic level, but there are nuances, which will be discussed below.

The purpose and classification of hard drives

The question of purpose is, of course, rhetorical. Any user, even the most novice one, will immediately answer that a hard drive (also known as a hard disk, Hard Drive, or HDD) is used to store information.

In general, this is true. Do not forget, though, that the hard drive holds, in addition to the operating system and user files, boot sectors created by the OS, thanks to which it starts, as well as certain marks by which the necessary information can be quickly found on the disk.

Modern models are quite diverse: regular HDDs, external hard drives, and high-speed solid-state drives (SSDs), although the latter are not generally classified as hard drives. Next, we propose to consider the structure and operating principle of the hard drive, if not in full, then at least enough to understand the basic terms and processes.

Please note that there is also a special classification of modern HDDs according to some basic criteria, among which are the following:

  • method of storing information;
  • media type;
  • way of organizing access to information.

Why is a hard drive called a "winchester"?

Many users today wonder why hard drives are named after a firearm. What, one might ask, could these two devices possibly have in common?

The term itself appeared back in 1973, when the world's first HDD appeared on the market, the design of which consisted of two separate compartments in one sealed container. The capacity of each compartment was 30 MB, which is why the engineers gave the disk the code name “30-30”, which was fully in tune with the brand of the “30-30 Winchester” gun, popular at that time. True, in the early 90s in America and Europe this name almost fell out of use, but it still remains popular in the post-Soviet space.

The structure and principle of operation of a hard drive

But we digress. The principle of operation of a hard drive can be briefly described as the processes of reading or writing information. But how does this happen? In order to understand the principle of operation of a magnetic hard drive, you first need to study how it works.

The hard drive itself is a set of platters, numbering from four to nine, joined to one another by a shaft (axis) called a spindle. The platters are stacked one above the other. The materials most often used to make them are aluminum, brass, ceramics, glass, etc. The platters carry a special magnetic coating based on gamma ferric oxide, chromium oxide, barium ferrite, etc. Each platter is about 2 mm thick.

Read/write heads are responsible for writing and reading information, one for each working surface, and both surfaces of each platter are used. The spindle rotation speed can range from 3600 to 7200 rpm, and two electric motors are responsible for spinning the platters and moving the heads.

The basic principle of a computer hard drive's operation is that information is not recorded just anywhere but in strictly defined locations called sectors, which lie on concentric paths, or tracks. To avoid confusion, uniform rules apply; that is, the principles of operation of hard drives, from the point of view of their logical structure, are universal. For example, the size of one sector, adopted as a uniform standard throughout the world, is 512 bytes. In turn, sectors are grouped into clusters, which are sequences of adjacent sectors. A peculiarity of the hard drive's operation in this regard is that information is exchanged in whole clusters (whole chains of sectors).

But how does reading of information happen? The principles of operation of a hard magnetic disk drive are as follows: using a special actuator arm, the read head is moved in the radial direction to the desired track and, as the platter rotates, is positioned over the required sector. All the heads move simultaneously, so information can be read not only from different tracks but also from different disks (platters). All tracks with the same serial number are collectively called a cylinder.
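
The track/head/sector scheme just described maps directly onto a linear sector number; below is a small sketch of the classical CHS-to-LBA conversion (the geometry figures are made-up examples, not parameters of any particular drive).

    def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
        # Classical conversion of a cylinder/head/sector address into a
        # linear block address; sectors are numbered from 1 within a track.
        return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

    # Made-up geometry: 16 working surfaces, 63 sectors per track.
    print(chs_to_lba(0, 0, 1, 16, 63))    # first sector of the disk -> LBA 0
    print(chs_to_lba(2, 5, 10, 16, 63))   # -> LBA 2340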

One more principle of hard drive operation can be noted here: the closer the read head flies to the magnetic surface (without touching it), the higher the achievable recording density.

How is information written and read?

Hard drives are called magnetic because they rely on the laws of magnetism formulated by Faraday and Maxwell.

As already mentioned, the platters, made of a non-magnetic material, are covered with a magnetically sensitive coating whose thickness is only a few micrometers. This coating has a so-called domain structure.

A magnetic domain is a magnetized region of a ferromagnetic material, strictly bounded by domain walls. The principle of hard disk operation can then be briefly described as follows: when an external magnetic field is applied, the disk's own domain fields become oriented strictly along the field lines, and when the influence ceases, zones of residual magnetization remain on the disk, storing the information that was carried by the external field.

The head is responsible for creating the external field during writing; during reading, a zone of residual magnetization passing opposite the head induces an electromotive force (EMF) in it. From there everything is simple: a change in the EMF corresponds to a one in binary code, and its absence or cessation to a zero. The time allotted to one change of the EMF is usually called a bit element.
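
As a toy illustration of the rule just described (a change of the EMF within a bit cell reads as a one, no change as a zero), here is a sketch that decodes a sequence of magnetization samples, one per bit cell; real drives use far more elaborate channel codes.

    def decode_transitions(samples):
        # One magnetization sample (+1 or -1) per bit cell: a change relative
        # to the previous cell is read as 1, no change as 0.
        return [1 if cur != prev else 0 for prev, cur in zip(samples, samples[1:])]

    print(decode_transitions([+1, -1, -1, +1, +1, +1, -1]))  # -> [1, 0, 1, 0, 0, 1]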

In addition, from an information standpoint the magnetic surface can be treated as a sequence of point positions, each associated with a bit of information. Since the location of these points cannot be determined absolutely precisely, pre-designated markers must be placed on the disk to help find the desired location. Creating such marks is called formatting (roughly speaking, dividing the disk into tracks and sectors, which are combined into clusters).

Logical structure and principle of operation of a hard drive in terms of formatting

As for the logical organization of the HDD, formatting comes first here; two main types are distinguished: low-level (physical) and high-level (logical). Without these steps there can be no question of bringing the hard drive into working condition. How to initialize a new hard drive will be discussed separately.

Low-level formatting involves a physical action on the surface of the HDD, creating sectors laid out along the tracks. Curiously, the hard drive works in such a way that each created sector has its own unique address, which includes the number of the sector itself, the number of the track on which it lies, and the number of the platter side. Thus, when direct access is organized, the system goes straight to the given address, much as RAM does, rather than searching for the necessary information across the entire surface, which is how the performance gain is achieved (although this is not the most important thing). Please note that low-level formatting erases absolutely all information, and in most cases it cannot be restored.

Logical formatting is another matter (in Windows systems this is quick formatting, or Quick Format). These processes also apply to the creation of logical partitions: defined areas of the main hard drive that operate on the same principles.

Logical formatting primarily affects the system area, which consists of the boot sector and partition table (boot record), the file allocation tables (FAT, NTFS, etc.), and the root directory (Root Directory).

Information is written to sectors cluster by cluster, in several pieces, and one cluster cannot contain parts of two different files. Creating a logical partition effectively separates it from the main system partition, so the information stored on it is less exposed to change or deletion in the event of errors and failures.

Main characteristics of HDD

By now the principle of operation of a hard drive should be broadly clear in general terms. Let us now move on to the main characteristics, which give a complete picture of the capabilities (or shortcomings) of modern hard drives.

Hard drives can differ considerably in their main characteristics. To understand what we are talking about, let us highlight the basic parameters that characterize all storage devices known today:

  • capacity (volume);
  • performance (data access speed, reading and writing information);
  • interface (connection method, controller type).

Capacity represents the total amount of information that can be written and stored on a hard drive. The HDD production industry is developing so quickly that today hard drives with capacities of about 2 TB and higher have come into use. And, as it is believed, this is not the limit.

The interface is the most significant characteristic. It determines exactly how the device is connected to the motherboard, which controller is used, how reading and writing are done, etc. The main and most common interfaces are IDE, SATA and SCSI.

Disks with an IDE interface are inexpensive, but their main disadvantages are the limited number of simultaneously connected devices (at most four) and the modest data transfer speed, even with support for the Ultra DMA direct memory access or Ultra ATA protocols (Mode 2 and Mode 4). Although these are supposed to raise the read/write speed to 16 MB/s, in reality the speed is much lower. In addition, to use UDMA mode you need to install a special driver, which in theory should come supplied with the motherboard.

When talking about the principle of operation of a hard drive and its characteristics, we cannot ignore SATA, the serial successor to the IDE (ATA) interface. The advantage of this technology is that the read/write speed can be raised to 100 MB/s and above thanks to the transition to a high-speed serial bus (conceptually similar to FireWire, IEEE-1394).

Finally, the SCSI interface, compared with the previous two, is the most flexible and the fastest (read/write speeds reach 160 MB/s and higher). But such hard drives cost almost twice as much. On the other hand, the number of simultaneously connected storage devices ranges from seven to fifteen, devices can be connected without switching off the computer, and the cable can be about 15-30 meters long. For the most part, this type of HDD is used not in user PCs but in servers.

Performance, which characterizes transfer speed and I/O throughput, is usually expressed as the time to transfer a given amount of sequential data and is measured in MB/s.

Some additional parameters

Speaking about what the operating principle of a hard drive is and what parameters affect its functioning, we cannot ignore some additional characteristics that may affect the performance or even the lifespan of the device.

Here, first place goes to the rotation speed, which directly affects the time needed to find and recognize the desired sector. This is the so-called latency: the interval during which the required sector rotates toward the read head. Several standard spindle speeds have been adopted; they are listed below in revolutions per minute together with the corresponding average delay in milliseconds (a small calculation sketch follows the list):

  • 3600 rpm - 8.33 ms;
  • 4500 rpm - 6.67 ms;
  • 5400 rpm - 5.56 ms;
  • 7200 rpm - 4.17 ms.
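
The delay figures above follow directly from the rotation speed: on average the platter must make half a revolution before the required sector arrives under the head. A minimal sketch of that calculation:

    def avg_rotational_latency_ms(rpm):
        # Average rotational latency is half a revolution, in milliseconds.
        return 60_000 / rpm / 2

    for rpm in (3600, 4500, 5400, 7200):
        print(f"{rpm} rpm -> {avg_rotational_latency_ms(rpm):.2f} ms")
    # prints 8.33, 6.67, 5.56 and 4.17 ms, matching the list above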

It is easy to see that the higher the speed, the less time is spent waiting for sectors: physically, less time passes per disk revolution before the required point of the platter arrives under the head.

Another parameter is the internal transfer rate. On the inner tracks it is lowest, rising as the heads move toward the outer tracks. Thus defragmentation, which moves frequently used data to the fastest areas of the disk, is essentially nothing more than moving it to tracks with a higher read speed. The external transfer rate has fixed values and depends directly on the interface used.

Finally, one of the important points is related to the presence of the hard drive's own cache memory or buffer. In fact, the principle of operation of a hard drive in terms of buffer use is somewhat similar to RAM or virtual memory. The larger the cache memory (128-256 KB), the faster the hard drive will work.

Main requirements for HDD

There are not that many basic requirements imposed on hard drives in most cases. The main ones are long service life and reliability.

The standard for most HDDs is a service life of about 5-7 years, with a mean time between failures of at least five hundred thousand hours; for high-end hard drives this figure is at least a million hours.

As for reliability, the S.M.A.R.T. self-testing function is responsible for this: it keeps the condition of the individual elements of the hard drive under constant observation. Based on the collected data, it can even make a certain forecast of possible future malfunctions.

It goes without saying that the user should not remain on the sidelines either. For example, when working with an HDD it is extremely important to maintain the optimal temperature regime (0-50 °C, give or take about 10 degrees), to avoid shaking, impacts and falls of the hard drive, and to keep dust and other small particles out of it, and so on. By the way, many will find it interesting that a particle of tobacco smoke is roughly twice the size of the gap between the read head and the magnetic surface, and a human hair 5-10 times its size.

Initialization issues in the system when replacing a hard drive

Now a few words about what actions need to be taken if for some reason the user changed the hard drive or installed an additional one.

We will not describe this process in full but will touch only on the main stages. First, connect the hard drive and check in the BIOS settings whether the new hardware has been detected; then initialize it in the disk management section and create a boot record, create a simple volume, assign it an identifier (a letter), and format it, choosing a file system. Only after this will the new drive be completely ready for use.

Conclusion

That, in essence, is all there is to say briefly about the basic functioning and characteristics of modern hard drives. The principle of operation of an external hard drive was not considered separately here, since it is practically no different from that of stationary HDDs. The only difference is the way the additional drive is connected to the computer or laptop. The most common connection is via a USB interface, which is wired directly to the motherboard. If you want maximum performance, it is better to use the USB 3.0 standard (the inside of the port is colored blue), provided, of course, that the external HDD supports it.

In any case, I think many readers now have at least some understanding of how a hard drive of any type functions. Perhaps more has been given above than strictly necessary, including some school-level physics, but without it, it would not be possible to fully understand the basic principles and methods behind the production and use of HDDs.

Hard (magnetic) disk drive \ HDD (Hard Disk Drive) \ hard drive (medium) – a material object capable of storing information.

Information storage devices can be classified according to the following criteria:

  • method of storing information: magnetoelectric, optical, magneto-optical;
  • type of storage medium: drives on floppy and hard magnetic disks, optical and magneto-optical disks, magnetic tape, solid-state memory elements;
  • the method of organizing access to information - direct, sequential and block access drives;
  • type of information storage device - embedded (internal), external, stand-alone, mobile (wearable), etc.


A significant part of the information storage devices currently in use is based on magnetic media.

Hard drive device

The hard drive contains a set of platters, most often metal disks coated with a magnetic material (gamma ferric oxide, barium ferrite, chromium oxide, ...) and joined to one another on a spindle (shaft, axis).
The discs themselves (approximately 2 mm thick) are made of aluminum, brass, ceramics or glass (see figure).

Both surfaces of the discs are used for recording. From 4 to 9 platters are used. The shaft rotates at a high constant speed (3600-7200 rpm).
Rotation of the disks and radial movement of the heads are carried out by two electric motors.
Data is written or read by read/write heads, one for each working surface of the disks. The number of heads equals the number of working surfaces of all the disks.

Information is written to the disk in strictly defined places - concentric tracks. The tracks are divided into sectors; one sector holds 512 bytes of information.

Data exchange between RAM and the disk drive is carried out in whole clusters. A cluster is a chain of consecutive sectors (1, 2, 3, 4, ...).

A special motor positions the read/write head above a given track by means of an actuator arm (moving it in the radial direction).
As the disk rotates, the head ends up above the desired sector. Obviously, all heads move simultaneously and read information from identically numbered tracks on the different disks.

Tracks with the same serial number on the different platters are collectively called a cylinder.
The read/write heads fly over the surface of the platter; the closer the head is to the surface of the disk without touching it, the higher the permissible recording density.

Hard drive device (figure)


Magnetic principle of reading and writing information

Magnetic information recording principle

The physical foundations of the processes of recording and reproducing information on magnetic media were laid in the works of the physicists M. Faraday (1791-1867) and J. C. Maxwell (1831-1879).

In magnetic storage media, digital recording is made on a magnetically sensitive material. Such materials include certain varieties of iron oxides, nickel, cobalt and its compounds, alloys, as well as magnetoplastics and magnetoelastics (micropowder magnetic materials bound in plastics and rubber).

The magnetic coating is several micrometers thick. The coating is applied to a non-magnetic base, which is made from plastics for magnetic tapes and floppy disks, and aluminum alloys and composite substrate materials are used for hard disks. The magnetic coating of the disk has a domain structure, i.e. consists of many magnetized tiny particles.

Magnetic domain (from Latin dominium - possession) is a microscopic, uniformly magnetized region in ferromagnetic samples, separated from neighboring regions by thin transition layers (domain boundaries).

Under the influence of an external magnetic field, the domains' own magnetic fields become oriented in accordance with the direction of the field lines. Once the external field ceases, zones of residual magnetization remain on the surface. Thanks to this property, information about the magnetic field that acted on the medium is retained on it.

When recording information, an external magnetic field is created using a magnetic head. When information is read, the zones of residual magnetization located opposite the magnetic head induce an electromotive force (EMF) in it.

The scheme for writing to and reading from a magnetic disk is shown in Fig. 3.1. A change in the direction of the EMF within a certain period of time is identified with a binary one, and the absence of such a change with a zero. The period of time in question is called a bit element.

The surface of a magnetic medium is treated as a sequence of point positions, each of which is associated with a bit of information. Since the location of these positions is not precisely determined, recording requires pre-applied marks to help locate the required recording positions. To apply such synchronization marks, the disk must be divided into tracks and sectors; this is done by formatting.

Organizing quick access to information on the disk is an important aspect of data storage. Prompt access to any part of the disk surface is ensured, first, by spinning the disk rapidly and, second, by moving the magnetic read/write head along the radius of the disk.
A floppy disk rotates at a speed of 300-360 rpm, and a hard disk rotates at 3600-7200 rpm.


Hard drive logical device

The magnetic disk is not initially ready for use. To bring it into working condition it must be formatted, i.e. the disk structure must be created.

The structure (layout) of the disk is created during the formatting process.

Formatting magnetic disks includes 2 stages:

  1. physical formatting (low level)
  2. logical (high level).

During physical formatting, the working surface of the disk is divided into separate areas called sectors, which are located along concentric circles - tracks.

In addition, sectors that are unsuitable for recording data are identified and marked as bad so that they will not be used. Each sector, being the minimum unit of data on the disk, has its own address, which provides direct access to it. The sector address includes the disk side number, the track number, and the sector number within the track. The physical parameters of the disk are also set at this stage.

As a rule, the user does not need to deal with physical formatting, since in most cases hard drives arrive formatted. Generally speaking, this should be handled by a specialized service center.

Low Level Formatting must be done in the following cases:

  • if there is a failure in track zero that causes problems when booting from the hard disk, while the disk itself is accessible when booting from a floppy disk;
  • if an old disk is being returned to service, for example one taken from a broken computer;
  • if the disk is being reformatted to work with another operating system;
  • if the disk has stopped working normally and all recovery methods have failed to yield positive results.

Please keep in mind that physical formatting is a very drastic operation: when it is executed, the data stored on the disk is completely erased, and it is completely impossible to restore it! Therefore, do not proceed with low-level formatting unless you are sure you have copied all important data off the hard drive!

After low-level formatting, the next step is to partition the hard drive into one or more logical drives - the best way to deal with the clutter of directories and files scattered across the disk.

Without adding any hardware elements to your system, you get the opportunity to work with several parts of one hard drive, like multiple drives.
This does not increase the disk capacity, but its organization can be significantly improved. In addition, different logical drives can be used for different operating systems.

During logical formatting, the medium is finally prepared for data storage through the logical organization of disk space.
The disk is prepared for writing files to the sectors created by low-level formatting.
After the disk partition table has been created, the next stage follows - the logical formatting of the individual parts of the partition, hereinafter called logical disks.

A logical drive is an area of the hard drive that works in the same way as a separate drive.

Logical formatting is a much simpler process than low-level formatting.
To run it, boot from the floppy disk containing the FORMAT utility.
If you have several logical drives, format them all one by one.

During the logical formatting process, a system area is allocated on the disk, consisting of three parts:

  • boot sector and partition table (Boot record)
  • File Allocation Tables (FAT), in which the numbers of tracks and sectors storing files are recorded
  • root directory (Root Directory).

Information is recorded cluster by cluster, and there cannot be two different files in the same cluster.
In addition, at this stage the disk can be given a name (label).

A hard drive can be divided into several logical drives and, conversely, 2 hard drives can be combined into one logical drive.

It is recommended to create at least two partitions (two logical drives) on your hard drive: one of them is allocated for the operating system and software, the second drive is exclusively allocated for user data. This way, data and system files are stored separately from each other, and in the event of an operating system failure, there is a much greater chance that user data will be saved.


Characteristics of hard drives

Hard drives differ from each other in the following characteristics:

  1. capacity
  2. performance – data access time, speed of reading and writing information.
  3. interface (connection method) - the type of controller to which the hard drive should be connected (most often IDE/EIDE and various SCSI options).
  4. other features

1. Capacity - the amount of information that fits on the disk (determined by the level of manufacturing technology).
Today capacities are 500-2000 GB and more. You can never have too much hard drive space.


2. Speed of operation (performance) is characterized by two indicators: data access time and disk read/write speed.

Access time – the time required to move (position) the read/write heads to the desired track and the desired sector.
The average access time between two randomly selected tracks is approximately 8-12 ms (milliseconds); faster disks achieve 5-7 ms.
The transition time to an adjacent track (adjacent cylinder) is about 0.5-1.5 ms. Time is also needed for the desired sector to rotate under the head.
The full revolution time for today's hard drives is 8-16 ms, and the average wait for a sector is 3-8 ms.
The shorter the access time, the faster the disk operates.
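
Putting these figures together, the average time to reach a randomly located piece of data is roughly the average seek time plus the rotational wait for half a revolution; a back-of-the-envelope sketch with illustrative values (command and controller overhead ignored):

    def avg_access_time_ms(avg_seek_ms, rpm):
        # Rough average access time: average seek plus half a revolution.
        return avg_seek_ms + 60_000 / rpm / 2

    print(avg_access_time_ms(9, 5400))     # ~14.6 ms, an ordinary drive
    print(avg_access_time_ms(5, 10_000))   # ~8.0 ms, a fast SCSI drive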

Read/write speed (input/output bandwidth), or data transfer rate – the time to transfer sequential data depends not only on the disk but also on its controller, the bus type, and the processor speed. The speed of slow disks is 1.5-3 MB/s, of fast ones 4-5 MB/s, and of the newest ones 20 MB/s and more.
Hard drives with a SCSI interface support rotation speeds of 10,000 rpm, an average seek time of 5 ms, and data transfer speeds of 40-80 MB/s.


3. Hard drive interface standard
- that is, the type of controller to which the hard drive is connected; the controller is located on the motherboard.
There are three main connection interfaces:

  1. IDE and its various variants


IDE (Integrated Drive Electronics), also called ATA (Advanced Technology Attachment)

Advantages: simplicity and low cost

Transfer speeds: 8.3, 16.7, 33.3, 66.6, 100 MB/s. As the interface developed, the list of supported devices expanded: hard drives, super-floppies, magneto-optical drives,
tape drives, CD-ROM, CD-R, DVD-ROM, LS-120, ZIP.

Some elements of parallel operation (command queuing and disconnect/reconnect) and checking of data integrity during transfer have been introduced. The main disadvantage of IDE is the small number of connected devices (no more than 4), which is clearly not enough for a high-end PC.
Today, IDE interfaces have switched to the new Ultra ATA exchange protocols, which significantly increase throughput.
PIO Mode 4 and DMA (Direct Memory Access) Mode 2 allow data transfer at a speed of 16.6 MB/s, but the actual transfer speed is much lower.
The Ultra DMA/33 and Ultra DMA/66 standards, developed in February 1998 by Quantum, have three operating modes (0, 1, 2); in the second mode the drive supports a
transfer speed of 33 MB/s (Ultra DMA/33 Mode 2). Such a high speed can only be achieved when exchanging data with the drive's buffer. To take advantage of the
Ultra DMA standards, two conditions must be met:

1. hardware support on the motherboard (chipset) and on the drive itself.

  2. driver support for the Ultra DMA mode, as for other DMA (Direct Memory Access) modes.

A special driver is required for each chipset. As a rule, such drivers are included with the motherboard; if necessary, they can be downloaded
from the website of the motherboard manufacturer.

The Ultra DMA standard is backward compatible with earlier controllers, which operate at the slower rate.
Today's versions: Ultra DMA/100 (late 2000) and Ultra DMA/133 (2001).

SATA
A replacement for IDE (ATA) in the form of a high-speed serial bus (conceptually similar to FireWire, IEEE-1394). The new technology raises the transfer speed to 100 MB/s and above,
increases the reliability of the system, and allows devices to be connected without powering down the PC, which is strictly forbidden with the ATA interface.


SCSI (Small Computer System Interface)
— SCSI devices are about twice as expensive as regular ones and require a special controller on the motherboard.
They are used in servers, publishing systems, and CAD, and provide higher performance (speeds up to 160 MB/s) and a wide range of connectable storage devices.
The SCSI controller must be purchased together with the corresponding disk.

SCSI has advantages over IDE in flexibility and performance.
Flexibility means a larger number of connected devices (7-15, versus at most 4 for IDE) and a longer cable.
Performance means a high transfer speed and the ability to process several transactions simultaneously.

  1. Ultra SCSI 2/3 (Fast-20): up to 40 MB/s; the 16-bit version, Ultra2 SCSI: up to 80 MB/s.

  2. Another SCSI-family interface technology, Fibre Channel Arbitrated Loop (FC-AL), allows data transfer at up to 100 MB/s with a cable length of up to 30 meters. FC-AL technology allows "hot" connection, i.e. on the fly, and has additional lines for monitoring and error correction (the technology is more expensive than regular SCSI).

4. Other features of modern hard drives

The huge variety of hard drive models makes it difficult to choose the right one.
In addition to the required capacity, performance is very important; it is determined mainly by the drive's physical characteristics.
These characteristics are the average seek time, rotation speed, internal and external transfer rates, and cache memory size.

4.1 Average seek time

The hard drive takes some time to move the magnetic head from its current position to the new one required to read the next piece of information.
In each specific situation, this time is different, depending on the distance the head must move. Typically, specifications provide only averaged values, and the averaging algorithms used by different companies generally differ, so direct comparison is difficult.

Thus, Fujitsu and Western Digital average over all possible pairs of tracks, while Maxtor and Quantum use the random access method. The resulting figure may be adjusted further.

The seek time for writing is often slightly longer than for reading. Some manufacturers give only the lower (read) value in their specifications. In any case, in addition to the average values, it is useful to take into account the maximum (full-stroke)
and the minimum (track-to-track) seek times.

4.2 Rotation speed

From the point of view of the speed of access to the desired fragment of a recording, the rotation speed determines the so-called latency: the time required for the disk to bring the sector of interest around to the magnetic head.

The average value of this time corresponds to half a disk revolution and is 8.33 ms at 3600 rpm, 6.67 ms at 4500 rpm, 5.56 ms at 5400 rpm, 4.17 ms at 7200 rpm.

The value of latent time is comparable to average seek time, so in some modes it can have the same, if not greater, impact on performance.

4.3 Internal transfer rate

— the speed at which data is written to or read from the disk. Due to zone recording, it has a variable value - higher on the outer tracks and lower on the inner ones.
When working with long files, in many cases this parameter limits the transfer speed.

4.4 External transfer rate

— the peak speed at which data is transmitted across the interface.

It depends on the interface type and most often has fixed values: 8.3, 11.1, 16.7 MB/s for Enhanced IDE (PIO Modes 2, 3, 4); 33.3, 66.6, 100 MB/s for Ultra DMA; 5, 10, 20, 40, 80, 160 MB/s for synchronous SCSI, Fast SCSI-2, Fast Wide SCSI-2, Ultra SCSI (16-bit), respectively.

4.5 Whether the hard drive has its own cache memory (disk buffer), and its size.

The size and organization of the cache memory (internal buffer) can significantly affect hard drive performance. Just as with ordinary cache memory,
once a certain size is reached, the growth in performance slows down sharply.

Large-capacity segmented cache memory is relevant for high-performance SCSI drives used in multitasking environments. The larger the cache, the faster the hard drive works (128-256Kb).

The influence of each individual parameter on overall performance is quite difficult to isolate.


Hard drive requirements

The main requirement for disks is operational reliability, ensured by a long service life of the components (5-7 years) and by good statistical indicators, namely:

  • a mean time between failures of at least 500 thousand hours (for the highest class, 1 million hours or more);
  • an embedded system for active monitoring of the state of the drive's components: S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology).

Technology S.M.A.R.T. (Self-Monitoring Analysis and Reporting Technology) is an open industry standard developed at one time by Compaq, IBM and a number of other hard drive manufacturers.

The point of this technology is the internal self-diagnosis of the hard drive, which allows you to assess its current condition and inform you about possible future problems that could lead to data loss or failure of the drive.

The condition of all vital disk elements is constantly monitored:
the heads, the working surfaces, the spindle motor, and the electronics unit. For example, if a weakening of the signal is detected, the information is rewritten and observation continues.
If the signal weakens again, the data is moved to another location, the cluster in question is marked as defective and unavailable, and another cluster from the disk's reserve is made available in its place.

When working with a hard drive, the temperature conditions in which it operates must be observed. Manufacturers guarantee trouble-free operation at ambient temperatures from 0 °C to 50 °C, although in principle the limits can be exceeded by some 10 degrees in either direction without serious consequences.
With larger temperature deviations, the air cushion of the required thickness may not form, which will lead to damage to the magnetic layer.

In general, HDD manufacturers pay quite a lot of attention to the reliability of their products.

The main problem is foreign particles getting inside the disk.

For comparison: a particle of tobacco smoke is twice the size of the gap between the surface and the head, and the thickness of a human hair is 5-10 times greater.
For the head, an encounter with such an object means a heavy blow and, as a result, partial damage or complete failure.
Outwardly this shows up as the appearance of a large number of regularly spaced unusable clusters.

Short-lived large accelerations (g-loads) arising from impacts, falls, and so on are dangerous. For example, on impact the head strikes the magnetic
layer sharply and destroys it at the corresponding spot. Or, conversely, it first rebounds in the opposite direction and then, under elastic force, springs back onto the surface.
As a result, particles of magnetic coating appear inside the housing, which can in turn damage the head.

Do not assume that centrifugal force will fling these particles off the disk: the magnetic layer
will hold them firmly. In fact, the dire consequence is not the impact itself (the loss of a certain number of clusters can somehow be accepted) but the fact that particles are produced that will certainly cause further damage to the disk.

To prevent such very unpleasant cases, various companies resort to all sorts of tricks. In addition to simply increasing the mechanical strength of the disk components, intelligent S.M.A.R.T. technology is also used, which monitors the reliability of recording and the safety of data on the media (see above).

In fact, a disk is never formatted to its full capacity; a certain reserve always remains. This is mainly because it is practically impossible to manufacture a medium
whose entire surface is of high quality; there will inevitably be bad clusters (defects). During low-level formatting the drive's electronics are configured
to skip these faulty areas, so that the defects in the medium are completely invisible to the user. But if they are visible (for example, after formatting
a utility reports a non-zero count of them), that is already a very bad sign.

If the warranty has not expired (and, in my opinion, it is best to buy an HDD with a warranty), then take the disk straight back to the seller and demand a replacement or a refund.
The seller, of course, will immediately start saying that a couple of faulty areas are no cause for concern, but do not believe him. As already mentioned, this couple will most likely give rise to many more, and eventually complete failure of the hard drive is possible.

A disk in working condition is especially sensitive to damage, so you should not place the computer in a place where it may be subject to various shocks, vibrations, and so on.


Preparing the hard drive for work

Let's start from the very beginning. Let's assume that you bought a hard disk drive and a cable for it separately from the computer.
(The fact is that when you buy an assembled computer, you will receive a disk ready for use).

A few words about handling it. A hard disk drive is a very complex product that contains, in addition to electronics, precision mechanics.
Therefore, it requires careful handling - shocks, falls and strong vibration can damage its mechanical part. As a rule, the drive board contains many small-sized elements and is not covered with durable covers. For this reason, care should be taken to ensure its safety.
The first thing you should do when you receive a hard drive is to read the documentation that came with it - it will probably contain a lot of useful and interesting information. In this case, you should pay attention to the following points:

  • the presence and placement options of jumpers that determine the disk's configuration, for example a parameter such as the disk's physical name (they may or may not be present);
  • the number of heads, cylinders, and sectors on the disks, the precompensation level, and the disk type. You must enter this information when prompted by the computer's setup program.
    All this information will be needed when formatting the disk and preparing the machine to work with it.
  • If the PC does not detect the parameters of your hard drive by itself, a bigger problem is installing a drive for which there is no documentation.
    On most hard drives you can find labels with the name of the manufacturer, the type (brand) of the device, and a table of tracks that must not be used.
    In addition, the drive may carry information about the number of heads, cylinders, and sectors and the precompensation level.

To be fair, it must be said that often only the model name is written on the disk. But even in this case you can find the required information either in a reference book
or by calling the manufacturer's representative office. It is important to get answers to three questions:

  • How should the jumpers be set in order to use the drive as master/slave?
  • How many cylinders and heads are there on the disk, how many sectors per track, and what is the precompensation value?
  • Which disk type recorded in the ROM BIOS best corresponds to this drive?

With this information in hand, you can proceed to installing the hard drive.


To install a hard drive in your computer, do the following:

  1. Completely disconnect the system unit from the power supply and remove the cover.
  2. Connect the hard drive cable to the motherboard controller. If you are installing a second disk, you can use the cable from the first one if it has a spare connector, but keep in mind that the operating speed of the two drives will be levelled down toward the slower one.
  3. If necessary, change the jumpers according to the way you are using the hard drive.
  4. Install the drive in a free bay and connect the cable from the controller on the board to the hard drive connector, with the red stripe toward the power connector, then attach the power supply cable.
  5. Fasten the hard drive securely with four screws on both sides, and arrange the cables inside the computer neatly so that they are not pinched when the cover is closed.
  6. Close the system unit.
  7. If the PC does not detect the hard drive by itself, change the computer configuration using Setup so that the computer knows that a new device has been added to it.


Hard drive manufacturers

Hard drives of the same capacity (but from different manufacturers) usually have more or less similar characteristics, and the differences are expressed mainly in the case design, form factor (in other words, dimensions) and warranty period. Moreover, special mention should be made about the latter: the cost of information on a modern hard drive is often many times higher than its own price.

If your disk has problems, trying to repair it often only means exposing your data to additional risk.
A much more reasonable way is to replace the faulty device with a new one.
The lion's share of hard drives on the Russian (and not only Russian) market is made up of products from IBM, Maxtor, Fujitsu, Western Digital (WD), Seagate, and Quantum.


Quantum Corporation (www.quantum.com), founded in 1980, is one of the veterans of the disk drive market. The company is known for innovative technical solutions aimed at improving the reliability and performance of hard drives, data access time and read/write speed, and the ability to warn of possible future problems that could lead to data loss or drive failure.

— One of Quantum’s proprietary technologies is SPS (Shock Protection System), designed to protect the disk from shock.

- the built-in DPS (Data Protection System), designed to preserve the most valuable thing: the data stored on the disks.

Western Digital Corporation (www.wdc.com) is also one of the oldest disk drive manufacturers and has seen its ups and downs.
The company has recently managed to introduce the latest technologies into its disks. Among them is its own development, Data Lifeguard, a further development of the S.M.A.R.T. system that attempts to complete the chain logically.

Under this technology, the disk surface is regularly scanned during periods when it is not being used by the system; the data is read and its integrity checked. If problems are noted while accessing a sector, the data is transferred to another sector.
Information about bad sectors is entered into an internal defect list, which prevents them from being used in the future.

Seagate (www.seagate.com) is very well known on our market. Incidentally, I recommend hard drives from this particular company, as they are very reliable and durable.

In 1998 the company attracted attention again by releasing the Medalist Pro series of disks
with a rotation speed of 7200 rpm, using special bearings for the purpose. Previously this speed had been used only in SCSI drives; it makes it possible to increase performance. The same series uses the SeaShield System, designed to better protect the disk and the data stored on it from electrostatic discharge and shock. At the same time, the influence of electromagnetic interference is also reduced.

All of the company's disks support S.M.A.R.T. technology.
New Seagate drives are expected to use an improved version of the SeaShield system with greater capabilities.
Significantly, Seagate has announced the highest shock resistance in the industry for the updated series: 300 g in the non-operating state.

IBM (www.storage.ibm.com), although until recently not a major supplier on the Russian hard drive market, quickly managed to gain a good reputation thanks to its fast and reliable disk drives.

Fujitsu (www.fujitsu.com) is a large and experienced manufacturer of disk drives, not only magnetic but also optical and magneto-optical.
True, the company is by no means a leader in the market for hard drives with an IDE interface: it controls (according to various studies) roughly 4% of this market, and its main interests lie in the area of SCSI devices.


Terminological dictionary

Since some drive elements that play an important role in its operation are often thought of as abstract concepts, the most important terms are explained below.

Access time - the period of time required for a hard disk drive to locate and transfer data to or from memory.
The performance of hard disk drives is often characterized by the access (fetch) time.

Cluster - the smallest unit of space that the OS works with in the file allocation table. Typically a cluster consists of 2, 4, 8 or more sectors; the number of sectors depends on the type of disk. Addressing clusters instead of individual sectors reduces the OS's overhead. Large clusters make the drive faster, since there are fewer clusters to keep track of, but disk space is used less efficiently, since many files may be smaller than a cluster and the remaining bytes of the cluster go unused (see the sketch below).
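A minimal sketch in Python of the trade-off just described (the 512-byte sector size and the 10 000-byte file are illustrative assumptions, not figures from this text): the larger the cluster, the more bytes of its last cluster a file wastes.

import math

SECTOR = 512  # bytes per sector (typical)

def slack(file_size: int, sectors_per_cluster: int) -> int:
    """Bytes wasted at the end of the last cluster occupied by a file."""
    cluster = sectors_per_cluster * SECTOR
    clusters_used = math.ceil(file_size / cluster)
    return clusters_used * cluster - file_size

for spc in (2, 4, 8, 64):
    print(f"{spc:>2} sectors/cluster: {slack(10_000, spc)} bytes wasted for a 10 000-byte file")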


Controller
- circuitry, usually located on an expansion card, that controls the operation of the hard disk drive, including moving the head and reading and writing data.


Cylinder
- the set of tracks located one above another on all surfaces of all platters.

Drive head - a mechanism that moves across the surface of the hard disk and performs electromagnetic writing or reading of data.


File Allocation Table (FAT)
- a table maintained by the OS that records where each file is placed on the disk, which sectors are in use, and which are free for writing new data.


Head gap
— the distance between the drive head and the disk surface.


Interleave
— the relationship between the disk rotation speed and the arrangement of sectors on the disk. Typically the rotation speed of the disk exceeds the computer's ability to accept data from it: by the time the controller has processed one sector, the next sequential sector has already passed under the head. Therefore, logically consecutive sectors are written to the disk with a gap of one or two physical sectors between them. The interleave factor can be changed with special software when formatting the disk (see the sketch below).
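A minimal sketch (illustrative only, not part of the original glossary) of how an interleave factor spaces logically consecutive sectors around a track; the 17-sector track and 3:1 factor are assumed values.

def interleave_layout(sectors_per_track: int, factor: int) -> list:
    """Return the logical sector number stored at each physical position."""
    layout = [-1] * sectors_per_track
    for logical in range(sectors_per_track):
        physical = (logical * factor) % sectors_per_track
        # If the slot is already taken (factor not coprime with the count), slide forward.
        while layout[physical] != -1:
            physical = (physical + 1) % sectors_per_track
        layout[physical] = logical
    return layout

print(interleave_layout(17, 3))  # [0, 6, 12, 1, 7, 13, 2, 8, 14, ...]

With a 3:1 interleave the controller gets two sectors' worth of rotation time to process each sector before the next logical one arrives under the head.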


Logical drive
- portions of the working surface of a hard drive that are treated as separate drives.
Some logical drives may be set aside for other operating systems, such as UNIX.


Parking
- moving the drive heads to a fixed position above an unused area of the disk, in order to minimize the damage done if the drive is jolted and the heads strike the surface.


Partitioning
- the operation of dividing a hard disk into logical drives. Every disk is partitioned, although a small disk may have only one partition.


Disk (Platter)
- the metal disk itself, coated with magnetic material, on which data is recorded. A hard drive usually has more than one disk.


RLL (Run-length-limited)
- an encoding scheme used by some controllers to increase the number of sectors per track to accommodate more data.


Sector
- a subdivision of a disk track that represents the basic unit of storage used by the drive. Under most operating systems a sector contains 512 bytes.


Positioning time (Seek time)
- the time required for the head to move from the track on which it is installed to some other desired track.


Track
- a concentric division of the disk. Tracks are similar to the grooves on a record, but unlike a record's groove, which is a single continuous spiral, the tracks on a disk are separate circles. Tracks are in turn divided into clusters and sectors.


Track-to-track seek time
— the time required for the drive head to move to the adjacent track.


Transfer rate
- the amount of information transferred between the disk and the computer per unit of time. The measured value may also include the time spent seeking the track.

Today, many believe that magnetic hard drives are too slow, unreliable and technically outdated. Solid-state drives, by contrast, are at the peak of their glory: flash memory is found in every mobile device, and even desktop PCs use flash-based drives. However, their prospects are quite limited. According to CHIP's forecast, SSDs will fall in price a little more, their data density and therefore capacity will likely double, and then the end will come: 1 TB SSDs will always be too expensive. Next to them, magnetic hard drives of similar capacity look very attractive, so it is too early to talk about the end of the era of traditional drives. Today, however, they stand at a crossroads. The potential of the current technology, the perpendicular recording method, allows for two more yearly product cycles during which new models with increased capacity will be released, and then the limit will be reached.

If the three major manufacturers - Seagate, Western Digital and Toshiba - manage the transition to one of the new technologies presented in this article, then 3.5-inch hard drives with capacities of 60 TB or more (20 times the capacity of current models) will cease to be an unattainable luxury. At the same time, read speed will also grow, reaching SSD levels, since it depends directly on the density of the recorded data: the shorter the distance the read head needs to travel, the faster the disk operates. So if our "information hunger" keeps growing, all the laurels will go to magnetic hard disks.

Perpendicular recording method

For some time now, hard drives have used the perpendicular recording method (writing to vertically oriented domains), which provides a higher data density. It is currently the norm, and the technologies that follow will retain it.

6 TB: the limit is almost reached

Within two years, perpendicular-write disks will reach the limit of data density on a platter.

In modern hard drives with capacities of up to 4 TB, the recording density of the magnetic platters does not exceed 740 Gbit per square inch. Manufacturers promise that drives using the perpendicular recording method will be able to reach 1 Tbit per square inch. The last generation of such drives will appear within two years: the capacity of 3.5-inch models will reach 6 TB, while 2.5-inch models will offer just over 2 TB of disk space. However, such modest growth in recording density can no longer keep up with our ever-increasing hunger for information, as the following graphs demonstrate.
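A rough back-of-the-envelope sketch of how per-platter capacity follows from areal density; the recording band geometry, the assumption of four double-sided platters, and the neglect of formatting overhead are all illustrative assumptions, not figures from the article.

import math

def platter_surface_tb(density_gbit_in2: float, r_inner_in: float = 1.0, r_outer_in: float = 1.8) -> float:
    """Capacity of one platter surface in terabytes (decimal), ignoring format overhead."""
    area_in2 = math.pi * (r_outer_in ** 2 - r_inner_in ** 2)   # usable recording band
    bits = density_gbit_in2 * 1e9 * area_in2
    return bits / 8 / 1e12

for density in (740, 1000):  # today's ~740 Gbit/in^2 vs. the ~1 Tbit/in^2 limit
    surface = platter_surface_tb(density)
    print(f"{density} Gbit/in^2: ~{surface:.2f} TB per surface, ~{surface * 8:.1f} TB with four two-sided platters")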

The problem of choosing materials

Hard drives with perpendicular recording are unable to meet the growing needs of data storage because at recording densities just over 1 Tbit per square inch they run into the effect of superparamagnetism. The term means that, below a certain size, particles of a magnetic material cannot maintain their magnetization for long: it can change suddenly under the influence of ambient heat. The particle size at which this effect sets in depends on the material used (see table below). The platters of modern HDDs with perpendicular recording are made of an alloy of cobalt, chromium and platinum (CoCrPt), whose particles have a diameter of 8 nm and a length of 16 nm. To record one bit, the head needs to magnetize about 20 such particles. At diameters of 6 nm or less, particles of this alloy can no longer reliably maintain the state of their magnetic field.
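The superparamagnetic limit can be put into numbers with a common rule of thumb: a grain keeps its magnetization only while its anisotropy energy K_u·V stays well above the thermal energy k_B·T (a ratio of roughly 40-60 is typically required for multi-year retention). The sketch below is illustrative; the anisotropy constants are order-of-magnitude assumptions, not values taken from the article.

import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stability_ratio(k_u: float, diameter_nm: float, length_nm: float, temp_k: float = 350.0) -> float:
    """K_u * V / (k_B * T) for a cylindrical grain; k_u in J/m^3."""
    r = diameter_nm * 1e-9 / 2
    volume = math.pi * r ** 2 * length_nm * 1e-9
    return k_u * volume / (K_B * temp_k)

# Assumed anisotropy constants (order of magnitude): CoCrPt ~3e5 J/m^3, FePt ~7e6 J/m^3
print(f"CoCrPt, 8 nm x 16 nm grain:  {stability_ratio(3e5, 8, 16):.0f}")    # comfortably stable
print(f"CoCrPt, 6 nm x 12 nm grain:  {stability_ratio(3e5, 6, 12):.0f}")    # below the threshold
print(f"FePt,   2.5 nm x 5 nm grain: {stability_ratio(7e6, 2.5, 5):.0f}")   # near the limit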

In the hard drive industry people often speak of a "trilemma." Manufacturers have three main levers for increasing recording density: the particle size, the number of particles per bit, and the type of alloy they are made of. But once CoCrPt particles are down to about 6 nm, pulling any one of these levers renders the other two useless: if the particle size is reduced further, the particles lose their magnetization; if their number per bit is reduced, their signal "dissolves" in the noise of neighboring bits and the read head can no longer tell whether it is dealing with a "0" or a "1"; and an alloy with stronger magnetic properties allows smaller and fewer particles, but then the write head is unable to change their magnetization. This trilemma can only be resolved if manufacturers abandon the perpendicular recording method, and several technologies are already ready for this.

Up to 60 TB: new recording technologies

The recording density of future HDDs can be increased tenfold - using microwaves, lasers, SSD controllers and new alloys.

The most promising development capable of providing a recording density of over 1 Tbit per square inch is Shingled Magnetic Recording (SMR), magnetic recording with partially overlapping tracks. Its principle is that the magnetic tracks of an SMR disk partially overlap one another, like tiles on a roof. The technology overcomes an inherent difficulty of the perpendicular recording method: any further reduction in track width would make writing data impossible. Modern drives have separate tracks 30 to 50 nm wide, and the minimum possible track width for perpendicular recording is 25 nm. In SMR, thanks to the partial overlap, the track width available to the read head can be as little as 10 nm, which corresponds to a recording density of 2.5 Tbit per square inch. The trick is to increase the width of the written tracks to 70 nm while ensuring that the edge of each track remains fully magnetizable: the edge of a track is not disturbed if the next track is recorded with an offset of 10 nm. In addition, the recording head is equipped with a protective shield that prevents its powerful magnetic field from damaging the data located beneath it. Such a head has already been designed by Hitachi. There is, however, another problem: normally individual bits on a magnetic disk can be rewritten in place, but with SMR this is possible only on the last, topmost track of a shingled group. Changing bits on a lower track requires rewriting all the tracks written after it, which reduces performance (see the sketch below).
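A toy sketch (not vendor code; the band size and data are arbitrary assumptions) of why in-place updates are expensive on shingled media: because the tracks of a band overlap, rewriting one track forces every later track in the band to be rewritten as well.

class SmrBand:
    def __init__(self, tracks: int):
        self.tracks = [b""] * tracks  # overlapped tracks, writable only in order

    def update_track(self, index: int, data: bytes) -> int:
        """Update one track; return how many tracks had to be rewritten."""
        # Writing track `index` destroys the exposed edges of all later tracks,
        # so they are saved first and written back afterwards.
        saved = self.tracks[index + 1:]
        self.tracks[index] = data
        for i, old in enumerate(saved, start=index + 1):
            self.tracks[i] = old
        return 1 + len(saved)

band = SmrBand(tracks=256)
print(band.update_track(255, b"x"))  # last track of the band: 1 rewrite
print(band.update_track(0, b"y"))    # first track: 256 rewrites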

Promising successor: HAMR

Meanwhile, IDEMA, the international organization of the disk drive, materials and equipment industry, favors heat-assisted magnetic recording (HAMR, Heat Assisted Magnetic Recording) and considers it the most likely successor to perpendicular recording technology. Mark Guinen of IDEMA's board of directors predicts that the first HAMR disks will go on sale in 2015.
Unlike SMR, HAMR solves the trilemma by shrinking the magnetic particles, which requires switching to a new material. HAMR disks need a material with higher anisotropy energy, and the most promising candidate is an alloy of iron and platinum (FePt). Anisotropy determines how much energy is required to demagnetize a material. In FePt it is so high that only particles as small as 2.5 nm run into the superparamagnetic limit (see table in the next section). This would make it possible to produce hard drives with a capacity of 30 TB at a recording density of 5 Tbit per square inch.

The problem is that the recording head alone is unable to change the magnetic orientation of FePt particles. In HAMR disks, therefore, a laser is built into the head; it momentarily heats the particles over an area of a few nanometers to a temperature of approximately 400 °C. As a result, the recording head needs much less energy to change the magnetic field of the particles. Judging by the recording density figures, heat-assisted magnetic recording drives can reach high read speeds (around 400-500 MB/s), which today is achievable only for SSDs with a SATA 3 interface.
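A rough, hedged estimate of why read speed grows with density: the media data rate is roughly the linear bit density multiplied by the velocity of the track passing under the head. The bit aspect ratio, head radius and spindle speed below are assumptions chosen for illustration.

import math

def media_rate_mb_s(density_tbit_in2: float, rpm: int = 7200, radius_in: float = 1.4, bit_aspect_ratio: float = 4.0) -> float:
    """Approximate sequential media data rate in MB/s."""
    bpi = math.sqrt(density_tbit_in2 * 1e12 * bit_aspect_ratio)  # bits per inch along the track
    velocity = 2 * math.pi * radius_in * rpm / 60                # inches per second under the head
    return bpi * velocity / 8 / 1e6

print(f"~1 Tbit/in^2: {media_rate_mb_s(1):.0f} MB/s")
print(f"~5 Tbit/in^2: {media_rate_mb_s(5):.0f} MB/s")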

In addition to the laser, a spin-torque oscillator emitting microwaves can also make it possible to write to FePt-alloy platters. The microwaves change the magnetic characteristics of the particles in such a way that even a weak recording head can easily remagnetize them; overall, the oscillator roughly triples the efficiency of the recording head. Microwave Assisted Magnetic Recording (MAMR), unlike HAMR, is still at the development stage.

New metal alloy for disks with heat-assisted magnetic recording

The FePt alloy in the HAMR disk has a higher anisotropic energy and increased magnetization ability. Compared to the perpendicular recording method, smaller particle sizes can be used here.

What comes after HAMR?

Bit-Patterned Media (BPM) technology has long been considered the most promising. It offers a different solution to the trilemma: the magnetic particles are separated from one another by an insulating layer of silicon oxide. Unlike traditional magnetic disks, the magnetizable areas are deposited using lithography, much as chips are manufactured, which makes BPM media quite expensive to produce. BPM makes it possible to reduce the number of particles per bit while avoiding interference from the noise of neighboring particles. The only remaining problem is creating a read/write head that can address individual BPM bits with sufficient accuracy. BPM is therefore currently seen as the most likely successor to HAMR. Combining both technologies could yield a recording density of 10 terabits per square inch and disks with a capacity of 60 terabytes.

A new area of research is Two-Dimensional Magnetic Recording (TDMR), which tackles the trilemma by attacking the signal-to-noise problem. With only a few particles per bit, the read head receives an unclear signal: it is weak and gets lost in the noise of neighboring particles. The special feature of TDMR is its ability to recover such a lost signal. This requires either multiple passes of the read head or readings from an array of read heads, which together form a two-dimensional image of the surface; from these images the decoder reconstructs the corresponding bits.
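A toy illustration (not the actual TDMR signal processing) of the underlying idea: combining several noisy readings of the same bits raises the effective signal-to-noise ratio enough for simple thresholding to recover them. All values here are arbitrary.

import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(32)]
signal = [1.0 if b else -1.0 for b in bits]

def noisy_read(sig, noise=2.0):
    return [s + random.gauss(0, noise) for s in sig]

def decode(readings):
    # Average the readings sample by sample, then threshold at zero.
    avg = [sum(col) / len(col) for col in zip(*readings)]
    return [1 if v > 0 else 0 for v in avg]

single = decode([noisy_read(signal)])
combined = decode([noisy_read(signal) for _ in range(16)])
print("bit errors from 1 reading:  ", sum(a != b for a, b in zip(bits, single)))
print("bit errors from 16 readings:", sum(a != b for a, b in zip(bits, combined)))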

The principle of operation of hard drives is similar to that of tape recorders. The working surface of the disk moves relative to the head (for example, an inductive head with a gap in its magnetic circuit). When an alternating electric current is applied to the head coil during recording, the alternating magnetic field emerging from the head gap acts on the ferromagnetic layer of the disk surface and changes the direction of the domain magnetization vector according to the signal strength. During reading, the movement of the domains past the head gap changes the magnetic flux in the head's magnetic circuit, which, through electromagnetic induction, produces an alternating electrical signal in the coil.
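A minimal numeric sketch of the induction readback just described: by Faraday's law the voltage induced in the coil is proportional to the rate of change of magnetic flux, so the head responds to domain transitions rather than to the domains themselves. The coil turn count and the flux waveform are illustrative assumptions.

import math

N_TURNS = 20   # assumed number of turns in the head coil
DT = 1e-9      # time step between flux samples, seconds

# Toy flux waveform: the flux through the head reverses at a single domain transition.
flux = [1e-12 * math.tanh((i * DT - 5e-8) / 5e-9) for i in range(100)]

# Faraday's law: emf = -N * dPhi/dt, here as a finite difference.
emf = [-N_TURNS * (flux[i + 1] - flux[i]) / DT for i in range(len(flux) - 1)]

print(f"peak readback voltage: ~{max(abs(v) for v in emf) * 1e3:.1f} mV")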

More recently, the magnetoresistive effect has come to be used for reading, and drives now employ magnetoresistive heads, in which a change in the magnetic field causes a change in electrical resistance that depends on the field strength. Such heads improve the reliability of reading (especially at high recording densities).

Parallel recording method
Bits of information are recorded using a small head, which, passing over the surface of a rotating disk, magnetizes billions of horizontal discrete areas - domains. Each of these regions is a logical zero or one, depending on the magnetization.

The maximum recording density achievable with this method is about 23 Gbit/cm². The method is currently being gradually replaced by perpendicular recording.

Perpendicular recording method
The perpendicular recording method is a technology in which bits of information are stored in vertically oriented domains. This allows stronger magnetic fields to be used and reduces the area of material required to record one bit. The recording density of current samples is about 60 Gbit/cm². Hard drives with perpendicular recording have been on the market since 2005.

Thermal magnetic recording method
The heat-assisted magnetic recording method (Heat-Assisted Magnetic Recording, HAMR) is currently the most promising of the existing approaches and is under active development. It uses spot heating of the disk, which allows the head to magnetize very small areas of its surface. Once the disk cools, the magnetization is "fixed." Hard drives of this type have not yet reached the market (as of 2009); there are only experimental samples with a recording density of 150 Gbit/cm². HAMR has been in development for quite some time, but experts still differ in their estimates of the maximum achievable recording density: Hitachi puts the limit at 2.3-3.1 Tbit/cm², while representatives of Seagate Technology suggest that the recording density of HAMR media can be raised to 7.75 Tbit/cm². Widespread use of this technology should be expected in 2011-2012.
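Because this part of the article quotes densities per square centimetre while earlier sections use densities per square inch, a small conversion helper (1 in = 2.54 cm exactly) makes the figures comparable:

CM2_PER_IN2 = 2.54 ** 2  # 6.4516 square centimetres per square inch

def per_cm2_to_per_in2(gbit_per_cm2: float) -> float:
    return gbit_per_cm2 * CM2_PER_IN2

for d in (23, 60, 150):
    print(f"{d} Gbit/cm^2 = {per_cm2_to_per_in2(d):.0f} Gbit/in^2")

So, for example, the 150 Gbit/cm² experimental HAMR samples correspond to roughly 1 Tbit per square inch, the boundary that perpendicular recording is approaching.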