What is a file system. File systems. File system structure

One of the OS components is the file system - the main storage of system and user information. All modern operating systems work with one or more file systems, for example, FAT (File Allocation Table), NTFS (NT File System), HPFS (High Performance File System), NFS (Network File System), AFS (Andrew File System), Internet File System.

The file system is a part of the operating system, the purpose of which is to provide the user with a convenient interface when working with data stored in external memory and to allow files to be shared among multiple users and processes.

In a broad sense, the concept of "file system" includes:

The collection of all files on the disk;

Sets of data structures used to manage files, such as file directories, file descriptors, free and used disk space allocation tables;

A set of system software tools that implement file management, in particular: creation, destruction, reading, writing, naming, searching and other operations on files.

The file system is usually used both when loading the OS after turning on the computer, and during operation. The file system performs the following main functions:

Determines possible ways to organize files and file structure on the media;

Implements methods for accessing file contents and provides tools for working with files and file structure. In this case, access to data can be organized by the file system both by name and by address (number of sector, surface and track of the media);

Monitors free space on storage media.

When an application program accesses a file, it has no idea how the information in a particular file is located, nor what type of physical media (CD, hard disk, or flash memory unit) it is stored on. All the program knows is the file name, its size and attributes. It receives this data from the file system driver. It is the file system that determines where and how the file will be written on physical media (for example, a hard drive).

From the operating system's point of view, the entire disk is a set of clusters (memory areas) of 512 bytes or larger. File system drivers organize clusters into files and directories (which are actually files containing a list of the files in that directory). These same drivers keep track of which clusters are currently in use, which are free, and which are marked as faulty. To clearly understand how data is stored on disks and how the OS provides access to it, it is necessary to understand, at least in general terms, the logical structure of the disk.


3.1.5 Disk logical structure

In order for a computer to store, read and write information, the hard drive must first be partitioned. Partitions are created on it using appropriate programs - this is called “partitioning the hard drive”. Without this partitioning, it will not be possible to install the operating system on the hard drive (although Windows XP and 2000 can be installed on an unpartitioned disk, they do this partitioning themselves during the installation process).

The hard drive can be divided into several partitions, each of which will be used independently. What is this for? One disk can contain several different operating systems located on different partitions. The internal structure of a partition allocated to any OS is completely determined by that operating system.

In addition, there are other reasons for partitioning a disk, for example:

The ability to use disks larger than the 32 MB limit of early MS-DOS versions;

If one logical disk is damaged, only the information on that disk is lost;

Reorganizing and backing up a small disk is easier and faster than a large one;

Each user can be assigned their own logical drive.

The operation of preparing a disk for use is called formatting, or initialization. All available disk space is divided into sides, tracks and sectors, with tracks and sides numbered starting from zero, and sectors starting from one. A set of tracks located at the same distance from the axis of a disk or disk pack is called a cylinder. Thus, the physical address of a sector is determined by the following coordinates: track number (cylinder, C), disk side number (head, H), and sector number (S), i.e. CHS.
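The CHS coordinates described above map to a linear sector number (LBA) in a fixed way. A minimal sketch, with illustrative (made-up) geometry values:

```python
# Sketch: converting a CHS address (cylinder, head, sector) to a linear
# block number (LBA). The geometry constants below are illustrative,
# not taken from any real drive.

HEADS_PER_CYLINDER = 16     # number of disk sides (heads)
SECTORS_PER_TRACK = 63      # sectors on one track; numbered from 1

def chs_to_lba(c: int, h: int, s: int) -> int:
    """Sectors are numbered from 1, tracks and sides from 0."""
    return (c * HEADS_PER_CYLINDER + h) * SECTORS_PER_TRACK + (s - 1)

# The very first sector of the disk (C=0, H=0, S=1) is LBA 0 -- the MBR.
print(chs_to_lba(0, 0, 1))   # 0
print(chs_to_lba(1, 0, 1))   # 16 * 63 = 1008
```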

The very first sector of the hard disk (C=0, H=0, S=1) contains the master boot record (Master Boot Record). This record does not occupy the entire sector, only its initial part. The Master Boot Record contains a small non-system bootstrap loader program.

At the end of the first sector of the hard drive is the disk partition table (Partition Table). This table contains four rows, describing a maximum of four partitions. Each row of the table describes one partition:

1) whether the partition is active or not;

2) the number of the sector corresponding to the beginning of the partition;

3) the number of the sector corresponding to the end of the partition;

4) the partition size in sectors;

5) the operating system code, i.e. which OS the partition belongs to.

A partition is called active if it contains an operating system boot program. The first byte of a partition entry is the activity flag (0 – inactive, 128 (80H) – active). It is used to determine whether the partition is the system (bootable) one and to make the operating system boot from it when the computer starts. Only one partition can be active. Small programs called boot managers may be located in the first sectors of the disk. They interactively ask the user which partition to boot from and adjust the partition activity flags accordingly. Since the Partition Table has four rows, there can be up to four different operating systems on the disk; accordingly, the disk can contain several primary partitions belonging to different operating systems.
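The fields just listed can be read directly from the first sector. A sketch, assuming the classic MBR layout (the table at byte offset 446, four 16-byte entries, a 55AAH signature at the end); the sample sector is constructed by hand purely for illustration:

```python
import struct

# Sketch: reading the four partition-table entries from an MBR sector.
# Field offsets follow the classic MBR layout.

def parse_partition_table(sector: bytes):
    assert sector[510:512] == b"\x55\xaa", "not a valid MBR"
    entries = []
    for i in range(4):
        raw = sector[446 + i * 16: 446 + (i + 1) * 16]
        # boot flag, (skip CHS start), type code, (skip CHS end),
        # first sector number, size in sectors
        flag, ptype, start_lba, size = struct.unpack("<B3xB3xII", raw)
        entries.append({
            "active": flag == 0x80,   # 80H marks the bootable partition
            "os_code": ptype,         # which OS/file system owns it
            "first_sector": start_lba,
            "size_in_sectors": size,
        })
    return entries

# Build a toy MBR with one active Linux (0x83) partition of 2048 sectors.
sector = bytearray(512)
sector[446:462] = struct.pack("<B3sB3sII", 0x80, b"\0\0\0", 0x83,
                              b"\0\0\0", 63, 2048)
sector[510:512] = b"\x55\xaa"
parts = parse_partition_table(bytes(sector))
print(parts[0]["active"], hex(parts[0]["os_code"]),
      parts[0]["size_in_sectors"])   # True 0x83 2048
```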

An example of the logical structure of a hard disk consisting of three partitions, two of which belong to DOS and one belongs to UNIX, is shown in Figure 3.2a.

Each active partition has its own boot record - a program that loads a given OS.

In practice, the disk is most often divided into two partitions. The sizes of the partitions, and whether they are declared active or not, are set by the user while preparing the hard drive for use. This is done using special programs: in DOS this program is called FDISK, and in Windows NT versions it is called Disk Administrator.

In DOS, the primary partition (Primary Partition) is the partition that contains the operating system loader and the OS itself. Thus, the primary partition is the active partition, used as the logical drive named C:.

The WINDOWS operating system (namely WINDOWS 2000) has changed the terminology: the active partition is called the system partition, and the boot partition is the logical disk that contains the WINDOWS system files. The boot logical drive can be the same as the system partition, but it can be located on a different partition of the same hard drive or on a different hard drive.

The extended partition (Extended Partition) can be divided into several logical drives with names from D: to Z:.

Figure 3.2b shows the logical structure of a hard drive, which has only two partitions and four logical drives.

General remarks. Computer science defines three main types of data structures: linear, tabular, and hierarchical. A book offers an example of each: the sequence of its pages is a linear structure; its parts, sections, chapters and paragraphs form a hierarchy; and the table of contents is a table connecting the hierarchical structure with the linear one. Structured data acquires a new attribute, an address. Thus:

      Linear structures (lists, vectors). In an ordinary list, the address of each element is uniquely determined by its number. If all elements of the list have equal length, the structure is a data vector.

      Tabular structures (tables, matrices). A table differs from a list in that each element is identified by an address consisting of not one but several parameters. The most common example is a matrix, where the address consists of two parameters: the row number and the column number. Tables with more parameters are multidimensional.

      Hierarchical structures. Used to represent irregular data. The address of an element is determined by the route from the top of the tree to that element. A computer file system is a typical example. (The route can exceed the size of the data itself; in a dichotomous tree there are always exactly two branches, left and right.)

Ordering data structures. The main method of ordering is sorting. Note that when a new element is added to an ordered structure, the addresses of existing elements may change. For hierarchical structures, indexing is used instead: each element receives a unique number, which is then used in sorting and searching.
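The note about addresses changing on insertion, and indexing as the alternative, can be illustrated with a short sketch (the values are arbitrary):

```python
import bisect

# Inserting into an ordered (sorted) list shifts the positions
# ("addresses") of existing elements, while a separate index of
# permanent numbers stays valid.

data = [10, 30, 40]          # sorted list; element 30 sits at address 1
bisect.insort(data, 20)      # add a new element, keeping the order
print(data.index(30))        # 2 -- the address of 30 has changed

# Indexing: give each element a permanent unique number instead,
# and sort/search by key without ever moving the records themselves.
records = {0: 10, 1: 30, 2: 40, 3: 20}       # number -> value
by_value = sorted(records, key=records.get)  # numbers in value order
print(by_value)              # [0, 3, 1, 2] -- record 1 keeps its number
```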

    Basic elements of a file system

The historical first step in data storage and management was the use of file management systems.

A file is a named area of external memory that can be written to and read from. A file has three defining properties:

    it is a sequence of an arbitrary number of bytes;

    it has a unique proper name (in effect, an address);

    its data are of a single type, the file type.

The rules for naming files, how the data stored in a file is accessed, and the structure of that data depend on the particular file management system and possibly on the file type.

The first developed file system in the modern sense was created by IBM for its System/360 series (1965-1966), but it is practically unused in current systems. It used list data structures (EC volume, section, file).

Most of you are familiar with the file systems of modern operating systems: primarily MS DOS and Windows, and some of you with the file system design of various UNIX variants.

File structure. A file represents a collection of data blocks located on external media. To exchange with a magnetic disk at the hardware level, you need to specify the cylinder number, surface number, block number on the corresponding track and the number of bytes that need to be written or read from the beginning of this block. Therefore, all file systems explicitly or implicitly allocate some basic level that ensures work with files that represent a set of directly addressable blocks in the address space.

Naming files. All modern file systems support multi-level file naming by maintaining additional files with a special structure - directories - in external memory. Each directory contains the names of the directories and/or files contained in it. Thus, the full name of a file consists of a list of directory names plus the name of the file in the directory immediately containing it. File systems differ in where this chain of names begins: at a single common root in Unix, or at a per-drive root in DOS and Windows.

File protection. File management systems must provide authorization for access to files. In general, the approach is that in relation to each registered user of a given computer system, for each existing file, actions that are allowed or prohibited for this user are indicated. There have been attempts to implement this approach in full. But this caused too much overhead both in storing redundant information and in using this information to control access eligibility. Therefore, most modern file management systems use the file protection approach first implemented in UNIX (1974). In this system, each registered user is associated with a pair of integer identifiers: the identifier of the group to which this user belongs, and his own identifier in the group. Accordingly, for each file, the full identifier of the user who created this file is stored, and it is noted what actions he himself can perform with the file, what actions with the file are available to other users of the same group, and what users of other groups can do with the file. This information is very compact, requires few steps during verification, and this method of access control is satisfactory in most cases.
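The UNIX-style scheme just described can be sketched as follows; the rwx bit values are the classic convention, and the concrete identifiers and numbers are invented for illustration:

```python
# Sketch of the UNIX-style check described above: each file stores the
# (group, user) identifiers of its creator plus three permission sets
# (owner / group / other), and a request is checked against the
# appropriate set.

R, W, X = 4, 2, 1  # classic read/write/execute bit values

def may_access(file, user_gid, user_uid, wanted):
    if (user_gid, user_uid) == (file["gid"], file["uid"]):
        granted = file["owner"]          # the creator's own rights
    elif user_gid == file["gid"]:
        granted = file["group"]          # same group as the owner
    else:
        granted = file["other"]          # users of other groups
    return granted & wanted == wanted

f = {"gid": 100, "uid": 7, "owner": R | W, "group": R, "other": 0}
print(may_access(f, 100, 7, W))    # True  -- the owner may write
print(may_access(f, 100, 9, R))    # True  -- group members may read
print(may_access(f, 200, 1, R))    # False -- other groups get nothing
```

This information is indeed compact: two identifiers per user and three small bit sets per file.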

Multi-user access mode. If the operating system supports multi-user mode, it is quite possible for two or more users to try to work with the same file simultaneously. If all of these users only intend to read the file, nothing bad happens. But if at least one of them changes the file, mutual synchronization is required for the group to work correctly. Historically, file systems have taken the following approach: the operation of opening a file (the first and mandatory operation with which any session of work on a file begins) includes, among other parameters, the access mode (reading or writing). In addition, there are special procedures for synchronizing user actions. Synchronization at the level of individual records, however, is not provided.

    Journaling in file systems. General principles.

Running a system check (fsck) on large file systems can take a long time, which is unfortunate given today's high-speed systems. The usual reason a file system loses integrity is incorrect unmounting - for example, the disk was being written to at the moment of shutdown. Applications could be updating the data contained in files, and the system could be updating file system metadata - "data about file system data", in other words, information about which blocks belong to which files, which files are located in which directories, and the like. Errors (loss of integrity) in data files are bad, but errors in file system metadata are much worse: they can lead to file loss and other serious problems.

To minimize integrity issues and minimize system restart time, a journaled file system maintains a list of the changes it is going to make to the file system before actually writing them. These records are stored in a separate part of the file system called a "journal" or "log". Once these journal entries are safely written, the journaling file system applies the changes to the file system and then deletes the entries from the journal. Journal entries are organized into sets of related file system changes, much as changes added to a database are organized into transactions.

A journaled file system increases the likelihood of integrity because log file entries are made before changes are made to the file system, and because the file system retains those entries until they are fully and securely applied to the file system. When you reboot a computer that uses a journaled file system, the mount program can ensure the integrity of the file system by simply checking the log file for changes that were expected but not made and writing them to the file system. In most cases, the system does not need to check the integrity of the file system, which means that a computer using a journaled file system will be available for use almost immediately after a reboot. Accordingly, the chances of data loss due to problems in the file system are significantly reduced.

The classic form of a journaled file system stores only changes to file system metadata in the journal; a stricter form also journals changes to all file system data, including changes to the files themselves.
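The journal-then-apply cycle described above can be sketched in a few lines. This is a toy model of the idea, not any real file system's on-disk format:

```python
# Minimal sketch of journaling: changes are first written durably to a
# journal, then applied to the "file system", then removed from the
# journal. After a crash, replaying the journal at mount time restores
# any changes that were logged but not yet applied.

filesystem = {}   # stands in for on-disk metadata
journal = []      # stands in for the on-disk log area

def commit(changes):
    journal.append(dict(changes))   # 1. journal entry safely written
    filesystem.update(changes)      # 2. changes applied to the FS
    journal.pop()                   # 3. entry deleted from the journal

def recover():
    # At mount time: apply anything left in the journal, then clear it.
    while journal:
        filesystem.update(journal.pop(0))

commit({"/a": "data1"})
journal.append({"/b": "data2"})     # simulate a crash after step 1
recover()
print(filesystem)                   # {'/a': 'data1', '/b': 'data2'}
```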

    File system MS-DOS (FAT)

The MS-DOS file system is a tree-structured file system for small disks and simple directory structures: the root of the tree is the root directory, and the leaves are files and (possibly empty) directories. Files managed by this file system are placed in clusters, whose size can range from 512 bytes to 64 KB; contiguous allocation is not required, so a file's clusters may be scattered across the disk. For example, the figure shows three files. File1.txt is fairly large: it occupies three consecutive blocks. The small file File3.txt uses the space of only one allocated block. The third file, File2.txt, is a large fragmented file. In each case, the directory entry points to the first allocated block belonging to the file. If a file uses several blocks, each block points to the next one in the chain. A special value (FFF in FAT12) marks the end of the sequence.

FAT disk partition

For efficient access to files, a file allocation table (File Allocation Table) is used, which is located at the beginning of the partition (or logical drive). It is from the name of this table that the name of the file system, FAT, comes. To protect the partition, two copies of the FAT are stored on it, in case one becomes corrupted. In addition, the file allocation tables must be placed at strictly fixed addresses so that the files necessary to start the system are located correctly.

The file allocation table consists of 16-bit elements and contains the following information about each logical disk cluster:

    the cluster is not used;

    the cluster is used by the file;

    bad cluster;

    the last cluster of a file.

Since each cluster must be assigned a unique 16-bit number, FAT supports a maximum of 2^16, or 65,536, clusters on one logical disk (and also reserves some of the clusters for its own needs). Thus, the maximum disk size served by MS-DOS is 4 GB. The cluster size can be increased or decreased depending on the disk size. However, when the disk size exceeds a certain value, the clusters become too large, which leads to internal fragmentation: space is wasted inside partly filled clusters. In addition to information about files, the file allocation table can also contain information about directories. Directories are treated as special files with a 32-byte entry for each file contained in the directory. The root directory has a fixed size - 512 entries on a hard disk; on floppy disks its size is determined by the size of the floppy disk. Additionally, the root directory is located immediately after the second copy of the FAT, because it contains the files needed by the MS-DOS boot loader.
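Following a FAT cluster chain, as in the fragmented-file example earlier, can be sketched like this (the table contents are invented; 0xFFFF is used here as the 16-bit end-of-chain marker):

```python
# Sketch of walking a FAT cluster chain: each table element holds the
# number of the next cluster of the file, and a special marker value
# identifies the last cluster.

EOC = 0xFFFF           # end-of-chain marker (16-bit FAT)
FREE = 0x0000          # cluster not used

# A toy allocation table: a fragmented file starting at cluster 2.
fat = {2: 5, 3: FREE, 4: FREE, 5: 9, 9: EOC}

def cluster_chain(start):
    chain, cur = [], start
    while cur != EOC:
        chain.append(cur)
        cur = fat[cur]     # follow the link to the next cluster
    return chain

print(cluster_chain(2))   # [2, 5, 9]
```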

When searching for a file on a disk, MS-DOS must traverse the directory structure to find it. For example, to run the executable file C:\Program\NC4\nc.exe, MS-DOS does the following:

    reads the root directory of the C: drive and looks for the Program directory in it;

    reads the initial cluster Program and looks in this directory for an entry about the NC4 subdirectory;

    reads the initial cluster of the NC4 subdirectory and looks for an entry for the nc.exe file in it;

    reads all clusters of the nc.exe file.
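The four lookup steps above can be sketched with nested dictionaries standing in for directory clusters (the placeholder for the file's contents is, of course, invented):

```python
# Sketch of the directory walk described above: each directory is read
# in turn to find the entry for the next name in C:\Program\NC4\nc.exe.

root = {                       # root directory of drive C:
    "Program": {               # a subdirectory (itself a file of entries)
        "NC4": {
            "nc.exe": "<clusters of nc.exe>",
        },
    },
}

def lookup(directory, path):
    entry = directory
    for name in path.split("\\"):   # one directory read per component
        entry = entry[name]         # find the entry, follow its cluster
    return entry

print(lookup(root, "Program\\NC4\\nc.exe"))   # <clusters of nc.exe>
```

Each component of the path costs at least one directory read, which is why deep nesting slows the search down.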

This search method is not the fastest among current file systems, and the greater the depth of the directories, the slower the search becomes. To speed up searches, you should maintain a balanced directory structure.

Advantages of FAT

    It is the best choice for small logical drives, because it starts with minimal overhead. On disks no larger than about 500 MB it performs acceptably.

Disadvantages of FAT

    Since the size of a file entry is limited to 32 bytes, and the information must include the file size, date, attributes, etc., the size of the file name is also limited and cannot exceed 8+3 characters for each file. The use of so-called short file names makes FAT less attractive to use than other file systems.

    Using FAT on disks larger than 500 MB is irrational because of fragmentation and the space wasted by large clusters.

    The FAT file system does not have any security features and supports minimal information security capabilities.

    The speed of operations in FAT decreases as the depth of directory nesting and the size of the disk grow.

    UNIX file systems (ext3)

The modern, powerful and free Linux operating system provides a wide area for the development of modern systems and custom software. Some of the most exciting developments in recent Linux kernels are new, high-performance technologies for managing the storage, placement, and updating of data on disk. One of the most interesting mechanisms is the ext3 file system, which has been integrated into the Linux kernel since version 2.4.16, and is already available by default in Linux distributions from Red Hat and SuSE.

The ext3 file system is a journaling file system, 100% compatible with all utilities created to create, manage and fine-tune the ext2 file system, which has been used on Linux systems for the last several years. Before describing in detail the differences between the ext2 and ext3 file systems, let us clarify the terminology of file systems and file storage.

At the system level, all data on a computer exists as blocks of data on some storage device, organized using special data structures into partitions (logical sets on a storage device), which in turn are organized into files, directories and unused (free) space.

File systems are created on disk partitions to simplify the storage and organization of data in the form of files and directories. Linux, like Unix, uses a hierarchical file system made up of files and directories, which in turn contain either files or directories. Files and directories in a Linux file system are made available to the user by mounting them (the "mount" command), which is usually part of the system boot process. The list of file systems available for mounting is stored in the /etc/fstab file (FileSystem TABle). The list of file systems currently mounted is stored in the /etc/mtab file (Mount TABle).

When a filesystem is mounted during boot, a bit in the header (the "clean bit") is cleared, indicating that the filesystem is in use, and that the data structures used to control the placement and organization of files and directories within that filesystem can be changed.

A file system is considered consistent if every data block in it is either in use or free; each allocated data block belongs to one and only one file or directory; and every file and directory can be reached by traversing a chain of other directories in the file system. When a Linux system is deliberately shut down using operator commands, all file systems are unmounted. Unmounting a file system during shutdown sets the "clean bit" in the file system header, indicating that the file system was properly unmounted and can therefore be considered intact.

Years of file system debugging and redesign, and the use of improved algorithms for writing data to disk, have greatly reduced data corruption caused by applications or the Linux kernel itself, but eliminating corruption and data loss due to power outages and other system problems is still a challenge. In the event of a crash, or a simple shutdown of a Linux system without the standard shutdown procedures, the "clean bit" is not set in the file system header. The next time the system boots, the mount process detects that the file system is not marked as "clean" and physically checks its integrity using the Linux/Unix file system check utility "fsck" (File System Check).
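The clean-bit mechanism described above can be sketched as follows; this is a toy model of the logic, not real mount code:

```python
# Sketch of the "clean bit" logic: mounting clears the bit, a proper
# unmount sets it, and at boot an unset bit triggers an fsck check.

fs = {"clean": True, "needs_fsck": False}

def mount(fs):
    if not fs["clean"]:          # previous session ended badly
        fs["needs_fsck"] = True  # integrity must be checked first
    fs["clean"] = False          # mark the file system as in use

def unmount(fs):
    fs["clean"] = True           # proper shutdown: set the clean bit

mount(fs); unmount(fs)           # a normal session: no check needed
mount(fs)                        # crash here: unmount is never called
mount(fs)                        # next boot
print(fs["needs_fsck"])          # True -- fsck must run
```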

There are several journaling file systems available for Linux. The best known are XFS, a journaling file system developed by Silicon Graphics but now released as open source; ReiserFS, a journaling file system designed specifically for Linux; JFS, a journaling file system originally developed by IBM but now released as open source; and ext3, a file system developed by Stephen Tweedie at Red Hat, along with several others.

The ext3 file system is a journaled Linux version of the ext2 file system. The ext3 file system has one significant advantage over other journaling file systems - it is fully compatible with the ext2 file system. This makes it possible to use all existing applications designed to manipulate and customize the ext2 file system.

The ext3 filesystem is supported by Linux kernels version 2.4.16 and later, and must be enabled using the Filesystems Configuration dialog when building the kernel. Linux distributions such as Red Hat 7.2 and SuSE 7.3 already include native support for the ext3 file system. You can only use the ext3 filesystem if ext3 support is built into your kernel and you have the latest versions of the "mount" and "e2fsprogs" utilities.

In most cases, converting file systems from one format to another entails backing up all contained data, reformatting the partitions or logical volumes containing the file system, and then restoring all data to that file system. Due to the compatibility of the ext2 and ext3 file systems, all these steps do not need to be carried out, and the translation can be done using a single command (run with root privileges):

# /sbin/tune2fs -j <partition-name>

For example, converting an ext2 file system located on the /dev/hda5 partition to an ext3 file system can be done using the following command:

# /sbin/tune2fs -j /dev/hda5

The "-j" option to the "tune2fs" command creates an ext3 journal on an existing ext2 filesystem. After converting the ext2 file system to ext3, you must also make changes to the /etc/fstab file entries to indicate that the partition is now an "ext3" file system. You can also use auto detection of the partition type (the “auto” option), but it is still recommended to explicitly specify the file system type. The following example /etc/fstab file shows the changes before and after a file system transfer for the /dev/hda5 partition:

/dev/hda5 /opt ext2 defaults 1 2

/dev/hda5 /opt ext3 defaults 1 0

The last field in /etc/fstab specifies the step in the boot process during which the integrity of the file system should be checked using the "fsck" utility. When using ext3 file system, you can set this value to "0" as shown in the previous example. This means that the "fsck" program will never check the integrity of the filesystem, due to the fact that the integrity of the filesystem is guaranteed by rolling back the journal.

Converting the root file system to ext3 requires a special approach, and is best done in single user mode after creating a RAM disk that supports the ext3 file system.

In addition to being compatible with ext2 file system utilities and easy file system translation from ext2 to ext3, the ext3 file system also offers several different types of journaling.

The ext3 file system supports three different journaling modes that can be activated from the /etc/fstab file. These logging modes are as follows:

    Journal (data=journal) - records all changes to both file system data and metadata. The slowest of the three journaling modes. This mode minimizes the chance of losing the changes you make to files.

    Ordered (data=ordered) - journals changes to file system metadata only, but writes file data updates to disk before the changes to the associated metadata. This is the default ext3 journaling mode.

    Writeback (data=writeback) - journals only changes to file system metadata, while file data is written by the normal (non-journaled) write process. This is the fastest journaling mode.

The differences between these journaling modes are both subtle and profound. Using journal mode requires the ext3 file system to write every change twice - first to the journal and then to the file system itself. This can reduce the overall performance of your file system, but this mode is most loved by users because it minimizes the chance of losing changes to your files: both metadata changes and file data changes are written to the ext3 journal and can be replayed when the system is rebooted.

Using the "ordered" mode, only changes to file system metadata are recorded, which reduces the redundancy between writing to the file system and to the journal, which is why this mode is faster. Although changes to file data are not written to the journal, they must be made before the changes to the associated file system metadata are made by the ext3 journaling daemon, which may slightly reduce the performance of your system. Using this journaling mode ensures that files on the file system are never out of sync with the associated file system metadata.

The writeback method is faster than the other two journaling methods because it only stores changes to file system metadata, and does not wait for the file's associated data to change on write (before updating things like file size and directory information). Since file data is updated asynchronously with respect to journaled changes to the file system's metadata, files in the file system may show errors in the metadata, for example, an error in indicating the owner of data blocks (the update of which was not completed at the time the system was rebooted). This is not fatal, but may interfere with the user's experience.

The journaling mode used on an ext3 file system is specified in that file system's entry in the /etc/fstab file. "Ordered" mode is the default, but you can select a different mode by changing the options for the desired partition in /etc/fstab. For example, an entry in /etc/fstab specifying the writeback journaling mode would look like this:

/dev/hda5 /opt ext3 data=writeback 1 0

    Windows NT Family File System (NTFS)

      Physical structure of NTFS

Let's start with general facts. An NTFS partition, in theory, can be almost any size. Of course, there is a limit, but I won’t even indicate it, since it will be sufficient for the next hundred years of development of computer technology - at any growth rate. How does this work in practice? Almost the same. The maximum size of an NTFS partition is currently limited only by the size of the hard drives. NT4, however, will experience problems when trying to install on a partition if any part of it is more than 8 GB from the physical beginning of the disk, but this problem only affects the boot partition.

Lyrical digression. The method of installing NT4.0 on an empty disk is quite original and can lead to wrong conclusions about the capabilities of NTFS. If you tell the installer that you want to format the drive to NTFS, the maximum size it will offer you is only 4 GB. Why so small, if the size of an NTFS partition is actually practically unlimited? The fact is that the installer simply does not know this file system :) It formats the disk into regular FAT, whose maximum size in NT is 4 GB (using a not quite standard huge 64 KB cluster), and installs NT onto that FAT. But already during the first boot of the operating system itself (still in the installation phase), the partition is quickly converted to NTFS, so the user notices nothing except the strange "limitation" on the NTFS size during installation. :)

      Partition structure - general view

Like any other system, NTFS divides all useful space into clusters - blocks of data used at a time. NTFS supports almost any cluster size - from 512 bytes to 64 KB, while a 4 KB cluster is considered a certain standard. NTFS does not have any anomalies in the cluster structure, so there is not much to say on this, in general, rather banal topic.

An NTFS disk is conventionally divided into two parts. The first 12% of the disk is allocated to the so-called MFT zone - the space into which the MFT metafile grows (more on this below). It is not possible to write any data to this area. The MFT zone is always kept empty - this is done so that the most important service file (MFT) does not become fragmented as it grows. The remaining 88% of the disk is normal file storage space.

Free disk space, however, includes all physically free space, including the unfilled pieces of the MFT zone. The mechanism for using the MFT zone is as follows: when files can no longer be written to the regular space, the MFT zone is simply reduced (in current versions of operating systems, by exactly half), freeing up space for writing files. When space is freed up in the regular area, the MFT zone can expand again. At the same time, it is possible that ordinary files remain in this zone, and there is no anomaly here: the system tried to keep it free, but did not succeed. Life goes on... The MFT metafile can still become fragmented, although this is undesirable.
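The shrink-by-half behaviour of the MFT zone can be sketched as a toy allocator. The sizes are illustrative percentages, not real disk units:

```python
# Sketch: ordinary files go to the regular area; when it fills up, the
# reserved MFT zone is halved and the freed half becomes regular space.

regular_free = 88   # units of free space outside the MFT zone
mft_zone = 12       # reserved zone (roughly 12% of the disk)

def allocate(size):
    global regular_free, mft_zone
    while regular_free < size and mft_zone > 0:
        released = mft_zone // 2 or mft_zone  # halve the reserved zone
        mft_zone -= released
        regular_free += released              # freed half is now regular
    if regular_free < size:
        raise OSError("disk full")
    regular_free -= size

allocate(90)                   # overflows the regular area
print(mft_zone, regular_free)  # 6 4 -- the zone shrank by half
```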

      MFT and its structure

The NTFS file system is an outstanding achievement of structuring: every element of the system is a file - even service information. The most important file on NTFS is called MFT, or Master File Table - a general table of files. It is located in the MFT zone and is a centralized directory of all other disk files, and, paradoxically, itself. The MFT is divided into fixed-size entries (usually 1 KB), and each entry corresponds to a file (in the general sense of the word). The first 16 files are of a service nature and are inaccessible to the operating system - they are called metafiles, with the very first metafile being the MFT itself. These first 16 MFT elements are the only part of the disk that has a fixed position. Interestingly, the second copy of the first three records, for reliability (they are very important), is stored exactly in the middle of the disk. The rest of the MFT file can be located, like any other file, in arbitrary places on the disk - you can restore its position using the file itself, “hooking” on the very basis - the first MFT element.

        Metafiles

The first 16 NTFS files (metafiles) are of a service nature. Each of them is responsible for some aspect of the system's operation. The advantage of such a modular approach is its amazing flexibility - for example, on FAT, physical damage in the FAT area itself is fatal to the functioning of the entire disk, and NTFS can shift, even fragment across the disk, all of its service areas, bypassing any surface faults - except for the first 16 MFT elements.

Metafiles live in the root directory of an NTFS disk - their names begin with the character "$", although it is difficult to obtain any information about them by standard means. Curiously, these files have a very real size: you can find out, for example, how much the operating system spends on cataloging your entire disk by looking at the size of the $MFT file. The following table lists the metafiles currently in use and their purpose.

$MFT - the MFT itself
$MFTMirr - a copy of the first 16 MFT records, placed in the middle of the disk
$LogFile - logging support file (see below)
$Volume - service information: volume label, file system version, etc.
$AttrDef - list of standard file attributes on the volume
$. - root directory
$Bitmap - volume free space map
$Boot - boot sector (if the partition is bootable)
$Quota - a file that records users' rights to use disk space (began to work only in NT5)
$Upcase - a table of correspondence between uppercase and lowercase letters in file names on the current volume. It is needed mainly because file names in NTFS are written in Unicode, which amounts to 65 thousand different characters, and finding the uppercase and lowercase equivalents of each is far from trivial.

        Files and streams

So, the system has files - and nothing but files. What does this concept include on NTFS?

    First of all, the mandatory element is a record in the MFT: as mentioned earlier, all the disk's files are mentioned in the MFT. All information about a file, except its data, is stored in this record: the file name, size, the location on disk of its individual fragments, and so on. If one MFT record is not enough, several are used, and not necessarily consecutively.

    The optional element is the file's data streams. The word "optional" may seem strange, but there is nothing strange here. Firstly, a file may have no data - in that case it consumes no disk space at all. Secondly, the file may not be very large. In that case a rather elegant solution comes into play: the file's data is stored directly in the MFT, in the space left over from the main attributes within one MFT record. Files occupying a few hundred bytes usually have no "physical" embodiment in the main file area - all such a file's data is stored in one place, in the MFT.
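The resident-data rule can be sketched as a simple size check. The 1 KB record size comes from the text; the space taken up by the record header and other attributes (HEADER_AND_ATTRS) is a made-up figure for illustration only:

```python
# Sketch of the "resident data" idea: if a file's data fits in the free space
# of its 1 KB MFT record, it is stored there and consumes no extra clusters.
MFT_RECORD_SIZE = 1024
HEADER_AND_ATTRS = 360   # hypothetical space used by the header and attributes

def is_resident(file_size: int) -> bool:
    """True if the file's data can live inside its own MFT record."""
    return file_size <= MFT_RECORD_SIZE - HEADER_AND_ATTRS

print(is_resident(500))   # True  - stored inside the MFT record itself
print(is_resident(5000))  # False - stored in clusters of the main file area
```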

The situation with file data is quite interesting. Every file on NTFS has a somewhat abstract structure: it does not have data as such, but rather streams. One of the streams has the familiar meaning - the file's data. But most file attributes are also streams! Thus it turns out that a file has only one basic entity - its number in the MFT - and everything else is optional. This abstraction can be used for quite convenient things: for example, you can "attach" another stream to a file and write any data into it, such as information about the author and contents of the file, as Windows 2000 does (the rightmost tab in the file properties viewed from Explorer).

Interestingly, these additional streams are not visible by standard means: the observed file size is only the size of the main stream, which contains the traditional data. You could, for example, have a file of zero length whose erasure frees up 1 GB of space - simply because some cunning program or technology attached an additional gigabyte-sized stream (alternate data) to it. In practice, streams are hardly used at the moment, so one need not fear such situations, although they are hypothetically possible. Just keep in mind that a file on NTFS is a deeper and more global concept than one might imagine by simply browsing the disk's directories.

And finally: the file name can contain any characters, including the entire set of national alphabets, since the data is represented in Unicode - a 16-bit representation giving 65535 different characters. The maximum file name length is 255 characters.
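On Windows, an alternate stream is addressed with the name:stream syntax. Below is a minimal sketch; the write itself only succeeds on an NTFS volume under Windows, hence the platform guard:

```python
import os
import sys
import tempfile

def ads_path(filename: str, stream: str) -> str:
    """Build the name:stream syntax NTFS uses to address an alternate stream."""
    return f"{filename}:{stream}"

# The actual write only works on an NTFS volume under Windows.
if sys.platform == "win32":
    with tempfile.TemporaryDirectory() as d:
        main = os.path.join(d, "report.txt")
        with open(main, "w") as f:
            f.write("visible data")
        with open(ads_path(main, "author"), "w") as f:  # attach a hidden stream
            f.write("hidden note")
        # dir/Explorer still report only the main stream's size:
        print(os.path.getsize(main))

print(ads_path("report.txt", "author"))  # report.txt:author
```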

      Directories

An NTFS directory is a specific file storing links to other files and directories, creating a hierarchical structure of data on the disk. The directory file is divided into blocks, each containing a file name, basic attributes and a link to the MFT element, which in turn provides complete information about the directory entry. The internal structure of a directory is a binary tree. Here is what this means: to find a file with a given name in a linear directory, such as on FAT, the operating system has to look through all the entries of the directory until it finds the right one. A binary tree arranges file names so that the search is faster - by obtaining yes/no answers to questions about the location of the file. The question a binary tree can answer is: in which group, relative to a given element, does the name you are looking for lie - above or below? We start by asking this question of the middle element, and each answer narrows the search area by half on average. The files are, say, simply sorted alphabetically, and the question is answered in the obvious way - by comparing initial letters. The search area, narrowed by half, is then explored in the same way, starting again from its middle element.

Conclusion: to find one file among 1000, FAT will have to make about 500 comparisons on average (the file is most likely to be found halfway through the search), while a tree-based system needs only about 10 (2^10 = 1024). The search time savings are obvious. You should not think, however, that in traditional systems (FAT) everything is hopeless: firstly, maintaining the list of files as a binary tree is quite labor-intensive, and secondly, even FAT as implemented by a modern system (Windows 2000 or Windows 98) uses a similar search optimization. This is just another fact to add to your knowledge base. I would also like to dispel the common misconception (which I myself shared until quite recently) that adding a file to a tree-shaped directory is harder than adding it to a linear one: these operations take quite comparable time. To add a file to a directory, you first need to make sure a file with that name is not already there - and in a linear system you run into the search difficulties described above, which more than compensate for the sheer simplicity of adding a file to the directory.
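The comparison counts above are easy to reproduce. A small sketch contrasting a linear scan with a binary search over a sorted list of 1000 names:

```python
def linear_comparisons(names, target):
    """Comparisons a FAT-style linear directory scan needs to find target."""
    for i, name in enumerate(names, start=1):
        if name == target:
            return i
    return len(names)

def binary_comparisons(names, target):
    """Comparisons a sorted (tree-like) directory lookup needs."""
    count, lo, hi = 0, 0, len(names)
    while lo < hi:
        count += 1
        mid = (lo + hi) // 2
        if names[mid] == target:
            return count
        if names[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return count

names = sorted(f"file{i:04}.txt" for i in range(1000))
target = names[700]
print(linear_comparisons(names, target))  # 701
print(binary_comparisons(names, target))  # at most 10, since 2^10 = 1024
```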

What information can be obtained simply by reading a directory file? Exactly what the dir command produces. For simple disk navigation, you do not need to go into the MFT for each file; the most general information about files can be read straight from directory files. The main directory of the disk - the root - is no different from ordinary directories, except for a special link to it from the beginning of the MFT metafile.

      Logging

NTFS is a fault-tolerant system that can restore itself to a correct state in the event of almost any real failure. Any modern file system is based on the concept of a transaction - an action performed entirely and correctly or not performed at all. NTFS simply does not have intermediate (erroneous or incorrect) states - the quantum of data change cannot be divided into before and after the failure, bringing destruction and confusion - it is either committed or canceled.

Example 1: data is being written to disk. Suddenly it turns out that it was not possible to write to the place where we had just decided to write the next portion of data - physical damage to the surface. The behavior of NTFS in this case is quite logical: the write transaction is rolled back entirely - the system realizes that the write was not performed. The location is marked as failed, and the data is written to another location - a new transaction begins.

Example 2: a more complex case - data is being written to disk when, suddenly, the power goes out and the system reboots. At what phase did the write stop? Where is the data, and where is garbage? Another system mechanism comes to the rescue - the transaction log. When the system decides to write to disk, it marks this intention in the $LogFile metafile. On reboot, this file is examined for unfinished transactions that were interrupted by the accident and whose result is unpredictable - all such transactions are canceled: the place being written to is marked as free again, indexes and MFT elements are returned to their pre-failure state, and the system as a whole remains stable. But what if an error occurred while writing to the log itself? That is also fine: the transaction either has not yet started (there was only an attempt to record the intention to carry it out), or it has already ended - that is, there was an attempt to record that the transaction had in fact been completed. In the latter case, on the next boot, the system will work out that everything was actually written correctly and will ignore the "unfinished" transaction.
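The roll-back behavior described in both examples can be modeled with a toy write-ahead log. This is a deliberately simplified sketch of the idea, not of NTFS's actual $LogFile format:

```python
# Toy write-ahead log: every change records an intention first, then a commit.
# On "reboot", recover() rolls back every transaction left uncommitted.
class ToyJournalFS:
    def __init__(self):
        self.data = {}   # filename -> content ("the disk")
        self.log = []    # intention/commit records ("the $LogFile")

    def write(self, txn_id, name, content, crash_before_commit=False):
        self.log.append({"txn": txn_id, "name": name,
                         "old": self.data.get(name), "committed": False})
        self.data[name] = content          # the actual write
        if crash_before_commit:
            raise RuntimeError("power failure")
        self.log[-1]["committed"] = True   # commit record

    def recover(self):
        """Scan the log backwards, undoing every unfinished transaction."""
        for entry in reversed(self.log):
            if not entry["committed"]:
                if entry["old"] is None:
                    self.data.pop(entry["name"], None)
                else:
                    self.data[entry["name"]] = entry["old"]

fs = ToyJournalFS()
fs.write(1, "a.txt", "hello")
try:
    fs.write(2, "a.txt", "garbage", crash_before_commit=True)
except RuntimeError:
    pass          # the "power failure"
fs.recover()      # the "reboot"
print(fs.data["a.txt"])  # hello - the interrupted write was rolled back
```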

Still, remember that logging is not an absolute panacea, only a means of significantly reducing the number of errors and system failures. The average NTFS user will hardly ever see a system error or be forced to run chkdsk - experience shows that NTFS recovers to a fully correct state even after failures during moments of intense disk activity. You can even defragment the disk and press reset in the middle of the process - the likelihood of data loss even then is very low. It is important to understand, however, that the NTFS recovery system guarantees the correctness of the file system, not of your data. If you were writing to disk and got a crash, your data may not be written. There are no miracles.

      Compression

NTFS files have one quite useful attribute - "compressed". NTFS has built-in support for disk compression - something for which you previously had to use Stacker or DoubleSpace. Any file or directory can be individually stored on disk in compressed form, and the process is completely transparent to applications. Compression is very fast and has only one big negative property - the huge virtual fragmentation of compressed files, which, however, does not really bother anyone. Compression works in blocks of 16 clusters and uses so-called "virtual clusters" - again an extremely flexible solution that allows interesting effects: for example, half of a file can be compressed while the other half is not. This is possible because storing information about which fragments are compressed is very similar to ordinary file fragmentation. For example, here is a typical record of the physical layout of a real, uncompressed file:

file clusters from 1 to 43 are stored in disk clusters starting from 400, file clusters from 44 to 52 are stored in disk clusters starting from 8530...

Physical layout of a typical compressed file:

file clusters 1 to 9 are stored in disk clusters starting at 400; file clusters 10 to 16 are not stored anywhere; file clusters 17 to 18 are stored in disk clusters starting at 409; file clusters 19 to 36 are not stored anywhere...

It can be seen that the compressed file has "virtual" clusters containing no real information. As soon as the system sees such virtual clusters, it understands that the data of the preceding block (whose size is a multiple of 16 clusters) must be decompressed, and the resulting data will exactly fill the virtual clusters - that, in essence, is the whole algorithm.
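The run lists above can be modeled directly. A sketch that maps a file ("virtual") cluster to a disk cluster, with None standing in for clusters that hold no real data:

```python
# Run list matching the compressed-file example above: (start, count, lcn),
# where lcn=None marks "virtual" clusters that are not stored anywhere.
runs = [
    (1, 9, 400),    # file clusters 1-9   -> disk clusters 400-408
    (10, 7, None),  # file clusters 10-16 -> not stored (compressed away)
    (17, 2, 409),   # file clusters 17-18 -> disk clusters 409-410
    (19, 18, None), # file clusters 19-36 -> not stored
]

def locate(vcn):
    """Map a file cluster to its disk cluster, or None if it is virtual."""
    for start, count, lcn in runs:
        if start <= vcn < start + count:
            return None if lcn is None else lcn + (vcn - start)
    raise ValueError("cluster beyond end of file")

print(locate(5))   # 404  - real data on disk
print(locate(12))  # None - decompress the preceding 16-cluster block instead
```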

      Security

NTFS contains many means of delineating access rights to objects - it is considered the most advanced file system in this respect among all currently existing ones. In theory this is undoubtedly true, but in current implementations, unfortunately, the system of rights is quite far from ideal: it is a rigid, but not always logical, set of characteristics. The rights assigned to objects and strictly honored by the system have been evolving - major changes and additions were made several times, and by Windows 2000 they finally arrived at a fairly reasonable set.

The rights of the NTFS file system are inextricably linked with the system itself - that is, generally speaking, another system is not obliged to respect them if it is given physical access to the disk. To counter physical access, Windows 2000 (NT5) did introduce a standard feature - more on this below. The system of rights in its current state is quite complex, and I doubt I can tell the general reader anything both interesting and useful for everyday life. If the topic interests you, many books on the NT network architecture describe it in more detail.

At this point the description of the file system's structure can be considered complete; it remains to describe a number of simply practical or original features.

      Hard Links

This feature has been in NTFS since time immemorial but was used very rarely. A hard link is when the same file has two names (several file-directory pointers, or pointers from different directories, refer to the same MFT record). Say a file has the names 1.txt and 2.txt: if the user deletes file 1, file 2 remains; if he deletes 2, file 1 remains. From the moment of creation, both names are completely equal. The file is physically erased only when its last name is deleted.
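Hard-link semantics are easy to demonstrate: Python's os.link works on NTFS under Windows as well as on POSIX file systems. A small sketch:

```python
import os
import tempfile

# Two names, one file: deleting the first name leaves the data reachable
# through the second; the file dies only with its last name.
with tempfile.TemporaryDirectory() as d:
    one = os.path.join(d, "1.txt")
    two = os.path.join(d, "2.txt")
    with open(one, "w") as f:
        f.write("shared contents")
    os.link(one, two)      # give the same file a second name
    os.remove(one)         # delete the first name...
    with open(two) as f:   # ...the data survives under the second name
        data = f.read()
print(data)  # shared contents
```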

      Symbolic Links (NT5)

A much more practical feature, which allows you to create virtual directories - just like the virtual disks created by the subst command in DOS. The applications are quite varied: firstly, simplifying the directory system. If you do not like the Documents and settings\Administrator\Documents directory, you can link it into the root directory - the system will still address the directory by its unwieldy path, while you use a much shorter, fully equivalent name. To create such connections you can use the junction program (junction.zip, 15 KB), written by the well-known specialist Mark Russinovich (http://www.sysinternals.com). The program, like the feature itself, works only in NT5 (Windows 2000). To delete a connection you can use the standard rd command. WARNING: attempting to delete a link with Explorer or other file managers that do not understand the virtual nature of a directory (such as FAR) will delete the data the link points to! Be careful.

      Encryption (NT5)

A useful feature for people who are concerned about their secrets - each file or directory can also be encrypted, making it impossible for another NT installation to read it. Combined with a standard and virtually unbreakable password for booting the system itself, this feature provides sufficient security for most applications for the important data you select.

The operating system, as the basis of any computer's operation, organizes work with electronic data according to a certain algorithm, and the file system is an indispensable link in that chain. In this article we will try to explain what a file system is in general and what types of file systems are used today.

Description of general file system characteristics

The FS, as indicated above, is the part of the operating system directly responsible for placing, deleting and moving electronic information on a specific medium, as well as for its safe further use. This resource is also involved when information lost due to a software failure needs to be restored. In other words, it is the main tool for working with electronic files.

Types of file system

Each computer device uses a special type of file system. The following types are particularly common:

Designed for hard drives;
- designed for magnetic tapes;
- designed for optical media;
- virtual;
- network.

Naturally, the main logical unit of work with electronic data is the file - a named document containing systematized information of a certain kind; its name makes it easier for the user to handle a large flow of electronic documents.
Indeed, absolutely everything the operating system uses takes the form of files, whether text, images, sound, video or photos. Drivers and software libraries also exist as files.

Each information unit has a name, a specific extension, size, inherent characteristics, and type. The FS is the totality of such units together with the principles for working with them.

The specific features inherent in a given system determine how effectively it works with such data, and those features are the basis for classifying file systems into types.

A look at the file system from a programming perspective

When studying the concept of a file system, you should understand that it is a multi-level component. At the top level sits the file system converter, which ensures effective interaction between the system itself and a specific software application: it converts requests for electronic data into a format recognized by the drivers, so that files become accessible for work.

Modern client-server applications place very high demands on the FS. A modern system must provide the most efficient access to all available types of electronic units, support large-volume media, protect data from unwanted access by other users, and ensure the integrity of information stored in electronic form.

Below we will look at all the existing FSs and their advantages and disadvantages.

FAT
This is the oldest type of file system, developed back in 1977. It worked with 86-DOS, was not designed for hard storage media, and targeted floppy disks storing up to one megabyte of information. While that size limit is irrelevant today, the other characteristics remained in demand unchanged.

This file system was used by Microsoft, the leading software developer, for operating systems such as MS-DOS 1.0.
The files of this system have a number of characteristic properties:

The name of an information unit must begin with a letter or a number, and the rest of the name may contain various keyboard symbols;
- the file name must not exceed eight characters; it is followed by a dot and a three-letter extension;
- either letter case may be used in a file name.
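The naming rules above can be captured in a rough validator. The allowed-character set here is a simplification of the real FAT rules, for illustration only:

```python
import string

# Rough check of the classic FAT "8.3" naming rule described above:
# up to 8 characters, then a dot and an extension of up to 3 characters.
LETTERS_DIGITS = set(string.ascii_uppercase + string.digits)
ALLOWED = LETTERS_DIGITS | set("_-~")   # simplified allowed-character set

def is_valid_83(filename: str) -> bool:
    name, _, ext = filename.upper().partition(".")  # case-insensitive
    if not (1 <= len(name) <= 8) or len(ext) > 3 or "." in ext:
        return False
    if name[0] not in LETTERS_DIGITS:   # must start with a letter or number
        return False
    return all(c in ALLOWED for c in name + ext)

print(is_valid_83("README.TXT"))        # True
print(is_valid_83("longfilename.txt"))  # False - name part exceeds 8 characters
```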

From the very beginning of its development, the FAT file system was aimed at the DOS operating system; it did not store information about a file's user or owner.

Thanks to its various modifications, this FS became extremely popular, and even the most innovative operating systems can still operate with it.

This file system is able to keep files intact if the computer is switched off incorrectly - for example, because the battery ran down or the power went out.

Many operating systems that work with FAT include software utilities that check and repair the file system's directory tree and the files themselves.

NTFS
The modern NTFS file system works with the Windows NT operating system - indeed, it was designed for it. Windows includes the convert utility, which converts volumes from HPFS or FAT format to NTFS.

It is more advanced than the first option described above. This version expanded the capabilities for controlling access to individual information units, added many useful attributes, dynamic file compression, and fault tolerance. One of its advantages is support for the requirements of the POSIX standard.

This file system allows you to create information files with names up to 255 characters long.

If the operating system working with this file system fails, there is no need to worry about the safety of your files: they remain intact and unharmed, since this type of file system has a self-healing property.

A feature of the NTFS file system is its structure, organized as a table. The first sixteen entries of this table are the contents of the file system itself: information about the table itself, a mirror copy of the MFT, a log file used when information must be restored, and so on. The subsequent entries describe the actual files and their data stored on the hard drive.

All commands executed on files tend to be recorded, which helps the system subsequently recover on its own after a failure of the operating system it works with.

EFS
A very well-known file system is EFS, the encrypting file system. It works with the Windows operating system and causes files to be stored on the hard drive in encrypted form - one of the most effective protections for files.
Encryption is enabled in the file's properties by ticking the checkbox on the tab that offers encryption. There you can also specify who may view the files, that is, who is allowed to work with them.

RAW
Files are the most vulnerable units: they are the information stored on a computer's disks, and they can be damaged, deleted or hidden. The user's work, in essence, consists of creating, saving and moving them.
The operating system does not always behave ideally and is prone to failures, for many reasons. But that is not the point here.

Many users encounter a notification that the file system is RAW. Is RAW really a file system or not? Many people ask this question, and it turns out that it is not entirely one. At the level of the operating system, RAW is essentially an error - a logical marker built into Windows to protect it from failure. If the equipment reports RAW, keep in mind that the structure of the file system is at risk: it is working incorrectly or is in danger of gradual destruction.

If such a problem occurs, you will not be able to access a single file on the disk, and it will refuse to execute other commands as well.

UDF
This is a file system for optical discs, which has its own characteristics:

File names must not exceed 255 characters;
- names may use either lower or upper case.

It works with the Windows XP operating system.

exFAT
Another modern file system is exFAT, which acts as a kind of intermediary between Windows and Linux, ensuring that files transfer effectively from one system to the other despite their different native file systems. It is used on portable storage devices such as flash drives.

The file system is what the Windows operating system uses to organize and store data on any disk; it is responsible for keeping data on the hard drive. Let's look at what a file system is and what types of such systems exist.

Why do we need a file system?

You can find out what file system your computer uses by opening the "My Computer" folder, right-clicking a disk and selecting "Properties". In the information window that appears you will see the line: File system: (name).

It is not at all necessary that every disk has the same file system; to find out, you need to check each disk.

The correct choice of file system affects the security of your personal computer and whether the operating system will crash and lose data. Let's look at the file systems found in Windows.

Types of file systems

FAT

The first one we'll look at is the file system called FAT. Today it is extremely rare, so it is not worth dwelling on in detail. Its biggest drawback is the maximum volume size of only 2 GB, something practically never found in modern hardware: a disk with a larger capacity simply cannot use it. A few years ago, 2 GB was a standard hard drive capacity, and this file system served perfectly well. But today it has outlived its usefulness and taken an honorable place in the dustbin of history.
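The 2 GB ceiling is commonly explained by FAT16's 16-bit cluster numbers combined with a maximum cluster size of 32 KB; the arithmetic, assuming those two limits:

```python
# Where the 2 GB ceiling comes from, assuming 16-bit cluster numbers
# and a maximum 32 KB cluster (the commonly cited FAT16 limits).
max_clusters = 2 ** 16          # 65 536 addressable clusters
max_cluster_size = 32 * 1024    # 32 KB per cluster
max_volume = max_clusters * max_cluster_size
print(max_volume // 2 ** 30, "GB")  # 2 GB
```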

The next file system is the famous FAT 32, where 32 refers to the bit width of its table entries. This version is an update of the previous file system. With an early version of Windows you may have some problems formatting a drive, but this system is much more stable than its predecessor, and working with files proceeds much faster.

NTFS

Now let's look at what the NTFS file system is. This file storage system appeared comparatively recently and is more modern than the previous two. Despite its huge number of advantages, it is not without disadvantages. Most disks produced today ship with this file system. It stores data much more reliably but is quite demanding on your computer's resources.

In addition, when a logical disk is filled to about 90 percent, the performance of the file system drops sharply. Also, if the operating system is older than Windows XP, such a file system will simply refuse to work on it: once you connect the disk, the computer will not be able to recognize it and will mark it as an unknown partition.

Speaking of advantages, this file system works with small files noticeably faster and better. The largest size a disk can have is stated as 18 TB. File fragmentation does not slow it down: the system continues to operate as usual. When using NTFS you can also be reasonably sure a file will not be damaged. The system uses disk space very economically and lets you compress files to a minimum size without damaging them at all. By the way, it was thanks to this system that restoring data after loss became possible. Accordingly, compared with FAT, all the advantages are obvious; the most important thing it offers is safety.

UDF

Now it's time to look at what the UDF file system is. It is a file system independent of the computer's operating system and used for data stored on optical media. Unlike the previous systems, UDF allows additional information to be written to an already full disc, and it can selectively erase certain files without damaging the rest of the information. Metadata such as the root directory can be located anywhere on the disc, but its anchor points are found in three places: sectors 256, 257 and N-1, where N is the size of the track in sectors.

For DVD discs, UDF is the most successful file system because it has absolutely no restrictions on file sizes. You can record both large and small videos.

With UDF we have now seen what the last of these file systems is and how to choose the right one for your computer.

Good day, dear user. In this article we will talk about files - namely, file management, file types, file structure, and file attributes.

File system

One of the main tasks of the OS is to make it convenient for the user to work with data stored on disks. To do this, the OS replaces the physical structure of the stored data with a user-friendly logical model, implemented as a directory tree displayed on screen by utilities such as Norton Commander, Far Manager or Windows Explorer. The basic element of this model is the file, which, like the file system as a whole, can be characterized by both a logical and a physical structure.

File management

File – a named area of external memory intended for reading and writing data.

Files are stored in non-volatile memory. An exception is the RAM disk, where a structure imitating a file system is created in main memory.

File system (FS) – an OS component that organizes the creation of, storage of, and access to named sets of data - files.

The file system includes:

  • The collection of all files on the disk.
  • Sets of data structures used to manage files (file directories, file descriptors, free and used disk space allocation tables).
  • A set of system software tools that implement various operations on files: creation, destruction, reading, writing, naming, searching.

The problems solved by the FS depend on the way the computing process is organized as a whole. The simplest type is a file system in single-user and single-program operating systems. The main functions in such a FS are aimed at solving the following tasks:

  • Naming files.
  • Software interface for applications.
  • Mapping the logical file system model onto the physical organization of the data storage.
  • FS resistance to power failures, hardware and software errors.

FS tasks become more complex in single-user multitasking operating systems, which are designed for the work of one user, but make it possible to run several processes simultaneously. To the tasks listed above, a new task is added - shared access to a file from several processes.

The file in this case is a shared resource, which means the FS must solve the whole range of problems associated with such resources. In particular, there must be means for locking a file and its parts, reconciling copies, preventing races, and eliminating deadlocks. In multi-user systems yet another task appears: protecting the files of one user from unauthorized access by another user.
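File locking, the first of the means listed, can be sketched with POSIX advisory locks (the fcntl module; on Windows the msvcrt module plays a similar role). This is a minimal sketch of the idea, not a production locking scheme:

```python
import fcntl     # POSIX advisory locks
import os
import tempfile

# Before writing, a process takes an exclusive lock on the shared file,
# so concurrent updates cannot interleave.
def append_line(path: str, line: str) -> None:
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # block until we own the file
        try:
            f.write(line + "\n")
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

with tempfile.TemporaryDirectory() as d:
    log = os.path.join(d, "shared.log")
    append_line(log, "first")
    append_line(log, "second")
    with open(log) as f:
        lines = f.read().splitlines()
print(lines)  # ['first', 'second']
```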

The functions of a FS operating as part of a network OS become more complex still.

The main purpose of the file system, and of the file management system corresponding to it, is to organize convenient access to data organized as files: instead of low-level access specifying the physical address of the record we need, logical access is used, specifying a file name and a record within it.

The terms "file system" and "file management system" must be distinguished: the file system defines, above all, the principles of access to data organized as files, while "file management system" refers to a specific implementation of the file system, i.e. the set of software modules that provide work with files in a specific OS.

Example

The FAT (File Allocation Table) file system has many implementations as a file management system:

  • The system developed for the first PCs was called simply FAT (now known as FAT-12). It was designed for work with floppy disks, and for some time it was also used with hard drives.
  • It was then improved for larger hard drives; this new implementation was called FAT-16. The same name is also used for the file management system of MS-DOS itself.
  • The implementation of the file management system for OS/2 is called super-FAT (its main difference is support for extended attributes on each file).
  • There are also versions for Windows 9x/NT, etc. (FAT-32).

File types

Regular files: contain information of an arbitrary nature that is entered into them by the user or that is generated as a result of the operation of system and user programs. The contents of a regular file are determined by the application that works with it.

Regular files can be of two types:

  1. Program (executable) files: programs, including those written in the OS command language, that perform certain system functions (extensions .exe, .com, .bat).
  2. Data files: all other file types: text and graphic documents, spreadsheets, databases, etc.

Directories are, on the one hand, groups of files combined by the user based on certain considerations (for example, files containing game programs, or files that make up one software package); on the other hand, a directory is a special type of file that contains system reference information about a set of files grouped according to some informal criterion (file type, location on disk, access rights, date of creation and modification).

Special files are dummy files associated with input/output devices; they are used to unify the mechanism for accessing files and external devices. Special files allow the user to perform I/O operations using ordinary file read and write commands. These commands are processed first by FS programs and then, at some stage of request execution, converted by the OS into control commands for the corresponding device (PRN and LPT1 for the printer port, CON for the keyboard; to the OS these symbolic names are files).

Example: the command copy con text1 creates the file text1 from keyboard input (the keyboard device CON is accessed as a file).
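The same idea can be illustrated in Python: the portable name os.devnull refers to a device file ("/dev/null" on UNIX, "nul" on Windows) that accepts ordinary file writes, just as CON or PRN are treated as files above. This is a minimal sketch, not tied to any particular OS.

```python
import os

# A special (device) file is accessed with ordinary file I/O.
# os.devnull names the "discard everything" device on the host OS.
with open(os.devnull, "w") as null_device:
    written = null_device.write("this text is discarded by the device\n")

# The write succeeds like a regular file write, but nothing is stored.
print("characters handed to the device:", written)
```

The FS driver routes the write to the device instead of allocating disk blocks.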

File structure

The file structure is the entire set of files on the disk and the relationships between them (the order in which files are stored on the disk).

Types of file structures:

  • simple, or single-level: the directory is a linear sequence of files;
  • hierarchical, or multi-level: a directory can itself be part of another directory and contain many files and subdirectories. A hierarchical structure can be of two types: “tree” and “network”. Directories form a tree if a file is allowed to belong to only one directory (MS-DOS, Windows), and a network if a file can belong to several directories at once (UNIX).

The file structure can be represented as a graph describing the hierarchy of directories and files.
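As a rough illustration, the directory-and-file graph can be printed as an indented tree with a short Python sketch (the function name and indentation style are of course arbitrary):

```python
import os

def print_tree(root: str, indent: int = 0) -> None:
    # Each directory is a node of the graph; its files and
    # subdirectories are its children in the hierarchy.
    print("  " * indent + (os.path.basename(root) or root))
    if os.path.isdir(root):
        for entry in sorted(os.listdir(root)):
            print_tree(os.path.join(root, entry), indent + 1)
```

Calling print_tree on a directory prints each nested level with an extra step of indentation, which corresponds to one level of the hierarchy.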



File name types

Files are identified by names. Users give files symbolic names, subject to OS restrictions on both the characters used and the length of the name. In early file systems these limits were quite narrow: in the popular FAT file system, name length is limited by the well-known 8.3 scheme (8 characters for the name itself, 3 for the extension), and in UNIX System V a name cannot exceed 14 characters.

However, it is much more convenient for the user to work with long names, since they allow you to give the file a truly mnemonic name, by which, even after a fairly long period of time, you can remember what this file contains. Therefore, modern file systems tend to support long symbolic file names.

For example, Windows NT specifies in its NTFS file system that a file name can be up to 255 characters long, not counting the terminating null character.

Moving to long names creates a compatibility issue with previously created applications that use short names. In order for applications to access files according to previously accepted conventions, the file system must be able to provide equivalent short names (aliases) to files that have long names. Thus, one of the important tasks becomes the problem of generating appropriate short names.
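A minimal sketch of such short-name generation in Python, loosely modeled on the familiar NAME~1 pattern; the real VFAT algorithm differs in details (hashing of tails, stored case, reserved characters), so this is an illustration of the idea only:

```python
def make_short_alias(long_name, existing):
    """Produce an 8.3-style alias for a long file name.

    `existing` is the set of aliases already taken in the directory;
    a numeric tail ~1, ~2, ... keeps the result unique.
    """
    base, dot, ext = long_name.rpartition(".")
    if not dot:                       # no extension at all
        base, ext = long_name, ""
    # Keep only characters legal in short names, fold to upper case.
    base = "".join(c for c in base.upper() if c.isalnum())[:6]
    ext = "".join(c for c in ext.upper() if c.isalnum())[:3]
    n = 1
    while True:                       # try ~1, ~2, ... until unique
        candidate = base + "~" + str(n) + ("." + ext if ext else "")
        if candidate not in existing:
            return candidate
        n += 1
```

For example, "Annual Report 2008.docx" becomes "ANNUAL~1.DOC", and a second file starting with the same six characters gets "ANNUAL~2.DOC".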

Symbolic names can be of three types: simple, full (composite) and relative:

  1. A simple name identifies a file within a single directory; it is assigned taking into account the permitted characters and the name-length limit.
  2. A full name is a chain of the simple symbolic names of all directories along the path from the root to the given file, together with the disk name and the file name. The full name is thus composite, with the simple names separated from each other by the separator accepted in the OS.
  3. A file can also be identified by a relative name, which is defined through the concept of the “current directory”. At any given time one of the directories is current; it is selected by the user with an OS command. The file system stores the name of the current directory so that it can combine it with a relative name to form the fully qualified file name.

In a tree-like file structure there is a one-to-one correspondence between a file and its full name: “one file - one full name”. In a network file structure a file can be included in several directories and can therefore have several full names; the correspondence here is “one file - many full names”.

Example: for the file 2.doc, determine all three name types, given that the current directory is 2008_year.

  • Simple name: 2.doc
  • Full name: C:\2008_year\Documents\2.doc
  • Relative name: Documents\2.doc
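The composition of the full name from the current directory and the relative name can be reproduced with Python's ntpath module, which applies Windows path rules on any host OS; the drive and directory names below are the ones from the example above.

```python
import ntpath  # Windows path semantics, regardless of the host OS

current_dir = "C:\\2008_year"      # the current directory from the example
relative = "Documents\\2.doc"      # relative name

full = ntpath.join(current_dir, relative)
print(full)                        # C:\2008_year\Documents\2.doc

simple = ntpath.basename(full)     # the simple name within its directory
print(simple)                      # 2.doc
```

This mirrors what the file system does internally: the stored current-directory name is prepended to the relative name to obtain the fully qualified name.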

File attributes

An important characteristic of a file is its set of attributes. Attributes are information describing the properties of a file. Examples of possible file attributes:

  • Read-Only attribute;
  • Sign “hidden file” (Hidden);
  • Sign “system file” (System);
  • Sign “archive file” (Archive);
  • File type (regular file, directory, special file);
  • Owner of the file;
  • File Creator;
  • Password to access the file;
  • Information about permitted file access operations;
  • Time of creation, last access and last change;
  • Current file size;
  • Maximum file size;
  • Sign “temporary (remove after process completion)”;
  • Blocking sign.

Different file systems may use different sets of attributes to characterize files. For example, in a single-user OS the attribute set will not contain characteristics related to users and security, such as the file's creator or an access password.

The user can access attributes using the facilities provided for this purpose by the file system. Typically the values of all attributes can be read, but only some can be changed: for example, you can change a file's access rights, but not its creation date or current size.
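In Python, reading attributes and changing the changeable ones can be sketched with os.stat and os.chmod. This is a portable illustration; the exact attribute set exposed depends on the file system.

```python
import os
import stat
import tempfile
import time

# Create a file, then read the attributes the file system keeps for it.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

info = os.stat(path)
print("size:", info.st_size)                      # current file size
print("last change:", time.ctime(info.st_mtime))  # time of last modification
print("directory?", stat.S_ISDIR(info.st_mode))  # file-type attribute

# Access rights are an attribute the user may change:
os.chmod(path, stat.S_IREAD)                      # make the file read-only
# ...whereas the size and timestamps are maintained by the FS itself
# and cannot simply be assigned arbitrary values.

os.chmod(path, stat.S_IREAD | stat.S_IWRITE)      # restore write access
os.remove(path)
```

Here os.stat returns the attribute record the FS keeps for the file, while os.chmod changes only the access-rights part of it.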

File permissions

Defining access rights to a file means defining for each user a set of operations that he can apply to a given file. Different file systems can have their own list of differentiated access operations. This list may include the following operations:

  • file creation.
  • file destruction.
  • writing to a file.
  • opening a file.
  • closing the file.
  • reading from file.
  • appending to a file.
  • search in the file.
  • getting file attributes.
  • setting new attribute values.
  • renaming.
  • file execution.
  • reading a catalog, etc.

In the most general case, access rights can be described by an access-rights matrix, in which the columns correspond to all files in the system, the rows to all users, and the permitted operations are indicated at the intersections of rows and columns.
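Such a matrix can be sketched as a simple data structure; the user names, file names and operations below are purely illustrative, not taken from any specific OS.

```python
# Toy access-rights matrix: rows are users, columns are files, and each
# cell holds the set of operations permitted for that user on that file.
access_matrix = {
    "user1": {"report.doc": {"read", "write"}, "notes.txt": {"read"}},
    "user2": {"report.doc": {"read"}},
}

def is_allowed(user, file_name, operation):
    # An empty cell (or a missing row) means no access at all.
    return operation in access_matrix.get(user, {}).get(file_name, set())

print(is_allowed("user1", "report.doc", "write"))  # True
print(is_allowed("user2", "report.doc", "write"))  # False
```

A real OS stores this information more compactly (e.g., per-file access-control lists or the UNIX owner/group/other bits), since a full matrix would be mostly empty.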

In some systems, users may be divided into categories, with uniform access rights defined for all users of the same category. In UNIX, for example, all users are divided into three categories with respect to each file: the owner of the file, members of the owner's group, and everyone else.