Combined use of symmetric and asymmetric ciphers. What is HTTP

This part addresses the following issues:

  • Types of ciphers
  • Substitution ciphers
  • Permutation ciphers
  • Encryption methods
  • Symmetric and asymmetric algorithms
  • Symmetric cryptography
  • Asymmetric cryptography
  • Block and stream ciphers
  • Initialization vectors
  • Hybrid encryption methods
Symmetric ciphers are divided into two main types: substitution and permutation (transposition). Substitution ciphers replace bits, characters, or blocks with other bits, characters, or blocks. Permutation ciphers do not replace anything in the source text; instead they rearrange its bits, characters, or blocks of characters to hide the original meaning.

Substitution ciphers use a key that specifies how the substitution should be performed. In the Caesar cipher, each character was replaced by the character three positions further along in the alphabet. The algorithm was the alphabet, and the key was the instruction “shift by three characters.”

Modern symmetric algorithms also use substitution, but in ways far more elaborate than a method as simple as the Caesar cipher. Still, the Caesar cipher is a simple and clear illustration of how a substitution cipher works.

In a permutation cipher, the values are scrambled, that is, put in a different order. The key specifies the position to which each value should be moved, as shown in Figure 6-6.

Figure 6-6. Permutation cipher


This is the simplest example of a permutation cipher; it only shows how the permutation is performed. When complex mathematical functions are involved, the permutation can become quite difficult to crack. Modern symmetric algorithms apply long sequences of complex substitutions and permutations to the symbols of the message being encrypted. The algorithm contains the possible substitution and permutation operations (expressed as mathematical formulas). The key is the set of instructions that tells the algorithm exactly which operations to perform and in what sequence. To understand the relationship between the algorithm and the key, look at Figure 6-7. Figuratively speaking, the algorithm provides various boxes, each with its own set of mathematical formulas specifying the substitution and permutation steps to be performed on the bits that enter that box. To encrypt a message, the value of each bit must pass through different boxes. However, if every message went through the same set of boxes in the same sequence, an attacker could easily reverse engineer the process, crack the cipher, and obtain the plaintext of our message.

Figure 6-7. Relationship between key and algorithm


To thwart an attacker, a key is used: a set of values that indicates which boxes should be used, in what sequence, and with what values. If message A is encrypted with key 1, the key requires the message to go through boxes 1, 6, 4, and 5. When we need to encrypt message B, we use key 2, which requires the message to go through boxes 8, 3, 2, and 9. The key adds randomness and secrecy to the encryption process.
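As a rough illustration of the “boxes” idea (a toy sketch, not any real algorithm), the Python snippet below uses made-up 4-bit S-boxes and a simple bit rotation as the permutation step; the key is nothing more than the order in which the boxes are applied.

```python
# Toy substitution-permutation "boxes" (illustrative only, not secure).
S_BOXES = [
    [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8, 0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7],
    [0x4, 0xB, 0x2, 0xE, 0xF, 0x0, 0x8, 0xD, 0x3, 0xC, 0x9, 0x7, 0x5, 0xA, 0x6, 0x1],
    [0xC, 0x1, 0xA, 0xF, 0x9, 0x2, 0x6, 0x8, 0x0, 0xD, 0x3, 0x4, 0xE, 0x7, 0x5, 0xB],
    [0x2, 0xC, 0x4, 0x1, 0x7, 0xA, 0xB, 0x6, 0x8, 0x5, 0x3, 0xF, 0xD, 0x0, 0xE, 0x9],
]

def apply_box(block16, s_box):
    """One 'box': substitute each 4-bit nibble, then permute by rotating the 16-bit block."""
    out = 0
    for shift in (12, 8, 4, 0):                    # substitution step
        out = (out << 4) | s_box[(block16 >> shift) & 0xF]
    return ((out << 3) | (out >> 13)) & 0xFFFF     # permutation step: rotate left by 3 bits

# The key is the instruction saying WHICH boxes to use and in WHAT order.
key = [1, 3, 0, 2]
block = 0b1010001101011100
for box_index in key:
    block = apply_box(block, S_BOXES[box_index])
print(f"ciphertext block: {block:016b}")
```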

Simple substitution and permutation ciphers are vulnerable to frequency analysis. In every language, some letters and patterns occur more often than others. In English text, for example, the letter “e” is usually the most frequent. When performing frequency analysis on a message, the attacker looks for the most frequently repeated 8-bit patterns (each of which encodes one character). If he finds, say, twelve occurrences of one 8-bit pattern in a short message, he can conclude that it most likely represents the letter “e,” the most commonly used letter in the language. He can then replace those bits with the letter “e.” This gives him a foothold in a process that lets him reverse engineer and recover the original message.
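The sketch below is a deliberately weak, hypothetical example of this attack: a message “encrypted” with a single-byte XOR (a simple substitution) is broken by assuming the most frequent ciphertext byte corresponds to the space character, the most common byte in English text.

```python
from collections import Counter

# Toy demo: a message "encrypted" with a single-byte XOR key (a simple substitution).
secret_key = 0x5A
plaintext = b"the quick brown fox jumps over the lazy dog and the eager reader"
ciphertext = bytes(b ^ secret_key for b in plaintext)

# Attacker side: the most frequent byte in English text is usually the space (0x20),
# so XOR-ing the most common ciphertext byte with 0x20 suggests the key.
most_common_byte, _ = Counter(ciphertext).most_common(1)[0]
guessed_key = most_common_byte ^ 0x20
print(hex(guessed_key), bytes(b ^ guessed_key for b in ciphertext))
```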

Modern symmetric algorithms use substitution and permutation techniques in the encryption process, but they use (should use) mathematics that is too complex to allow such a simple frequency analysis attack to be successful.

Key derivation functions. To generate complex keys, a master key is usually created first, and symmetric keys are then derived from it. For example, if an application is responsible for generating a session key for each entity that accesses it, it should not simply hand out copies of the same key. Different entities need different symmetric keys for each connection, to minimize how long any one key is used. Even if an attacker intercepts the traffic and cracks a key, he will only be able to view the information transmitted within the corresponding session; a new session will use a different key. If two or more keys are generated from a master key, they are called subkeys.

Key derivation functions (KDFs) are used to generate keys made up of random values. Different values can be used independently or together as random key material. Algorithms take specific hashes, passwords, and/or salts and pass them through the mathematical functions specified by the algorithm many times. The more times this key material passes through these functions, the greater the level of assurance and security the cryptosystem as a whole can provide.
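As one concrete, widely used key derivation function, the Python standard library's PBKDF2 implementation illustrates the idea of passing a password and salt through a hash-based function many times; the parameter values below are illustrative choices, not a recommendation made by this text.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)            # random salt, stored alongside the derived key
iterations = 600_000             # more iterations -> more work for an attacker

# PBKDF2 runs the password and salt through HMAC-SHA-256 many times
# to derive 32 bytes of key material for a symmetric cipher.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
print(key.hex())
```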


NOTE. Remember that the algorithm remains static. The randomness of cryptography processes is ensured mainly by the key material.


Although there are many parts to the encryption process, there are two main parts: algorithms and keys. As stated earlier, the algorithms used in computer systems are complex mathematical formulas that dictate the rules for converting plaintext into ciphertext. The key is a string of random bits that is used by the algorithm to add randomness to the encryption process. For two entities to communicate using encryption, they must use the same algorithm and, in some cases, the same key. In some encryption technologies, the recipient and sender use the same key, while in other technologies they must use different but related keys to encrypt and decrypt information. The following sections explain the differences between these two types of encryption methods.

Cryptographic algorithms are divided into symmetric algorithms, which use symmetric keys (also called secret keys), and asymmetric algorithms, which use asymmetric keys (also called public and private keys).

In a cryptosystem that uses symmetric cryptography, the sender and recipient use two copies of the same key to encrypt and decrypt information, as shown in Figure 6-8. The key thus has dual functionality, being used in both the encryption and decryption processes. Symmetric keys are also called secret keys, because this type of encryption requires each user to keep the key secret and protect it appropriately. If an attacker obtains this key, he can use it to decrypt any intercepted message that was encrypted with it.

Figure 6-8. When using a symmetric algorithm, the sender and recipient use the same key to encrypt and decrypt data


Each pair of users requires two copies of the same key to exchange data securely using symmetric cryptography. For example, if Dan and Irina need to exchange data, they both need a copy of the same key. If Dan also wants to communicate with Norm and Dave using symmetric cryptography, he needs three separate keys, one for each person. This isn't a big problem until Dan needs to communicate with hundreds of other people over several months and keep a history of his conversations, using the appropriate key for each specific recipient. Then it becomes a daunting task. If ten people needed to communicate securely with each other using symmetric cryptography, they would need 45 keys. If a hundred people need to communicate, they will need 4950 keys. The formula for calculating the required number of symmetric keys is as follows:

Number of keys = N(N – 1)/2, where N is the number of participants
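A quick way to check the numbers quoted above is to code the formula directly:

```python
def symmetric_keys_needed(n: int) -> int:
    """Number of pairwise symmetric keys for n participants: N(N - 1)/2."""
    return n * (n - 1) // 2

print(symmetric_keys_needed(10))    # 45
print(symmetric_keys_needed(100))   # 4950
```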


When using symmetric algorithms, the sender and recipient use the same key for encrypting and decrypting information. The security of such algorithms depends entirely on how well users protect the keys; that is, security rests entirely on the people who must keep their keys secret. If a key is compromised, all messages encrypted with that key can be decrypted and read by an attacker. Matters are further complicated by the fact that keys need to be securely distributed and updated when necessary. If Dan needs to communicate with Norm for the first time, Dan must decide how to give Norm the key safely. If he does this in an insecure way, for example by simply sending the key via email, the key can easily be intercepted and used by an attacker. Therefore, Dan must give the key to Norm in some other way: he could write the key onto a flash drive and place it on Norm's desk, or send it to Norm via a trusted courier. Distributing symmetric keys can be a very complex and cumbersome task.

Because both users use the same key to encrypt and decrypt messages, symmetric cryptosystems can provide confidentiality but not authentication or non-repudiation. Such a cryptographic algorithm will not allow you to prove who really sent the message, because both users use the same key.

But if symmetric cryptosystems have so many shortcomings and problems, why are they used almost everywhere? Because they provide very high processing speed and are very difficult to hack. Symmetric algorithms are much faster than asymmetric ones. They can encrypt and decrypt large amounts of data relatively quickly. In addition, data encrypted with a symmetric algorithm using a long key is very difficult to crack.

The following list describes the strengths and weaknesses of symmetric key cryptosystems:

Strengths:

  • Much faster than asymmetric systems
  • Difficult to hack when using a long key
Weaknesses:
  • Requires a secure key transfer mechanism
  • Each pair of users needs a unique key; as the number of users increases, the growing number of keys can make managing them practically impossible
  • Provides confidentiality but does not provide authentication or non-repudiation
Below are some examples of symmetric algorithms, which will be discussed in detail later in the Block and Stream Ciphers section.
  • RC4, RC5 and RC6
Related links:
  • Security in Open Systems, Node 208, “Symmetric Key Cryptography,” by Paul Markovitz, NIST Special Publication 800-7 (July 1994)
In symmetric key cryptography, the same secret key is used for encryption and decryption, whereas public key systems use different (asymmetric) keys for these purposes. The two asymmetric keys are mathematically related: if a message is encrypted with one key, the other key is required to decrypt it.

In public key systems, a pair of keys is created: one private and one public. The public key can be known to everyone, while the private key should be known only to its owner. Often, public keys are stored in directories and databases of email addresses that are publicly available to anyone who wants to use those keys to encrypt or decrypt data when communicating with particular people. Figure 6-9 illustrates the use of differing asymmetric keys.
The public and private keys of an asymmetric cryptosystem are mathematically related, but knowing someone's public key does not make it feasible to derive the corresponding private key. Thus, if an attacker obtains a copy of Bob's public key, it does not mean he can perform some mathematical magic and obtain Bob's private key. But if someone does get hold of Bob's private key, there is a big problem. Therefore, no one other than the owner should have access to the private key.

Figure 6-9. Asymmetric cryptosystem


If Bob encrypted the data with his private key, the recipient would need Bob's public key to decrypt it. The recipient can not only decrypt Bob's message, but also respond to Bob with an encrypted message. To do this, he needs to encrypt his answer with Bob's public key, then Bob can decrypt this answer with his private key. When using an asymmetric algorithm, it is impossible to encrypt and decrypt a message with the same key; these keys, although mathematically related, are not the same (unlike symmetric algorithms). Bob can encrypt the data with his private key, then the recipient can decrypt it with Bob's public key. By decrypting a message using Bob's public key, the recipient can be sure that the message really came from Bob, because the message can only be decrypted using Bob's public key if it was encrypted using Bob's corresponding private key. This provides authentication capabilities because Bob is (presumably) the only one who has this private key. If the recipient wants to be sure that the only one who can read his response will be Bob, he must encrypt his message to Bob with Bob's public key. Then only Bob will be able to decrypt this message, since only he has the necessary private key to do so.

Additionally, the recipient may choose to encrypt his reply with his own private key rather than with Bob's public key. What does this give him? Authentication. Bob will know that the message came from that recipient and could not have come from anyone else. If the recipient had encrypted the reply with Bob's public key, it would not provide authentication, because anyone can obtain Bob's public key. If he uses his own private key to encrypt the data, Bob can be sure the message came from him. Symmetric keys do not provide authentication, because both parties use the same key, which cannot guarantee that a message came from a specific person.

If the sender is more concerned about the confidentiality of the information being transmitted, he should encrypt the message with the recipient's public key. This is called secure message format, since only the person who holds the corresponding private key will be able to decrypt the message.

If authentication is more important to the sender, he should encrypt the transmitted data with his private key. This will allow the recipient to be sure that the person who encrypted the data is the one who has the corresponding private key. If the sender encrypts the data with the recipient's public key, this does not provide authentication because the public key is available to everyone.

Encrypting data with the sender's private key is called open message format, because anyone can decrypt the data using the sender's public key. Confidentiality is not ensured.

Both private and public keys can be used to encrypt and decrypt data. Do not assume that the public key is only for encryption and the private key is only for decryption. Understand, however, that data encrypted with a private key cannot be decrypted with that same private key; it can be decrypted only with the corresponding public key, and vice versa.
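A minimal sketch of these two uses of a key pair, written with the pyca/cryptography package (an assumed dependency; the text does not prescribe any particular library), is shown below. Note that modern libraries express “encrypting with the private key” for authentication as a signature operation rather than a literal encryption call.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Bob generates a key pair; the public half can be shared with anyone.
bob_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
bob_public = bob_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Confidentiality: encrypt to Bob with his PUBLIC key; only his private key can decrypt.
ciphertext = bob_public.encrypt(b"for Bob's eyes only", oaep)
plaintext = bob_private.decrypt(ciphertext, oaep)

# Authentication: Bob signs with his PRIVATE key; anyone can verify with his public key.
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = bob_private.sign(b"message from Bob", pss, hashes.SHA256())
bob_public.verify(signature, b"message from Bob", pss, hashes.SHA256())  # raises if invalid
```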

The asymmetric algorithm is slower than the symmetric algorithm because symmetric algorithms perform relatively simple mathematical functions on the bits in the encryption and decryption processes. They replace and shuffle (move) bits around, which is not very complicated and doesn't use much CPU. The reason they are resistant to hacking is that they perform these functions many times. Thus, in symmetric algorithms, a set of bits goes through a longer series of substitutions and permutations.

Asymmetric algorithms are slower than symmetric algorithms because they use much more complex mathematics to perform their functions, which requires more CPU time. However, asymmetric algorithms can provide authentication and non-repudiation depending on the algorithm used. Additionally, asymmetric systems allow for a simpler and more manageable key distribution process than symmetric systems and do not have the scalability issues that symmetric systems have. The reason for these differences is that with asymmetric systems, you can send your public key to all the people you want to interact with, rather than using a separate private key for each of them. Next, in the Hybrid Encryption Methods section of this Domain, we'll look at how these two systems can be used together to achieve the best results.

NOTE. Public key cryptography is asymmetric cryptography. These terms are used interchangeably.

The following are the strengths and weaknesses of asymmetric key algorithms:

Strengths

  • Better key distribution process than symmetric systems
  • Better scalability than symmetric systems
  • Can provide authentication and non-repudiation
Weaknesses
  • Work much slower than symmetric systems
  • Require computationally intensive mathematical transformations
Below are examples of algorithms with asymmetric keys.
  • Elliptic curve cryptosystem (ECC)
  • Diffie-Hellman algorithm
  • El Gamal
  • Digital Signature Algorithm (DSA)
  • Knapsack
We will consider these algorithms further in this Domain, in the section “Types of asymmetric systems”.

Table 6-1 provides a brief summary of the main differences between symmetric and asymmetric systems.

Table 6-1. Differences between symmetrical and asymmetrical systems


NOTE. Digital signatures will be discussed later in the Digital Signatures section.
Related links:
  • Security in Open Systems, Node 210, “Asymmetric Key Cryptography,” by Paul Markovitz, NIST Special Publication 800-7 (July 1994)
  • Frequently Asked Questions About Today’s Cryptography, Version 4.1, Section 2.1.4.5, “What Is Output Feedback Mode?” by RSA Laboratories
There are two main types of symmetric algorithms: block ciphers, which operate on blocks of bits, and stream ciphers, which operate on one bit at a time.

If a block cipher is used to encrypt and decrypt data, the message is divided into blocks of bits. These blocks are then run through mathematical functions one block at a time. Imagine you need to encrypt a message to your mom using a block cipher that works on 64-bit blocks. Your message is 640 bits long, so it is divided into 10 separate 64-bit blocks. Each block is fed in turn into the mathematical function. This process continues until every block has been converted into ciphertext. Then you send the encrypted message to your mom. She uses the same block cipher and the same key: the 10 ciphertext blocks are fed through the algorithm's decryption process until the original plaintext is recovered.
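A hedged sketch of block-by-block processing follows, using AES (which works on 128-bit rather than 64-bit blocks) and the pyca/cryptography package, both illustrative assumptions. ECB mode is chosen only because it processes each block independently, matching the description above; it should not be used for real data precisely because identical plaintext blocks produce identical ciphertext blocks.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(16)
message = b"Hi Mom, this is a longer note that will not fit into a single cipher block."

# Pad the message to a whole number of 128-bit blocks, then encrypt block by block.
padder = padding.PKCS7(128).padder()
padded = padder.update(message) + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()
blocks = [ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)]
print(len(blocks), "ciphertext blocks")

# The recipient uses the same cipher and key to reverse the process.
decryptor = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
assert recovered == message
```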


To be strong, a cipher must make sufficient use of two basic techniques: confusion and diffusion. Confusion is usually achieved through substitution, whereas diffusion is achieved through transposition (permutation). For a cipher to be truly strong, it must use both techniques, making reverse engineering practically impossible. The degree of confusion and diffusion is determined by the randomness of the key value and the complexity of the mathematical functions used.

In algorithms, diffusion can occur both at the level of individual bits within blocks and at the level of the blocks themselves. Confusion is produced by complex substitution functions, so that an attacker cannot figure out how the original values were replaced and recover the plaintext. Imagine that I have 500 wooden blocks, each with a letter on it. I line them up to spell out a message (the plaintext). Then I replace 300 of these blocks with blocks from another set (confusion through substitution). Then I rearrange all the blocks (diffusion through transposition) and leave them in a pile. For you to reconstruct my original sentence, you would need to swap back the correct blocks and put them all in the correct order. Good luck!

Confusion is meant to create a complicated relationship between the key and the resulting ciphertext. This relationship must be as complex as possible so that the key cannot be recovered by analyzing the ciphertext. Each value in the ciphertext should depend on several parts of the key, yet to an observer the mapping between key values and ciphertext values must appear completely random.

Diffusion, on the other hand, means that a single plaintext bit influences multiple ciphertext bits. Changing one value in the plaintext should change multiple values in the ciphertext, not just one. In fact, in a truly strong block cipher, changing one bit of the plaintext should change about 50 percent of the bits in the ciphertext: flip just one plaintext bit, and roughly half of the ciphertext changes.
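This avalanche effect is easy to observe empirically. The sketch below (again assuming the pyca/cryptography package) flips a single plaintext bit and counts how many of the 128 bits of an AES ciphertext block change; the count typically lands near 64.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_block(key, block):
    # Encrypt exactly one 16-byte block with AES (raw block operation via ECB).
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

key = os.urandom(16)
block = os.urandom(16)
flipped = bytes([block[0] ^ 0x01]) + block[1:]   # flip a single plaintext bit

c1 = encrypt_block(key, block)
c2 = encrypt_block(key, flipped)
diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
print(f"{diff} of 128 ciphertext bits changed")   # usually close to 64
```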

Block ciphers use both confusion and diffusion in their operation. Figure 6-10 shows a conceptual example of a simple block cipher. It is given four blocks of four bits each to process. The block algorithm shown has two layers of four-bit substitution boxes, called S-boxes. Each S-box contains lookup tables used by the algorithm as instructions on how to encrypt the bits.

Figure 6-10. The message is divided into blocks of bits, on which the substitution and diffusion functions are performed


The key specifies (see Figure 6-10) which S-boxes are used in the process of scrambling the original message from readable plaintext into unreadable ciphertext. Each S-box contains different substitution and permutation methods that can be applied to each block. This is a very simple example; in reality, most block ciphers work with blocks of 32, 64, or 128 bits and can use many more S-boxes.

As stated earlier, block ciphers perform mathematical functions on blocks of bits. Stream ciphers, in contrast, do not divide the message into blocks; they treat the message as a stream of bits and perform mathematical functions on each bit individually.

When using a stream cipher, the encryption process converts each bit of plaintext into a bit of ciphertext. Stream ciphers use a keystream generator that produces a stream of bits that are XORed with plaintext bits to produce a ciphertext. This is shown in Figure 6-11.

Figure 6-11. In a stream cipher, the bits generated by the keystream generator are XORed with the plaintext bits of the message
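A toy illustration of the keystream-XOR idea is shown below. The “keystream generator” here is Python's general-purpose random module seeded with the key, which is emphatically not cryptographically secure; it only demonstrates that the same key reproduces the same keystream, so applying the XOR a second time restores the plaintext.

```python
import random

def toy_keystream(key: int, length: int) -> bytes:
    """Illustrative keystream generator seeded by the key (random module is NOT secure)."""
    rng = random.Random(key)
    return bytes(rng.getrandbits(8) for _ in range(length))

def stream_xor(key: int, data: bytes) -> bytes:
    keystream = toy_keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

key = 20240611
ciphertext = stream_xor(key, b"stream ciphers work bit by bit")
plaintext = stream_xor(key, ciphertext)   # same key -> same keystream -> XOR undoes itself
print(plaintext)
```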

NOTE. This process is very similar to the one-time pad encryption described earlier. The individual bits of the one-time pad are XORed with the individual bits of the message; in a stream algorithm, the individual bits produced by the keystream generator are likewise XORed with the bits of the message.

If a cryptosystem relied on a symmetric stream algorithm alone, an attacker who obtained a copy of the plaintext and the resulting ciphertext could XOR them together, recover the keystream that was used, and later use it to decrypt other messages. For this reason, the keystream is made dependent on a secret key.

In block ciphers, the key determines which functions are applied to the plaintext and in what order. The key ensures the randomness of the encryption process. As stated earlier, most encryption algorithms are open source, so people know how they work. The only secret is the key. In stream ciphers, randomness is also achieved through the key, making the stream of bits with which the plaintext is combined as random as possible. This concept is illustrated in Figure 6-12. As you can see in this figure, both the sender and the recipient must have the same key to generate the same key stream in order to be able to correctly encrypt and decrypt information.

Figure 6-12. The sender and recipient must have the same key to generate the same key stream



Initialization vectors (IVs) are random values that the algorithm uses to ensure that no patterns appear in the encryption process. They are used together with the key and do not need to be encrypted when sent to the recipient. Without an initialization vector, two identical plaintexts encrypted with the same key produce identical ciphertexts. Such a pattern would make it significantly easier for an attacker to break the encryption method and recover the key. If a message contains a repeated phrase or word, each repetition of that plaintext should produce different ciphertext, so that no pattern is created. The initialization vector is used together with the key precisely to add more randomness to the encryption process.
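The effect of the IV can be demonstrated with AES in CBC mode (a sketch assuming the pyca/cryptography package): encrypting the same one-block plaintext twice under the same key yields identical ciphertext when the IV is fixed, and different ciphertext when a fresh random IV is used.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!"          # exactly one 16-byte block

def encrypt_cbc(key, iv, data):
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(data) + enc.finalize()

fixed_iv = bytes(16)
print(encrypt_cbc(key, fixed_iv, plaintext).hex())
print(encrypt_cbc(key, fixed_iv, plaintext).hex())        # identical -> leaks a pattern

print(encrypt_cbc(key, os.urandom(16), plaintext).hex())
print(encrypt_cbc(key, os.urandom(16), plaintext).hex())  # differ, same plaintext and key
```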

Strong and efficient stream ciphers have the following characteristics:

  • Long periods of non-repeating patterns in key stream values. The bits generated by the key stream must be random.
  • Statistically unpredictable key stream. The bits produced at the output of the keystream generator should not be predictable.
  • The key stream has no linear relationship to the key. If someone has obtained the key stream values, this should not result in them receiving the key value.
  • Statistically uniform key stream (approximately equal numbers of zeros and ones). The key stream should not be dominated by zeros or ones.
Stream ciphers require a great deal of randomness and encrypt individual bits one at a time. This demands more processing resources than a block cipher, which is why stream ciphers are better suited to hardware implementations, while block ciphers, which demand less of the processor, are easier to implement in software.
NOTE. Of course, there are both block ciphers, implemented at the hardware level, and stream ciphers, operating at the software level. The above statement is simply a "best practice" design and implementation guideline.


Stream ciphers and one-time pads. Stream ciphers provide the same type of protection as one-time pads and work in a similar way. In practice, stream ciphers cannot provide the same level of security as a true one-time pad, because they are implemented as software and automated keystream generators; on the other hand, that is exactly what makes stream ciphers practical.


Previously, we looked at symmetric and asymmetric algorithms and noted that symmetric algorithms are fast but have some disadvantages (poor scalability, complex key management, providing only confidentiality), while asymmetric algorithms do not have these disadvantages, but they are very slow. Now let's look at hybrid systems that use both symmetric and asymmetric encryption methods.

Combined use of asymmetric and symmetric algorithms


Public key cryptography uses two keys (a public and a private key) generated by an asymmetric algorithm; it is used to protect symmetric encryption keys and to distribute them. The secret key is generated by a symmetric algorithm and is used for the bulk of the encryption. The result is a hybrid use of two different algorithms, symmetric and asymmetric. Each algorithm has its own advantages and disadvantages, and using them together lets you take the best from each.

In a hybrid approach, these two technologies complement each other, each performing its own functions. The symmetric algorithm produces the keys used to encrypt the bulk of the data, while the asymmetric algorithm produces the keys used to automatically distribute symmetric keys.

The symmetric key is used to encrypt the messages you send. When your friend receives a message you encrypted, he needs to decrypt it, which requires the symmetric key that encrypted your message. You do not want to send that key in an insecure manner, because the message could be intercepted and the unprotected key extracted by an attacker and later used to decrypt and read your messages. You should not use a symmetric key to encrypt messages unless the key is properly protected. To secure the symmetric key, an asymmetric algorithm can be used to encrypt it (see Figure 6-13). But why use a symmetric key to encrypt messages and an asymmetric key to encrypt the symmetric key? As mentioned earlier, the asymmetric algorithm is slow because it uses more complex mathematics. Since your message will most likely be longer than the key, it makes sense to use the faster (symmetric) algorithm for the message and the slower (asymmetric) algorithm, which also provides additional security services, for the key.

Figure 6-13. In a hybrid system, the asymmetric key is used to encrypt the symmetric key and the symmetric key is used to encrypt messages


How does this work in practice? Say Bill sends Paul a message and wants only Paul to be able to read it. Bill encrypts the message with a secret key; now he has ciphertext and a symmetric key. The key must be protected, so Bill encrypts the symmetric key with an asymmetric key. Asymmetric algorithms use a private and a public key, so Bill encrypts the symmetric key with Paul's public key. Bill now has ciphertext for the message and ciphertext for the symmetric key. Why did Bill encrypt the symmetric key with Paul's public key rather than with his own private key? If Bill had encrypted it with his own private key, anyone could decrypt it with Bill's public key and obtain the symmetric key. But Bill does not want anyone with his public key to be able to read his messages to Paul; he wants only Paul to have that ability. So Bill encrypts the symmetric key with Paul's public key. If Paul has protected his private key well, only he will be able to read Bill's message.

Paul receives Bill's message and uses his private key to decrypt the symmetric key. Paul then uses the symmetric key to decrypt the message. Now Paul can read an important and confidential message from Bill.
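A compact sketch of Bill and Paul's exchange is shown below, with RSA-OAEP wrapping a fresh AES-GCM session key; the library (pyca/cryptography) and parameter choices are illustrative assumptions, not part of the original text.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Paul's key pair; Bill only needs the public half.
paul_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
paul_public = paul_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# Bill: encrypt the message with a fresh symmetric (session) key ...
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"Meet me at noon. -Bill", None)

# ... then wrap the session key with Paul's PUBLIC key (the "digital envelope").
wrapped_key = paul_public.encrypt(session_key, oaep)

# Paul: unwrap the session key with his PRIVATE key, then decrypt the message.
recovered_key = paul_private.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
print(plaintext)
```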

When we say that Bill uses a key to encrypt a message and Paul uses the same key to decrypt it, this does not mean that they do all these operations manually. Modern software does all this for us without requiring us to have special knowledge to use it.

Everything here is quite simple, you need to remember the following aspects:

  • An asymmetric algorithm performs encryption and decryption using private and public keys that are mathematically related.
  • The symmetric algorithm performs encryption and decryption using a shared secret key.
  • A symmetric (secret) key is used to encrypt real messages.
  • The public key is used to encrypt the symmetric key for secure transmission.
  • A secret key is the same as a symmetric key.
  • An asymmetric key can be private or public.
So, when using a hybrid system, the symmetric algorithm creates a secret key used to encrypt data or messages, and the asymmetric key encrypts the secret key.

A session key is a symmetric key used to encrypt messages exchanged between two users. A session key is no different from the symmetric key described earlier, except that it is valid only for one communication session between the users.

If Tanya has a symmetric key that she constantly uses to encrypt messages between her and Lance, that symmetric key does not need to be regenerated or changed. They simply use the same key every time they interact using encryption. However, prolonged reuse of the same key increases the likelihood of its interception and compromise of secure communications. To avoid this, you should generate a new symmetric key each time Tanya and Lance need to communicate, and use it only for one communication session, and then destroy it (see Figure 6-14). Even if they need to interact again in just an hour, a new session key will be generated.

Figure 6-14. A session key is generated for each user interaction session and is valid only within that session


Digital envelopes. When people are first introduced to cryptography, the combined use of symmetric and asymmetric algorithms can be confusing. However, these concepts are important to grasp because they are truly core, fundamental concepts of cryptography. This process is not used only in an email client or in a handful of products; it governs how data and symmetric keys are handled whenever they are transmitted.
The combined use of these two technologies is called a hybrid approach, but it also has a more general name: a digital envelope.




The session key provides a higher level of protection compared to a static symmetric key, because it is valid only for one communication session between two computers. If an attacker is able to intercept the session key, he will be able to use it to gain unauthorized access to the transmitted information for only a short period of time.

If two computers need to communicate using encryption, they must first go through a “handshake” process in which they agree on an encryption algorithm that will be used to transmit a session key to further encrypt data as the computers communicate. Essentially, two computers establish a virtual connection with each other, called a session. After a session ends, each computer destroys any data structures created for that session, frees resources, and, among other things, destroys the used session key. The operating system and applications perform these things in the background and the user does not need to worry about it. However, the security professional must understand the differences between key types and the issues associated with them.


NOTE. Private and symmetric keys must not be stored and/or transmitted in plaintext. Although this seems obvious, many software products have already been compromised for this very reason.

Wireless security issues. We looked at the various 802.11 standards and the WEP protocol in Domain 05. Among WEP's long list of problems is one related to data encryption: if only WEP is used to encrypt wireless traffic, most implementations use a single static symmetric key to encrypt packets. One of the changes and advantages of the 802.11i standard is that each packet is encrypted with a unique session key.

Hello!
Let's look at what symmetric and asymmetric cryptography are - why they are called that, what they are used for, and how they differ.

To be precise, it is more correct to say symmetric and asymmetric encryption algorithms.

Cryptography (from the Greek kryptos, "hidden") is the science of hiding what is written, the science of concealing information.

Most of the encryption algorithms used are open, that is, the description of the algorithm is available to everyone. The encryption key is secret, without which it is impossible to encrypt, much less decrypt, information.

Symmetric encryption algorithms are algorithms that use the same key for encryption and decryption. That is, if we want to exchange encrypted messages with a friend, we must first agree on what encryption key we will use. That is, we will have one encryption key for two.

In symmetric algorithms, the encryption key is a weak point, and more care must be taken to ensure that this key is not found out by others.

Asymmetric encryption algorithms are algorithms in which different but mathematically related keys are used for encryption and decryption. Such related keys are called a cryptopair. One of them is closed (private), the second is open (public). At the same time, information encrypted with a public key can only be decrypted using a private key, and vice versa, what is encrypted with a private key can only be decrypted using a public key.
You store the private key in a safe place, and no one knows it except you, and you distribute a copy of the public key to everyone. Thus, if someone wants to exchange encrypted messages with you, they will encrypt the message using your public key, which is available to everyone, and this message can only be decrypted using your private key.

Now about why and why symmetric and asymmetric algorithms are used:
The table shows that to encrypt a large amount of information, for stream encryption (for example, VPN with encryption), fast and undemanding symmetric algorithms are used.

But if we need to maximally secure a small amount of information, while we are not constrained by time and computing resources, then we can use asymmetric cryptography.

In life, of course, everything is a little different than in theory.
In life, a combination of symmetric and asymmetric algorithms is used.

For example, VPN with encryption:

  • In the first step, asymmetric algorithms are used to establish a shared symmetric encryption key.
  • In the second step, the streaming data is encrypted with symmetric algorithms using the key obtained in the first step.

Thus, it is common practice to use a symmetric key to quickly encrypt large amounts of data. In this case, to exchange and transmit a symmetric key, asymmetric encryption algorithms are used.
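A minimal sketch of this two-step pattern, assuming the pyca/cryptography package and using X25519 key agreement plus HKDF in place of whatever specific mechanism a real VPN protocol (IKE, TLS, etc.) would use:

```python
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Step 1: each side generates an asymmetric key pair and exchanges public keys.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()

client_shared = client_priv.exchange(server_priv.public_key())
server_shared = server_priv.exchange(client_priv.public_key())
assert client_shared == server_shared        # both sides computed the same secret

# Derive a symmetric session key from the shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"vpn session").derive(client_shared)

# Step 2: bulk traffic is protected with the fast symmetric key.
nonce = os.urandom(12)
packet = AESGCM(session_key).encrypt(nonce, b"tunnel payload", None)
print(packet.hex())
```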

Classical, or single-key, cryptography relies on symmetric encryption algorithms, in which encryption and decryption differ only in the order of execution and the direction of some steps. These algorithms use the same secret element (the key), and the second operation (decryption) is a simple inversion of the first (encryption). Therefore, each participant in the exchange can usually both encrypt and decrypt messages. The schematic structure of such a system is shown in Fig. 2.1.


Fig. 2.1.

On the sending side there is a message source and a key source. The key source selects a specific key K from all possible keys of the system. This key K is transmitted to the receiving party in some way that, it is assumed, cannot be intercepted, for example by a special courier (which is why symmetric encryption is also called private-key encryption). The message source generates a message M, which is then encrypted using the selected key. The encryption procedure produces an encrypted message E (also called a cryptogram). The cryptogram E is then transmitted over a communication channel. Since the channel is open and unprotected, for example a radio channel or a computer network, the transmitted message can be intercepted by an adversary. On the receiving side, the cryptogram E is decrypted using the key, and the original message M is recovered.

If M is the message, K is the key, and E is the encrypted message, then we can write

E = f(M, K),

that is, the encrypted message E is some function of the original message M and the key K. The encryption method or algorithm used in the cryptographic system determines the function f in this formula.

Because natural languages are highly redundant, it is extremely difficult to make a meaningful change directly to an encrypted message, so classical cryptography also provides some protection against the injection of false data. If the natural redundancy is not enough to reliably protect a message from modification, the redundancy can be increased artificially by appending a special check value to the message (a message authentication code, called an imitovstavka in the Russian standards).

There are different methods of private-key encryption (Fig. 2.2). In practice, permutation and substitution algorithms, as well as combined methods, are often used.


Fig. 2.2.

In permutation methods, the characters of the source text are swapped with one another according to a certain rule. In substitution (replacement) methods, plaintext characters are replaced with ciphertext equivalents. To improve security, text encrypted by one method can be encrypted again by another method, producing a combination or composition cipher. The block and stream symmetric ciphers used in practice today are also classified as combined ciphers, since they use several operations to encrypt a message; they are covered in the lectures "Principles of constructing block ciphers with a private key", "DES and AES encryption algorithms", and "The GOST 28147-89 cryptographic data transformation algorithm". This lecture discusses the substitution and permutation ciphers that people have used since ancient times. It is worth becoming familiar with these ciphers, because substitution and permutation are used as component operations in modern block ciphers.


HTTP is what allows data to be transferred. It was originally created for sending and receiving documents containing links that point to other resources.

The abbreviation stands for "HyperText Transfer Protocol." In terms of the OSI model, HTTP is an application layer protocol.

To better understand what HTTP does, consider a simple analogy. Imagine you are chatting with a foreigner on a social network. He sends you a message in English. You receive it but cannot understand the content because you do not speak the language well, so you use a dictionary to decipher it. Having understood the message, you write a reply in your own language and send it, and the foreigner deciphers it with the help of a translator. HTTP plays a role similar to that translator: with its help, the browser can interpret the content of web pages and display it.

What is HTTP for?

The HTTP protocol is used to exchange information using a client-server model. The client composes and transmits a request to the server, then the server processes and analyzes it, after which a response is created and sent to the user. At the end of this process, the client issues a new command, and everything is repeated.

Thus, the HTTP protocol allows various client applications (usually browsers) to exchange information with web servers and connect to web resources. Today this protocol underpins the operation of the entire Web. HTTP is also used as a transport for other protocols, such as WebDAV or SOAP; in that case HTTP serves as the means of transportation. Many programs rely on HTTP as their primary tool for exchanging information, with data presented in various formats, for example JSON or XML.

HTTP is a protocol for exchanging information over a TCP/IP connection. The server typically uses TCP port 80 for this purpose; if no port is specified, client software connects to TCP port 80 by default, although other ports may be used.
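For example, a plain HTTP request over TCP port 80 can be made with Python's standard http.client module (the host name below is just the reserved example domain):

```python
import http.client

# Plain HTTP request over TCP port 80 (the default); the exchange is not encrypted.
conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client"})
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
body = response.read()
conn.close()
```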

By itself, HTTP does not encrypt anything: requests and responses are sent in plaintext. Encryption is added by HTTPS, where the bulk of the traffic is protected with symmetric cryptosystems, that is, the same key is used to encrypt and decrypt the information.

What is the difference between HTTP and HTTPS

The difference is visible even in the abbreviations. HTTPS stands for HyperText Transfer Protocol Secure. HTTP is thus a standalone protocol, while HTTPS is an extension that protects it. HTTP transmits information unprotected, whereas HTTPS provides cryptographic protection. This matters especially for resources that handle sensitive logins and credentials, such as social networks or payment sites.

What are the dangers of transmitting unprotected data? An interceptor can pass it to attackers at any moment. HTTPS has a more complex technical organization, which makes it possible to protect information reliably and prevent unauthorized access to it. Another difference lies in the ports: HTTPS typically works on port 443.

Thus, HTTP is used for data transfer, and HTTPS allows for secure data transfer using encryption and authorization on resources with a high level of security.

Additional functionality

HTTP is rich in functionality and compatible with various extensions. The 1.1 specification in use today allows the Upgrade header to be used to switch to other protocols during an exchange. To do this, the client sends the server a request containing this header. If the server requires the switch to a specific protocol, it can return a response with the status "426 Upgrade Required".

This feature is especially relevant for exchanging information via WebSocket (specified in RFC 6455), which allows data to be exchanged at any time without extra HTTP requests. To switch to WebSocket, the client sends a request with the Upgrade header set to "websocket". The server responds with "101 Switching Protocols", after which data transfer over WebSocket begins.

TERENIN Alexey Alekseevich, Candidate of Technical Sciences

Cryptographic algorithms used to ensure information security when interacting on the INTERNET

A brief overview of the most common encryption algorithms today, their description, as well as problems encountered during their implementation and significant aspects in practical use are presented.

Protecting information using cryptographic transformation methods involves changing its components (words, letters, syllables, numbers) using special algorithms or hardware solutions and key codes, that is, bringing it to an implicit form. To familiarize yourself with encrypted information, the reverse process is used: decoding (decryption). The use of cryptography is one of the common methods that significantly increases the security of data transmission in computer networks, data stored in remote memory devices, as well as when exchanging information between remote objects.

For conversion (encryption), some algorithm or device is usually used that implements a given algorithm, which may be known to a wide range of people. The encryption process is controlled using a periodically changing key code, ensuring an original representation of information each time when using the same algorithm or device. Knowing the key allows you to simply and reliably decrypt the text. However, without knowing the key, this procedure can be practically impossible even with a known encryption algorithm.

Even simple transformation of information is a very effective means of hiding its meaning from most unskilled violators.

Brief historical overview of the development of encryption

The origins of cryptography go back to Egyptian hieroglyphs. Since ancient times, when Egypt and Persia flourished, messengers were used for the most important state and military missions, carrying the text of a message on parchment or memorized in their heads to relay in person, the latter being preferred. Even then, more or less successful ways of protecting transmitted information from interception appeared. A well-known legend of the ancient world tells of a captured king who had a message to his allies tattooed on the shaved head of a slave; when the hair grew back, the slave traveled to the recipients of the message, and the king was freed. This was a prototype of modern steganography.

The ancient Greeks used round sticks of the same diameter, around which strips of parchment were wound. The message was written lengthwise along the stick. The text could be read only by winding the strip around a stick of the same diameter.

In Ancient Rome, cryptography (from the Greek for "secret writing") was already clearly beginning to take shape as a science. The Caesar cipher appeared, in which each letter is replaced by the letter three positions away in the alphabet.

In medieval, intrigue-ridden Europe and Central Asia, cryptography and cryptanalysis, the methods of breaking cipher texts, developed rapidly. The first systematic work on cryptography is considered to be the book of the architect Leon Battista Alberti (1404-1472). One of the first cryptanalysts was François Viète (1540-1603), at the court of King Henry IV of France. At the same time, advisors from the Argenti family, who can also be called cryptanalysts, served at the court of the Pope. The entire period up to the middle of the 17th century is full of works on cryptography and cryptanalysis.

In the 19th and the first half of the 20th century, many countries, including Russia, used encryption methods for secret diplomatic correspondence whose keys were composed from excerpts of particular texts in ordinary books (code books).

From the beginning of the twentieth century, starting with the First World War, special encryption machines came into use.

The German Enigma machine, the code of which was revealed by the British, is widely known. In order not to give away the fact of the disclosure of the German code, the British government made great sacrifices among the civilian population, without warning the residents of two large cities about the impending bombing. But this then helped to gain a significant advantage in the northern naval battles with Germany, when the invincible German submarines and cruisers were destroyed.

After World War II, computers took over cryptography. For a long time, this was the domain of the most powerful supercomputers of their time.

Publications on this topic were strictly classified, and the use of scientific research in this area was a state prerogative. The only publicly available works were von Neumann's textbook of the 1940s, which described, in addition to the principles of constructing computing systems, some possible malicious methods of disrupting a "legitimate" computing process, and Shannon's classic work, which laid the foundations of computer cryptography.

Open publications began appearing in the 1970s, starting with the Diffie-Hellman paper of 1976 (in 1970, James Ellis in Great Britain had already made a secret invention in this field). The best-known asymmetric cryptography algorithm is RSA, developed by Ronald Rivest, Adi Shamir, and Leonard Adleman in 1977. The RSA algorithm is of great importance because it can be used both for public key encryption and for creating an electronic digital signature.

This was a revolutionary period in the development of cryptographic science. New methods for secretly distributing key information in open computing systems emerged, and asymmetric cryptography was born.

But even after this, for a long time, the prerogative of using cryptography in data protection was with government services and large corporations. The computing technology of that time, with the power necessary for cryptographic transformations, was very expensive.

At that time, the main state standards of cryptographic algorithms appeared (USA and some European countries), the use of which was prescribed when working with information classified as state secrets.

The veil of secrecy around these technologies even led to cryptographic algorithms being classed as munitions in the United States, with a ban on exporting encryption hardware and software. Later, export restrictions were placed on the length of keys used in encryption algorithms outside the United States, which allowed American intelligence agencies to decrypt messages with available computing power without knowing the shortened key. On March 1, 2001, the export restrictions were lifted. After the events of September 11 of that year, government control tightened again, and the US government has considered reintroducing export controls on encryption tools.

Let's go back to the 70s. Since that time, neither scientific research nor the development of computing tools has stopped. The computing power of supercomputers increases several times every few years. A personal computer appears. The power of a personal computer is approximately equal to the power of a supercomputer ten years ago. Now personal computers have become even more powerful.

Since the 1980s, ordinary users have been able to use cryptographic tools on their own computers, something government agencies strongly resisted, since it becomes harder to monitor the activities of a country's citizens, including criminal elements.

The release of Phil Zimmermann's PGP (Pretty Good Privacy) program (version 1.0 appeared in 1991) as openly available, free software gave ordinary computer users powerful capabilities. Zimmermann even became the target of a criminal investigation over alleged export violations, although the case was eventually dropped.

Constantly increasing computing power forced the use of increasingly complex crypto-transformation algorithms or increasing the length of the keys used in encryption.

Standards for cryptographic algorithms were becoming outdated and becoming unreliable. Information locked with a certain key could no longer be kept confidential for long enough - as long as required by government regulations. For example, storing information in complete secret in encrypted form for 5 years meant that the enemy, possessing the most powerful computing means, constantly searching through possible keys, would most likely not have found the necessary key to decrypt the stored information during this period.

Competitions began to be held to reveal some information encrypted using the algorithm of one of the standards. The winner was awarded a substantial cash prize, as well as worldwide fame in the information community. By uniting ordinary computers in a computer network to work in parallel on solving a given problem, users gathered in groups and selected a key together.

A key length of 48 bits means that up to 2^48 trials are needed. Increasing the key length by just 16 bits means 2^16 times as many trials.

But even this key size made it possible for united groups to solve the problem of breaking the cipher in days and even hours of parallel work. Subsequently, it was necessary to switch to keys that were several times longer than those mentioned. But this was only a temporary measure, and new standards for crypto-transformation algorithms (AES in the USA) were recently adopted.

Currently, many publications devoted to this problem have appeared in the press. Numerous books are published, both translated and by Russian authors. The problem of protecting information from disclosure and modification can be solved by cryptography. The complexity of the mathematical apparatus of modern cryptography exceeds that used to develop nuclear weapons and space systems.

Modern cryptography is divided into symmetric and asymmetric. Symmetric cryptography covers stream, block, and composite (combined) ciphers. Asymmetric cryptography is more resource-intensive, while symmetric cryptography suffers from the problem of efficient key distribution. Modern secure exchange systems are therefore based on mixed cryptography: at the beginning of an exchange session, the parties send each other secret session keys using asymmetric cryptography, and those keys are then used to encrypt the transmitted data symmetrically. An asymmetric cryptosystem thus allows keys to be distributed for symmetric encryption systems.

Government and military telecommunications systems use exclusively symmetric encryption (most often using one-time keys). This is due to the fact that the security of public key systems has not been strictly mathematically proven, but the opposite has not been proven either.

Information encryption should not be accepted as a panacea for all information threats. It should be perceived as one of the mandatory information protection measures as part of a comprehensive information security system. The use of encryption should be combined with legislative, organizational and other protective measures.

Symmetric encryption algorithms

Encryption algorithms are designed to solve the problem of ensuring information confidentiality. Currently, cryptographic methods are intensively used to hide information. Since ancient times, encryption has been and remains the most effective form of protection.

Encryption is defined as the invertible transformation of unprotected (open) information into an encrypted (closed) form, the ciphertext, in which it is inaccessible to an attacker. Encryption uses keys, possession of which determines the ability to encrypt and/or decrypt information. It is important to note that the encryption method itself need not be kept secret, since knowing the method alone does not allow the ciphertext to be decrypted.

Modern cryptosystems can be clearly divided according to the method of using keys into cryptosystems with a secret key (symmetric) and with a public key (asymmetric). If the same key is used for encryption and decryption, the cryptosystem is called symmetric.

Symmetric cryptosystems include DES, AES, GOST 28147-89, etc. A new direction in cryptography was the invention of asymmetric public key cryptosystems, such as RSA, DSA or El-Gamal.

In asymmetric cryptosystems, different keys that are practically indeducible from each other are used for encryption and decryption, one of which (the decryption key) is made secret, and the other (the encryption key) is made public. This makes it possible to transmit secret messages over an insecure channel without first transmitting a secret key. It was public key cryptography that broke the vicious circle of symmetric ciphers, when in order to organize the exchange of secret information it was necessary to first distribute secret keys.

Public key cryptosystems will be discussed in detail later; for now, let us return to symmetric cryptosystems.

The most important component of a symmetric cryptosystem is the cipher - a pair of procedures for the reversible transformation of the plaintext M into the ciphertext M':

M' = E(M),
M = D(M'),

where E is the encryption function and D is the decryption function.
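As a minimal illustration of this pair of transformations, the following sketch (assuming the third-party Python cryptography package; any symmetric library would serve equally well) encrypts a message and recovers it with the same shared key:

    # Minimal illustration of M' = E(M) and M = D(M') with a single shared key.
    # Fernet (from the "cryptography" package) is AES-CBC plus an HMAC under the hood.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the shared secret key
    cipher = Fernet(key)                 # the same object provides both E and D

    M = b"attack at dawn"
    M_enc = cipher.encrypt(M)            # M' = E(M)
    assert cipher.decrypt(M_enc) == M    # M  = D(M')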

The generally accepted approach in cryptography is to construct the cipher so that its secrecy depends only on the secrecy of the key K (Kerckhoffs' principle). The cipher must therefore resist cracking even if a potential cryptanalyst knows the entire encryption algorithm except the value of the key in use and possesses the full text of the intercepted cryptogram.

Practice has shown that the more widely known an algorithm is and the more people have worked with it, the more thoroughly tested, and therefore the more reliable, it becomes. Publicly known algorithms have thus withstood the test of time, whereas secret government ciphers have revealed numerous errors and shortcomings, since it is impossible to take everything into account.

The generally accepted scheme for constructing symmetric cryptosystems is a cycle of permutations and substitutions performed on the bits of a fixed-length block, with the exact processing determined by the secret key.


An encryption algorithm is considered strong if, given the ciphertext but not the secret key, it is impossible to obtain any information about the plaintext. It has been strictly proven that an absolutely strong cipher cannot be constructed except in the case where the size of the secret key is equal to (or greater than) the size of the encrypted data. This case is difficult to implement in practice, so the cryptographic protection tools actually used and available on the market employ ciphers for which recovering the plaintext from the ciphertext is computationally hard, that is, it requires so many resources that the attack becomes economically impractical.
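The exceptional case mentioned above, where the key is at least as long as the data, is the one-time pad. A toy sketch in plain Python (illustrative only):

    # One-time pad: the key is as long as the message and is never reused.
    # XOR is its own inverse, so the same function both encrypts and decrypts.
    import os

    def otp(data: bytes, key: bytes) -> bytes:
        assert len(key) >= len(data), "the pad must not be shorter than the data"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"top secret"
    pad = os.urandom(len(message))       # truly random, used once
    ciphertext = otp(message, pad)
    assert otp(ciphertext, pad) == message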

Among the symmetric ciphers, the most well-known and frequently used are the following (the block size in bits is denoted by b, the number of cycles is r, and the key length is l):

DES – the US government standard (b = 64, r = 16, l = 56). DES has now been shown to be insufficiently resistant to brute-force attacks.
Triple DES and DESX (b = 64, r = 16, l = 168 and 112, respectively) – sequential application of the DES algorithm with different keys, which provides significantly greater resistance to cracking.
IDEA (b = 64, r = 8, l = 128). Active research into its strength has revealed a number of weak keys, but the probability of encountering them is negligible.
RC5 – a parameterized cipher with a variable block size (b = 32, 64 or 128), number of rounds (r ≤ 255) and key length (l ≤ 2040 bits). Studies of its strength have shown that at b = 64 it is out of reach of differential cryptanalysis at r = 12 and of linear cryptanalysis at r = 7.
GOST 28147-89 – the Russian data encryption standard (b = 64, r = 32, l = 256). Many weak keys have been found for GOST, significantly reducing its effective strength in the simpler encryption modes. Assessing its cryptographic strength is further complicated by the fact that the most important part of the algorithm – the substitution nodes, or S-boxes in DES terminology – is not described in the standard, and the rules for generating them remain unknown. At the same time, it has been shown that there is a high probability of obtaining weak substitution nodes that simplify cryptanalysis of the cipher.
Blowfish is a 64-bit block cipher developed by Schneier in 1993, implemented through key-dependent permutations and substitutions. All operations are based on XORs and additions on 32-bit words. The key has a variable length (maximum 448 bits) and is used to generate several subkey arrays. The cipher was created specifically for 32-bit machines and is significantly faster than DES.

The USA has now adopted a new encryption standard, AES. A competition was held among encryption algorithms; the winner, Rijndael, formed the basis of AES. Rijndael is an iterative block cipher with variable block and key lengths. A more detailed description of the algorithm and of the results of the competition is given in the literature.
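A hedged sketch of using AES in practice (via the AES-GCM mode of the Python cryptography package; the particular mode and parameters are illustrative choices, not part of the standard itself):

    # AES (the Rijndael competition winner) in GCM mode.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # AES allows 128-, 192- or 256-bit keys
    aes = AESGCM(key)
    nonce = os.urandom(12)                      # 96-bit nonce, must never repeat per key

    ct = aes.encrypt(nonce, b"confidential data", None)
    assert aes.decrypt(nonce, ct, None) == b"confidential data"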

A fairly large number of symmetric algorithms have been developed, published and studied in the world (Table 1), of which only DES and its modification Triple DES have been sufficiently time-tested. The table does not include little-known and poorly studied algorithms, such as Safer, etc.

Table 1. Overview of symmetric encryption methods

Algorithm | Key length, bits | Block size, bits | Key-search cost, MIPS x years | Note
DES | 56 | 64 | — | Developed in 1977 by IBM for the US government. For 20 years no way was found to break the cipher other than exhaustive search over, on average, 25% of all keys, but modern computing power makes such a search feasible.
Triple DES | 168 (112 effective) | 64 | — | The DES algorithm repeated three times with different keys.
IDEA | 128 | 64 | — | Developed in 1992 by Lai and Massey. Not broken to date.
GOST 28147-89 | 256 | 64 | no data | The State standard of Russia.
RC5 | variable, up to 2040 | variable | 10^3 and above | A 40-bit key was brute-forced in 1997 in 3.5 hours, a 48-bit key in 313 hours.
Blowfish | variable, up to 448 | 64 | no data | Developed by Schneier in 1993. This Feistel cipher was created specifically for 32-bit machines and is significantly faster than DES.
AES (Rijndael) | 128, 192 or 256 | 128, 192 or 256, independently of the key length | — | Proposed by Joan Daemen and Vincent Rijmen. The algorithm has no known security weaknesses (according to NIST).

At present, symmetric algorithms with key lengths greater than 100 bits (Triple DES, IDEA, etc.) cannot be broken by exhaustive key search. The domestic GOST algorithm, in comparison with them, is characterized by additional complexity both in generating the substitution nodes and in generating keys. For GOST there is also a high probability of generating a weak key, which in some encryption modes reduces the effective key space from 2^256 to 2^62.

Triple DES is a more proven algorithm than IDEA and provides acceptable performance. The Triple DES algorithm is the application of the DES algorithm three times to the same data, but with different keys.
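The encrypt-decrypt-encrypt (EDE) composition used by Triple DES can be sketched as follows; the "cipher" below is a plain XOR stand-in chosen only to show how the three passes are chained, not a real DES implementation:

    # Triple DES composition: C = E_k3(D_k2(E_k1(M))), with decryption in reverse order.
    def toy_encrypt(block: bytes, key: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(block, key))   # XOR stand-in for DES

    def toy_decrypt(block: bytes, key: bytes) -> bytes:
        return toy_encrypt(block, key)                    # XOR is self-inverse

    def ede_encrypt(block, k1, k2, k3):
        return toy_encrypt(toy_decrypt(toy_encrypt(block, k1), k2), k3)

    def ede_decrypt(block, k1, k2, k3):
        return toy_decrypt(toy_encrypt(toy_decrypt(block, k3), k2), k1)

    k1, k2, k3 = b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"    # three independent 64-bit keys
    m = b"8 bytes!"
    assert ede_decrypt(ede_encrypt(m, k1, k2, k3), k1, k2, k3) == m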

DES has made its way into Russia and is quite widely used in practice as a component of various software and hardware products, the best known of which are the S.W.I.F.T. system, the security modules of VISA and EUROPAY, the security modules of ATMs and point-of-sale terminals and, finally, smart cards. Smart cards in particular provoke especially intense discussion around data encryption algorithms. At the same time, there are serious reasons to believe that the reliability of domestic cryptosystems of defense-conversion origin is superior to that of their foreign counterparts.

However, Russian legislation, like the legislation of many other countries, only allows the use of national encryption standards.

The GOST 28147-89 algorithm is built on the same principle as DES: it is a classic block cipher with a secret key, but it differs from DES in its longer key, larger number of rounds, and simpler structure of the rounds themselves. Table 2 shows its main parameters, for convenience in comparison with those of DES.

Table 2. Comparison of the parameters of the DES and GOST ciphers

Parameter | DES | GOST 28147-89
Block size, bits | 64 | 64
Number of rounds | 16 | 32
Key length, bits | 56 | 256

If secret information needs to be exchanged between parties who trust each other - for example, members of the same organization - symmetric cryptography can be used. Of course, both (or all) parties must already possess the encryption keys needed for the interaction.

Briefly, the information exchange scenario is as follows:

  • a file containing the secret information is created, or an existing one is used;
  • the file is encrypted with a key known to both parties, using the agreed encryption algorithm;
  • the encrypted file is transferred to the recipient; the medium is not particularly important - it can be a floppy disk, e-mail, a network message or a modem connection. To reduce risk, it is also very convenient to store all files containing secret information in encrypted form: then, if a computer, the laptop of an employee on a business trip, or a hard drive falls into the hands of an attacker, the files locked with the key cannot be read directly. Systems are now in use that automatically encrypt all information stored on a laptop; they also provide a forced-login (duress) mode, so that if an employee is forced to boot the laptop, entering a special password instead of the usual one destroys all the information (a recovery procedure is, of course, provided for afterwards). A hard drive can simply be removed from a computer, and extracting it from a protected area is far easier than removing a whole computer;
  • on the receiving side, the legitimate recipient, who possesses the key, opens the encrypted files for further use (a minimal sketch of this exchange follows the list).
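A minimal sketch of this scenario (again assuming the Python cryptography package; the file names are purely illustrative):

    # Sender and recipient share one symmetric key, agreed upon in advance.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()        # in reality distributed beforehand
    cipher = Fernet(shared_key)

    # Sender: encrypt the file before entrusting it to any medium (e-mail, disk, network).
    with open("report.txt", "rb") as src, open("report.txt.enc", "wb") as out:
        out.write(cipher.encrypt(src.read()))

    # Recipient: the same shared key opens the file for further use.
    with open("report.txt.enc", "rb") as f:
        restored = cipher.decrypt(f.read())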

Most modern methods of protective transformation can be classified into four large groups: permutations, substitutions (replacements), additive methods and combined methods. Permutation and substitution methods are usually characterized by short keys, and the reliability of their protection is determined by the complexity of the transformation algorithms. Additive methods, by contrast, use simple transformation algorithms and derive their cryptographic strength from increased key length.

Breaking the cipher

One way to break a cipher is to try every possible key. The criterion for deciding that a candidate key is correct is the presence of a “probable word” in the resulting text.

The set of all possible keys is enumerated and the ciphertext is decrypted with each of them. The resulting “pseudo-plaintext” is searched for the probable word. If the word is absent, the current text is rejected and the next key is tried; if it is found, the candidate key is displayed on the screen. The search then continues until the entire key space has been exhausted, so several keys whose “pseudo-plaintexts” contain the probable word may be found.

After the search is complete, the text is decrypted with each of the keys found, and the “pseudo-plaintext” is displayed on the screen for visual inspection. If the operator recognizes the text as genuine plaintext, the attack is finished; otherwise the candidate key is rejected and the next one is examined.
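A toy sketch of this search with a probable-word criterion; the key space here is a single byte (256 variants), chosen only so the example runs instantly:

    # Brute-force search over a deliberately tiny key space with a probable-word filter.
    PROBABLE_WORD = b"secret"

    def xor1(data: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in data)

    ciphertext = xor1(b"the secret meeting is at noon", 0x5A)

    candidates = []
    for key in range(256):                     # enumerate every possible key
        pseudo_plain = xor1(ciphertext, key)
        if PROBABLE_WORD in pseudo_plain:      # keep keys that produce the probable word
            candidates.append((key, pseudo_plain))

    for key, text in candidates:               # the operator inspects the survivors visually
        print(hex(key), text)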

The brute-force method can be countered by increasing the length of the encryption key: lengthening the key by only 8 bits multiplies the number of variants to be searched by 2^8 = 256, and by 64 bits - by 2^64.

Among the problems inherent in the use of cryptographic encryption algorithms, it is necessary to highlight the problem of key distribution. Before communicating parties can send encrypted messages to each other, they must exchange encryption keys over some secret channel. In addition, a huge number of keys must be kept up to date in the information exchange system.

Encryption algorithms by themselves do not make it possible to establish the integrity of a received message (i.e., to verify that it was not modified in transit). Authorship can be confirmed only by possession of a specific key, so anyone who obtains someone else's key can pass off their own messages as messages sent by that user.

The problem of distributing secret keys over a public communication channel can be solved by the Diffie-Hellman algorithm. But this algorithm belongs to asymmetric cryptographic algorithms. They use two keys: public and private.
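A toy sketch of Diffie-Hellman key agreement (the prime here is tiny and purely illustrative; real systems use primes of 2048 bits or more):

    # Diffie-Hellman: both sides derive the same secret without ever transmitting it.
    import secrets

    p = 0xFFFFFFFB                   # small public prime, for illustration only
    g = 5                            # public base

    a = secrets.randbelow(p - 2) + 1 # Alice's private exponent
    b = secrets.randbelow(p - 2) + 1 # Bob's private exponent

    A = pow(g, a, p)                 # sent openly by Alice
    B = pow(g, b, p)                 # sent openly by Bob

    assert pow(B, a, p) == pow(A, b, p)   # the shared session secret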

Asymmetric cryptographic algorithms developed rapidly in the 1970s. Such algorithms can also solve the problems of confirming authorship and authenticity, making it possible to organize the exchange of encrypted information between parties that do not trust each other. In addition, the use of asymmetric algorithms reduces by an order of magnitude the number of keys that must be distributed among the interacting parties. Asymmetric encryption systems include a publicly available database of public keys, which can be distributed over open communication channels; their disclosure in no way compromises the system, which is why such systems are called open (public key) systems.

Asymmetric encryption algorithms

Public key cryptosystems are usually built on a hard mathematical problem: computing the inverse of a given function. Such functions are called one-way functions, i.e., inverting them is practically infeasible. The essence of the encryption method is that the function is computed in the forward direction, using the receiving subscriber's public key, to encrypt a message, and the subscriber's secret key is used for decryption (computing the inverse function). As one might expect, few mathematical problems satisfying these requirements are known, and only a handful of them have been used to build ciphers applied in practice. Let us consider some of the best-known public key cryptosystems.

  • RSA. Based on the problem of factoring a large integer (finding its prime factors); the modulus is formed by multiplying two very large prime numbers. Widely used in cryptographic confidentiality and authentication protocols.
  • El-Gamal. Based on the discrete logarithm problem in a finite field. Used in electronic digital signature (EDS) standards DSS, GOST R34.10-94, etc.
  • Elliptic curves. Based on the discrete logarithm problem on elliptic curves in a finite field.

The inverse problems of factorization and discrete logarithm are solved by methods close to exhaustive search and, for large numbers, are computationally infeasible.
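A textbook-style RSA sketch with deliberately tiny primes: the public pair (n, e) suffices for encryption, while decryption requires the exponent d, which can only be computed from the prime factors of n - exactly the hard inverse problem described above.

    # Toy RSA: real keys use primes hundreds of digits long.
    p, q = 61, 53
    n = p * q                 # 3233, published as part of the public key
    e = 17                    # public exponent
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)       # private exponent, derivable only from p and q (Python 3.8+)

    m = 65                    # a message encoded as a number smaller than n
    c = pow(m, e, n)          # encryption with the public key
    assert pow(c, d, n) == m  # decryption with the private key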
Public key cryptosystems are used mainly in three areas:

  • concealment (encryption) of information;
  • authentication using digital signature;
  • distribution of keys protected against interception (the Diffie-Hellman cryptosystem).

The advantages and disadvantages of asymmetric cryptosystems are discussed in more detail in the literature.

Hash functions

Protocols for protecting integrity and authenticity, when generating message authentication codes and digital signatures, use cryptographic “compressing” hash functions, which produce a value with a fixed number of bits from a data block of arbitrary length.
To reduce the size of a digital signature and the time needed to generate and verify it, the signature is applied to hash values, which are usually much shorter than the original messages. Cryptographic hash functions are subject to a number of requirements intended to make it difficult to forge a digital signature by finding a modification of a data block for which the hash value, and hence the signature, remains unchanged.
The most widely used hash functions are based on a system of cyclically repeated permutations and substitutions (the length of the generated hash value in bits is indicated in parentheses); a short example of computing such digests follows the list:

  • MD5 (128);
  • SHA-1 (160);
  • GOST (256).

Table 1. List and parameters of hash functions

Hash function | Value length, bits | Block size, bits | Performance, Mb/s | Note
MD2 | 128 | 128 | no data | Developed by Ron Rivest in 1989. Collisions have been found in a simplified compression function.
MD4 | 128 | 512 | — | Developed by Ron Rivest in 1990. Collisions have been found.
MD5 | 128 | 512 | — | Developed by Ron Rivest in 1991. Collisions have been found in the compression function.
RIPEMD-160 | 160 | 512 | — | Developed in 1995 within the European RIPE project.
SHA-1 | 160 | 512 | — | Developed in 1995 by NIST.
GOST R 34.11-94 | 256 | 256 | — | The State standard of Russia.

Table 1 does not show rarely used and exotic hash functions, nor hash functions built on symmetric block ciphers according to the Meyer–Matyas and Davies–Price schemes.
The hash functions mentioned are described in more detail in the literature.
Although public key protection (asymmetric cryptosystems) has been in especially wide use since the late 1970s, it has a very serious drawback: extremely low performance. For this reason, a combined cryptographic protection scheme is usually used in practice: “public key” cryptography is used when establishing a connection and authenticating the parties, after which a session key is generated for the symmetric encryption that protects all traffic between the subscribers. The session key itself is also distributed using the public key.
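A hedged sketch of this combined scheme (RSA-OAEP for wrapping the session key and Fernet for the traffic, both from the Python cryptography package; the choice of primitives is illustrative):

    # Combined (hybrid) scheme: asymmetric wrapping of a symmetric session key.
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # The recipient publishes an RSA public key in advance.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: symmetric session key for the bulk traffic, wrapped with the public key.
    session_key = Fernet.generate_key()
    traffic = Fernet(session_key).encrypt(b"bulk data of the session")
    wrapped_key = public_key.encrypt(session_key, oaep)

    # Recipient: unwrap the session key with the private key, then decrypt the traffic.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(traffic) == b"bulk data of the session"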



Fig. 2. Scheme of the asymmetric cryptosystem algorithm

Table 2. Asymmetric cryptosystems

Method name | Cracking method (mathematical problem) | Cryptographic strength, MIPS | Note
RSA | factoring a large number into prime factors | 2.7·10^28 for a 1300-bit key | Developed in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman. Included in many standards.
El-Gamal (ElGamal) | finding a discrete logarithm in a finite field | with the same key length, the cryptographic strength equals that of RSA | Designed by ElGamal. Used in the digital signature algorithm DSA of the DSS standard.
Elliptic curves | the discrete logarithm problem on elliptic curves | cryptographic strength and speed are higher than those of RSA | A modern direction, developed by many leading mathematicians.

The RSA method is currently the de facto standard in information security systems and is recommended by the CCITT (the International Telegraph and Telephone Consultative Committee) in the X.509 standard. RSA is used in many international standards (S-HTTP, PEM, S/MIME, S/WAN, STT, SSL, PCT, SWIFT, ANSI X9.31, etc.), in credit card processing systems, and in operating systems to protect network protocols.
A huge amount of scientific research has been devoted to the RSA and El-Gamal methods: a large number of cryptanalytic attacks and defences against them have been studied, and their cryptographic strength has been calculated in detail as a function of key length and other parameters. Both methods have the same cryptographic strength (for the same key length) and roughly the same speed. Considering that the elliptic curve method is still being tested and has not been subjected to nearly as many cracking attempts as RSA and El-Gamal, the use of the latter two in encryption systems currently looks preferable.
A detailed description of these algorithms is given in the literature.

Electronic digital signature

If information is exchanged between parties that do not trust each other or that have an interest in acting against each other (a bank and a client, a shop and a buyer), asymmetric encryption methods must be used, together with the digital signature method.
It is necessary to ensure not only confidentiality, but also the integrity of the message (the inability to replace the message or change anything in it), as well as authorship. In addition, it is necessary to prevent the author of the message from refusing to send a signed message.
An electronic signature of a document allows you to establish its authenticity. In addition, cryptographic measures provide protection against the following malicious actions:

  • repudiation (refusal) – subscriber A declares that he did not send a message to B, although in fact he did;
  • modification (alteration) – subscriber B changes the document and claims that he received this document (modified) from subscriber A;
  • substitution - subscriber B generates a document (new) and states that he received it from subscriber A;
  • active interception - an intruder (connected to the network) intercepts documents (files) and changes them;
  • “masquerade” – subscriber B sends a document on behalf of subscriber A;
  • replay – subscriber B repeats a previously transmitted document that subscriber A had earlier sent to him.

All of the above types of malicious actions cause significant damage. In addition, the possibility of malicious acts undermines trust in computer technology. The authentication problem can be solved based on a cryptographic approach by developing special algorithms and programs.
When choosing an authentication algorithm and technology, it is necessary to provide reliable protection against all of the above types of malicious actions (threats). However, within the framework of classical (single-key) cryptography, it is difficult to protect against all of the above types of threats, since there is a fundamental possibility of malicious actions by one of the parties who owns the secret key.
No one can prevent a subscriber, for example, from generating any document, encrypting it using an existing key common to the client and the bank, and then declaring that he received this document from a legitimate transmitter.
It is effective to use schemes based on two-key cryptography. In this case, each transmitting subscriber has its own secret signature key, and all subscribers have the non-secret public keys of the transmitting subscribers.
These public keys can be interpreted as a set of verification relations that make it possible to judge whether the transmitting subscriber's signature is genuine, but do not allow the secret signature key to be recovered. The transmitting subscriber is solely responsible for his private key; no one but him is able to generate a correct signature. The secret key of the transmitting subscriber can be regarded as a personal seal, and its owner must restrict access to it by unauthorized persons in every possible way.
To put the idea of open (public key) encryption into practice, specific and constructive answers had to be found to the following questions:

  • how to “mix” the user’s individual key with the contents of the document so that they become inseparable?
  • how to check that the contents of the document being signed and the user’s individual key are authentic without knowing either one or the other in advance?
  • how to ensure that the author can reuse the same individual key to digitally sign a large number of electronic documents?
  • how to guarantee that it is impossible to recover a user’s individual key from any number of electronic documents signed with it?
  • how to guarantee the authenticity of the verification of a digital signature and of the contents of an electronic document?
  • how to ensure the legal validity of an electronic document that bears digital signatures and exists without a paper duplicate or other substitute?

Answering all these questions took about 20 years from the moment the idea was first formulated in 1976 in the paper by Whitfield Diffie and Martin Hellman. It can now be said with certainty that all of them have been resolved: a full arsenal of technical means for authorizing electronic documents, known as digital signatures, is available. The modern principles for constructing a digital signature system are simple and elegant:

  • methods for calculating and verifying digital signatures of all system users are the same and are based on well-known mathematical problems;
  • methods for calculating digital signature verification keys and individual digital signature generation keys are also the same for everyone and are well known;
  • individual keys for generating digital signatures are selected by the users themselves according to a random law from a large set of all possible keys;
  • With a specific digital signature algorithm, its strength can be assessed without involving any “secret” information based only on known mathematical results and reasonable assumptions about the computing power of a potential attacker.

In addition to maintaining confidentiality, cryptographic protection means ensure the integrity and authenticity of information; these functions are performed by digital signature technology.
The digital signature operation diagram is shown in Fig. 3.



Fig. 3. Electronic digital signature algorithm

The input to the algorithm is a file, not necessarily a text file. The main requirement on the input to the digital signature computation is a fixed length, and a hash function is used to achieve this.
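A minimal signing and verification sketch (RSA-PSS with SHA-256 from the Python cryptography package; the key size and padding are illustrative choices): the document is reduced to a fixed-length hash, and the signature is computed over that hash with the sender's private key.

    # Digital signature: sign with the private key, verify with the public key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    signer_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    verify_key = signer_key.public_key()          # distributed to all recipients

    document = b"contents of the file being signed"
    signature = signer_key.sign(document, pss, hashes.SHA256())  # SHA-256 hash is signed

    try:
        verify_key.verify(signature, document, pss, hashes.SHA256())
        print("signature is valid")
    except InvalidSignature:
        print("the document or the signature has been tampered with")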
In theory, the use of various encryption tools promises bright prospects for all companies that use the Internet in their activities, but a new problem arises here: finding a compromise with the state and its laws. This problem is covered in detail in the literature.
In accordance with Federal Law No. 1-FZ of January 10, 2002 “On Electronic Digital Signature”, an electronic digital signature in an electronic document is recognized as equivalent to a handwritten signature in a paper document. The law also regulates the organization of electronic document management, the distribution of public and private keys, and the construction of certification centers, and it defines the responsibilities of the parties.
The adoption of this law, despite some remaining uncertainties in it, made it possible to regulate the use of asymmetric encryption tools - in this case the digital signature - to protect data on the Internet.

Literature

  1. Shannon C.E. Communication Theory of Secrecy Systems. Bell System Technical Journal 28, 1949, p. 656-715.
  2. Federal Information Processing Standards Publication 46-2. Data Encryption Standard (DES). NIST, US Department of Commerce, Washington D.C., 1993.
  3. GOST 28147-89. Information processing systems. Cryptographic protection. Cryptographic conversion algorithm.
  4. Bruce Schneier. Applied Cryptography: Protocols, Algorithms and Source Code in C. John Wiley & Sons, 1994.
  5. Nechvatal James. Public-Key Cryptography. NIST, Gaithersburg, 1990.
  6. Weiner M. Efficient DES key search: Technical Report TR-244, School of Computer Science, Carleton University, 1994.
  7. Odlyzko A.M. The Future of Integer Factorization. Cryptobytes, RSA Laboratories.- vol. 1, N 2, 1995, p. 5 - 12.
  8. Rogaway P. The security of DESX. Cryptobytes, RSA Laboratories, vol. 2, N 2, 1996, p. 8 - 11.
  9. Kaliski B., Robshaw M. Multiple encryption: weighing security and performance. //Dr. Dobb's Journal, January 1996, p. 123 - 127.
  10. Rivest R.L. The RC5 Encryption Algorithm. Cryptobytes, RSA Laboratories, vol. 1, N 1, 1995, p. 9 - 11.
  11. Kaliski B., Yiqun Lisa Yin. On the Security of the RC5 Algorithm. Cryptobytes, RSA Laboratories, vol. 1, N 2, 1995, p. 12.
  12. Oleinik V. Cycles in the algorithm for cryptographic data conversion GOST 28147-89. http://www.dekart.ru
  13. Andrey Vinokurov. GOST 28147-89 encryption algorithm, its use and implementation for Intel x86 platform computers.
  14. What is Blowfish? http://www.halyava.ru/aaalexey/CryptFAQ.html.
  15. Linn J. Privacy Enhancement for Internet Electronic Mail: Part I: Message Encryption and Authentication Procedures. RFC 1421, 1993.
  16. Evtushenko Vladimir. Triple DES. New standard? http://www.bgs.ru/russian/security05.html.
  17. What is GOST28147-89? http://www.halyava.ru/aaalexey/GOST.html.
  18. Andrew Jelly. /Cryptographic standard in the new millennium/, http://www.baltics.ru/~andrew/AES_Crypto.html.
  19. Rijndael encryption algorithm. http://www.stophack.ru/spec/rijndael.shtml.