Logical structure of main memory: memory cells and their addresses. Each memory cell has its own unique address, i.e. an address different from all others.

Computer architecture

Basic elements of a computer.

The computer consists of 4 structural components:

1) Processor.

It controls the operation of the computer and performs data-processing functions. If a system has only one processor, it is usually called the central processing unit (CPU).

2) Main memory.

This is where data and programs are stored. This memory is temporary; it is often called real memory or RAM.

3) I/O devices.

They transfer data between the computer and its external environment, which consists of various peripheral devices, including secondary memory, communication equipment and terminals.

4) System bus. The structures and mechanisms that provide communication between the processor, main memory and the input/output devices.

Main memory

PC – program counter

IR – instruction register

MAR – memory address register

MBR – memory buffer register

I/O AR – input/output address register

I/O BR – I/O buffer register

Figure 1. Computer components: general structure.

One of the processor's functions is communication with memory. For this it usually uses two internal (relative to the processor) registers: the memory address register (MAR), which holds the address of the memory cell on which a read or write operation will be performed, and the memory buffer register (MBR), which holds the data to be written to memory or the data just read from it. Similarly, the I/O device number is specified in the I/O address register (I/O AR), and the I/O buffer register (I/O BR) is used to exchange data between an I/O device and the processor.

A memory module consists of many numbered cells. Each cell can contain a binary number that is interpreted either as an instruction or as data. The I/O module transfers data from external devices to the processor and memory, and in the reverse direction; it has its own internal buffers for temporary data storage.
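As a rough illustration of this register traffic, here is a minimal C sketch of a single instruction fetch, with the registers modelled as plain variables and main memory as an array; the names and sizes are assumptions made for the example, not a description of any real processor.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model only: registers as plain variables,
   main memory as an array of 16-bit words. */
static uint16_t memory[256];      /* main memory cells        */
static uint16_t PC  = 0;          /* program counter          */
static uint16_t IR  = 0;          /* instruction register     */
static uint16_t MAR = 0;          /* memory address register  */
static uint16_t MBR = 0;          /* memory buffer register   */

/* One instruction-fetch step: the address travels through MAR,
   the value read from memory travels through MBR into IR. */
static void fetch(void) {
    MAR = PC;                 /* address of the next instruction goes into MAR */
    MBR = memory[MAR];        /* memory responds by filling MBR                */
    IR  = MBR;                /* the fetched instruction lands in IR           */
    PC += 1;                  /* advance to the next cell                      */
}

int main(void) {
    memory[0] = 0x1234;       /* pretend this is an encoded instruction */
    fetch();
    printf("IR = 0x%04X, PC = %u\n", (unsigned)IR, (unsigned)PC);
    return 0;
}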

Processor registers

The processor has a set of registers that form a fast-access memory area of much smaller capacity than main memory.

Processor registers perform two functions and are divided into two groups accordingly:

Registers accessible to the user. These registers allow the programmer, working in machine language or assembler, to reduce the number of accesses to main memory by making optimal use of the registers.

Control and status registers. They are used inside the processor to control the operations being performed; with their help, privileged operating-system programs can monitor the execution of other programs.

Registers available to the user:

Data registers. They can be used by any machine instruction to operate on data. Certain restrictions are often imposed on them: for example, some registers are intended for floating-point operations, while others are for storing integers.

Address registers. They contain the addresses of instructions and data in main memory; these registers may store only a portion of the address, which is used in calculating the full, or effective, address.

Control and status registers.

Various registers are used to control the operation of the processor. On most machines, these registers are largely inaccessible to the user.

In addition to the mentioned registers MAR, MBR, I/O AR, I/O BR, the following are important for executing commands:

Program counter (PC). Contains the address of the next instruction to be fetched from memory.

Instruction register (IR). Contains the most recently fetched instruction.

All processors also include a register known as the program status word (PSW) register. It typically contains condition codes and other status information, such as the interrupt enable/disable bit or the system/user mode bit.

Condition codes (known as flags) are a sequence of bits that are set or cleared by the processor depending on the outcome of the operations performed. For example, an arithmetic operation may result in a positive number, a negative number, a zero, or an overflow.

Memory cell. Computer memory consists of individual “particles”, bits, combined into groups of 8 bits called bytes. A byte is the elementary unit of memory. Each byte has its own number (address) and its own contents, a binary code. When the processor processes information, it locates the required cell by its memory address, reads its contents, performs the necessary actions and writes the result into another memory cell. A memory cell is a group of consecutive bytes of internal memory; a machine word is the contents of a memory cell. The width of a memory cell and the size of a machine word in bits are equal to the width of the processor.
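A small illustrative C fragment (the exact addresses and sizes are whatever your platform uses) showing that consecutive bytes get consecutive addresses, and that the machine word size can be inspected from a program:

#include <stdio.h>

int main(void) {
    char bytes[4] = {0, 1, 2, 3};
    /* consecutive bytes get consecutive addresses */
    for (int i = 0; i < 4; i++)
        printf("byte %d at %p\n", i, (void *)&bytes[i]);
    /* the "machine word" of this platform, as seen through a pointer */
    printf("pointer (machine word) size: %zu bytes\n", sizeof(void *));
    return 0;
}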






RAM (random access memory) is high-speed memory implemented as an electronic circuit. RAM is available for both reading and writing information. It is in RAM that the currently running program and the data it needs are stored; in RAM, data can be edited, deleted or added. It is temporary storage: RAM holds information only during a session of work with the computer, and after the computer is switched off the data stored in RAM is lost. RAM is a volatile device. The capacity of modern modules ranges from 512 to 1024 MB.


ROM is permanent memory (ROM – read-only memory). In many computers the ROM is implemented as a separate chip that stores the basic commands which carry out the initial interaction of hardware and software. This type of memory can only be read. After the computer is switched off the information is preserved; ROM is a non-volatile device. The ROM contains part of the BIOS (Basic Input-Output System).


Cache memory is intermediate memory between the processor and RAM. “Cache” is an English word meaning a hiding place or store. It is used to increase the computer’s speed. The “secrecy” of the cache lies in the fact that it is invisible to the user, and the data stored there is inaccessible to application software. Using this type of internal memory reduces the number of accesses to the hard drive. The absence of cache memory can noticeably (by 20–30%) reduce the overall performance of the computer.


Non-volatile memory (CMOS memory, Complementary Metal-Oxide-Semiconductor). Various computer configuration parameters, such as the number and type of disk drives, the type of video adapter, the presence of a coprocessor and some other data, are stored in so-called CMOS memory. The CMOS chip also contains an ordinary digital clock, thanks to which the current date and time can be obtained at any moment. So that the contents of CMOS memory are not erased when the computer’s power is switched off, and the clock keeps counting time, the CMOS chip is powered by a small dedicated battery or accumulator, which is also located on the system board.



Over the past week I have twice explained to people how memory management in x86 is organized, so, to avoid explaining it a third time, I wrote this article.

To understand memory organization you will need to know some basic concepts, such as registers, the stack and so on. I will try to explain them along the way, but very briefly, because they are not the topic of this article. So, let's begin.

As any programmer knows, when writing programs he works not with physical addresses but only with logical ones — at least if he programs in assembler. In C, memory cells are already hidden from the programmer behind pointers for his own convenience, although, roughly speaking, a pointer is just another representation of a logical memory address; in Java there are no pointers at all, which I consider a serious flaw of the language. Nevertheless, a competent programmer benefits from knowing how memory is organized, at least at a general level. In general, programmers who do not know how the machine works really upset me; usually these are Java programmers and assorted PHP folks with subpar qualifications.

Okay, enough of the sad stuff, let's get down to business.
Let us consider the address space of a program on a 32-bit processor (for 64 bits everything works the same way).
This address space consists of 2^32 memory cells, numbered from 0 to 2^32 − 1.
The programmer works with this memory: if he needs to define a variable, he simply says that a memory cell with such-and-such an address will contain such-and-such a type of data; the programmer himself may not even know what number that cell has and will simply write something like:
int data = 10;
The computer understands it this way: take some cell, say the one numbered 18894, and place the integer 10 in it. The address of cell 18894 will be hidden from you.
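If you want to see the address the compiler is hiding, you can print it explicitly; a tiny illustrative C example (the actual number printed will differ from machine to machine and from run to run):

#include <stdio.h>

int main(void) {
    int data = 10;                               /* "some cell" now holds the integer 10 */
    printf("value   = %d\n", data);
    printf("address = %p\n", (void *)&data);     /* the logical address normally hidden from us */
    return 0;
}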

Everything would be fine, but a question arises: how does the computer find this memory cell? After all, our memory may be of different kinds:
Level 3 cache
Level 2 cache
Level 1 cache
Main memory
Hard disk (HDD)

These are all different kinds of memory, yet the computer easily finds which of them contains our variable int data.
This issue is resolved by the operating system together with the processor.
The entire subsequent article will be devoted to the analysis of this method.

The x86 architecture supports the stack.

A stack is a contiguous area of random-access memory organized on the principle of a stack of plates: you cannot take a plate from the middle of the pile, you can only take the top one, and likewise you can only put a plate on top of the pile.
The processor has special machine instructions for working with the stack, whose assembly mnemonics look like this:

push operand
pushes the operand onto the stack

pop operand
pops the value from the top of the stack into its operand

The stack in memory grows downward: when you push a value onto it, the address of the top of the stack decreases, and when you pop a value off, the address of the top of the stack increases.
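Here is a minimal software model of that behaviour, assuming a tiny 16-word stack (illustrative C, not real push/pop machine instructions): the index playing the role of the stack pointer moves toward lower addresses on push and back up on pop.

#include <stdio.h>
#include <stdint.h>

#define STACK_WORDS 16

static uint32_t stack_mem[STACK_WORDS];
static int sp = STACK_WORDS;          /* "top of stack": starts past the high end */

static void push(uint32_t value) {    /* no overflow checks: this is only a sketch */
    sp -= 1;                          /* the top-of-stack address decreases ...    */
    stack_mem[sp] = value;
}

static uint32_t pop(void) {
    uint32_t value = stack_mem[sp];
    sp += 1;                          /* ... and increases again on pop */
    return value;
}

int main(void) {
    push(1); push(2); push(3);
    uint32_t a = pop(), b = pop(), c = pop();
    printf("%u %u %u\n", (unsigned)a, (unsigned)b, (unsigned)c);   /* prints 3 2 1 */
    return 0;
}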

Now let's briefly look at what registers are.
These are memory cells inside the processor itself. They are the fastest and most expensive type of memory: when the processor performs operations on values or on memory, it takes those values directly from registers.
The processor has several sets of logic, each of which has its own machine codes and its own sets of registers.
Basic program registers. These registers are used by all programs to process integer data.
Floating-point unit (FPU) registers. These registers operate on floating-point data.
There are also MMX and XMM registers; they are used when a single instruction has to be executed on a large number of operands.

Let's take a closer look at the basic program registers. These include eight 32-bit general-purpose registers: EAX, EBX, ECX, EDX, EBP, ESI, EDI, ESP.
To place data into a register, or to move data from a register to a memory cell, the mov instruction is used:

mov eax, 10
loads the number 10 into the eax register.

mov data, ebx
copies the number contained in the ebx register to the data memory location.

The ESP register contains the address of the top of the stack.
In addition to the general-purpose registers, the basic program registers include six 16-bit segment registers (CS, DS, SS, ES, FS, GS), as well as the EFLAGS and EIP registers.
EFLAGS contains bits, called flags, that reflect the state of the processor or characterize the outcome of previous instructions.
The EIP register contains the address of the next instruction to be executed by the processor.
I will not describe the FPU registers, since we will not need them. Our short digression on registers and the stack is over; let's get back to memory organization.

As you remember, the purpose of this article is to describe the transformation of logical memory addresses into physical ones. In fact there is also an intermediate stage, and the complete chain looks like this:

Logical address --> Linear (virtual) --> Physical
The entire linear address space is divided into segments. Each process's address space has at least three segments:
Code segment (contains the instructions of our program that will be executed).
Data segment (contains data, that is, variables).
Stack segment, which I wrote about above.


The linear address is calculated using the formula:
linear address = Base address of the segment (in the picture this is the beginning of the segment) + offset
Code segment
The base address of the code segment is taken from the CS register. The offset for the code segment is taken from the EIP register, which stores the address of the current instruction; after the instruction is executed, EIP is increased by the size of that instruction. If the instruction occupies 4 bytes, EIP is increased by 4 and then points to the next instruction. All of this happens automatically, without the programmer's involvement.
There may be several code segments in our memory. In our case, there is only one.
Data segment
Data segments are addressed through the DS, ES, FS and GS registers.
This means there can be up to four data segments. In our figure there is only one.
The offset within the data segment is given as an instruction operand. By default, the segment pointed to by the DS register is used; to address a different segment, you must indicate it explicitly with a segment override prefix in the instruction.
Stack segment
The stack segment used is determined by the value of the SS register.
The offset within this segment is represented by the ESP register, which points to the top of the stack, as you remember.
Segments in memory can overlap each other; moreover, the base addresses of all segments may coincide, for example at zero. This degenerate case is called a flat (linear) memory model, and this is how memory is usually organized in modern systems.

Now let's look at how the base address of a segment is determined. I wrote that it is contained in the SS, DS and CS registers, but that is not entirely accurate: they contain a 16-bit selector that points to a segment descriptor, and it is the descriptor that stores the required address.


This is what the selector looks like: thirteen of its bits contain the index of a descriptor in the descriptor table. It is easy to calculate that 2^13 = 8192 is the maximum number of descriptors in the table.
There are two types of descriptor tables: the GDT and the LDT. The first is called the global descriptor table; there is always exactly one of them in the system, and its starting address, or rather the address of its zero descriptor, is stored in the 48-bit system register GDTR. From the moment the system starts it does not change and does not take part in swapping.
The values of the descriptors themselves, however, may change. If the TI bit of the selector equals zero, the processor simply goes to the GDT and looks up, by index, the descriptor through which the segment will be accessed.
So far everything has been simple, but if TI is 1, this means that an LDT will be used. There may be many such tables, but the one used at any given moment is the one whose selector is loaded into the LDTR system register, which, unlike GDTR, can change.
The index of that selector points to a descriptor that holds not the base address of an ordinary segment but the address of the memory where the local descriptor table, or rather its zero element, is stored. After that everything works just as with the GDT. Thus, local tables can be created and destroyed as needed during operation. LDTs cannot contain descriptors of other LDTs.
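In C, pulling the fields out of a 16-bit selector looks roughly like this (the bit layout — 13-bit index, TI bit, 2-bit requested privilege level — is the one described above; the helper names and the example value are my own for illustration):

#include <stdint.h>
#include <stdio.h>

/* x86 segment selector layout:
   bits 15..3 - descriptor index, bit 2 - TI (0 = GDT, 1 = LDT), bits 1..0 - RPL */
static unsigned selector_index(uint16_t sel) { return sel >> 3; }
static unsigned selector_ti(uint16_t sel)    { return (sel >> 2) & 1; }
static unsigned selector_rpl(uint16_t sel)   { return sel & 3; }

int main(void) {
    uint16_t sel = 0x002B;   /* an arbitrary example value */
    printf("index = %u, TI = %u, RPL = %u\n",
           selector_index(sel), selector_ti(sel), selector_rpl(sel));
    return 0;
}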
So now we know how the processor gets to a descriptor; let's see what that descriptor contains (see the figure):

Descriptors consist of 8 bytes.
Bits 16-39 and 56-63 contain the linear base address of the segment described by this descriptor. Let me remind you of our formula for finding a linear address:

linear address = base address + offset
With this simple operation the processor can reach the required linear memory address.
Let's look at the other bits of the descriptor. The segment limit is very important: it is a 20-bit value stored in bits 0-15 and 48-51, and it specifies the size of the segment. For data and code segments, all addresses in the following interval are available:
[base; base+limit)
Depending on bit 55, the G (granularity) bit, the limit is measured either in bytes, when the bit is 0, in which case the maximum limit is 1 MB, or in 4 KB pages, when the bit is 1, in which case the maximum segment size is 4 GB.
For a stack segment, the valid addresses lie in the range:
(base+limit; top]
By the way, you may wonder why the base and the limit are laid out so raggedly in the descriptor. The reason is that x86 processors evolved gradually: in the days of the 286 descriptors were already 8 bytes long, but the upper 2 bytes were reserved; in later processor models, as the bit width increased, descriptors grew as well, but to preserve backward compatibility the structure had to be left as it was.
The value of the "top" address depends on bit 54, the D bit: if it is 0, the top is 0xFFFF (64 KB − 1); if the D bit is 1, the top is 0xFFFFFFFF (4 GB − 1).
Bits 41-43 encode the segment type:
000 - data segment, read only
001 - data segment, read and write
010 - stack segment, read only
011 - stack segment, read and write
100 - code segment, execute only
101 - code segment, execute and read
110 - conforming code segment, execute only
111 - conforming code segment, execute and read

Bit 44 is the S bit. If it equals 1, the descriptor describes an ordinary code or data segment in RAM; if it equals 0, the descriptor describes a system object.

The most important bit is bit 47, the P (presence) bit. If it equals 1, the segment or local descriptor table is loaded into RAM. If it equals 0, the segment is not in RAM but on the hard disk; in that case an interrupt occurs, its handler loads the required segment from the hard drive into memory, and while the P bit is 0 all the other descriptor fields lose their meaning and become free for storing service information. After the handler finishes, the P bit is set to 1, the descriptor is accessed again, and the segment is now in memory.
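To tie the descriptor fields together, here is a hedged C sketch that reassembles the base and limit from an 8-byte descriptor value and applies the G bit; the bit positions follow the layout described above, while the helper names and the sample descriptor value are assumptions made for the example.

#include <stdint.h>
#include <stdio.h>

/* Reassemble base and limit from an 8-byte segment descriptor,
   using the classic x86 bit positions discussed above. */
static uint32_t descriptor_base(uint64_t d) {
    return (uint32_t)(((d >> 16) & 0xFFFFFFull)          /* bits 16..39 -> base 0..23  */
                    | (((d >> 56) & 0xFFull) << 24));     /* bits 56..63 -> base 24..31 */
}

static uint32_t descriptor_limit(uint64_t d) {
    uint32_t limit = (uint32_t)((d & 0xFFFFull)           /* bits 0..15  -> limit 0..15  */
                   | (((d >> 48) & 0xFull) << 16));       /* bits 48..51 -> limit 16..19 */
    int g = (int)((d >> 55) & 1);                         /* G bit: 0 = bytes, 1 = 4 KB pages */
    return g ? (limit << 12) | 0xFFF : limit;
}

int main(void) {
    /* a sample descriptor value purely for demonstration: flat 4 GB code segment, base 0 */
    uint64_t d = 0x00CF9A000000FFFFull;
    printf("base  = 0x%08X\n", (unsigned)descriptor_base(d));
    printf("limit = 0x%08X\n", (unsigned)descriptor_limit(d));
    return 0;
}

With the base in hand, the linear address is then simply base + offset, exactly as in the formula above.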

This concludes the conversion of a logical address into a linear one, and I think we should stop here. Next time I will cover the second part: the conversion of a linear address into a physical one.
I also think it is worth talking a little about passing function arguments and about placing variables in memory, so that there is some connection with reality, because placing variables in memory is something you deal with in practice, not just theoretical speculation for a systems programmer. But without understanding how memory works, it is impossible to understand how those variables are stored in it.
In general, I hope it was interesting and see you again.

Ministry of Education and Science of the Nizhny Novgorod Region

State budgetary educational institution

secondary vocational education

"Bor Provincial College"

Specialty 230701 Applied Informatics (by industry)

Essay

On the topic: Structure of RAM.

Discipline: Operating systems and environments.

Completed:

student gr. IT-41

Rodov A.E.

Checked:

Markov A.V.

Urban district of Bor

Introduction

Random-access memory (RAM) is the volatile part of a computer's memory system in which the machine code currently being executed (programs) is stored while the computer is running, along with the input, output and intermediate data processed by the processor.

1. Structure of RAM

RAM consists of cells, each of which can contain one unit of information, a machine word. Each cell has two characteristics: an address and contents. Through the microprocessor's address register, any memory cell can be accessed.

2. Segmental memory model

Once upon a time, at the dawn of computing, RAM was very small and 2 bytes (the so-called "word") were used to address it. This approach made it possible to address 64 KB of memory, and the addressing was linear: a single number indicated the address. Later, as technology improved, manufacturers realized that larger amounts of memory could be supported, but for that the address had to be made larger. For compatibility with already written software, it was decided to do the following: addressing became two-component (segment and offset), each component 16 bits wide, while old programs continued to use a single 16-bit component, knew nothing about segments, and kept working.


4. DRAM – Dynamic Random Access Memory

DRAM is a very old type of RAM chip that has not been used for a long time. In other words, DRAM is dynamic memory with a random order of access. The minimum unit of information when storing or transmitting data in a computer is the bit. Each bit can be in one of two states: on (yes, 1) or off (no, 0). Any amount of information ultimately consists of bits that are switched on or off. Thus, to save or transmit any piece of data, every bit of that data must be stored or transmitted, regardless of its state.

To store bits of information, RAM contains cells. The cells consist of capacitors and transistors. Here is an approximate, simplified diagram of a DRAM cell:

Each cell can only store one bit. If the cell capacitor is charged, this means that the bit is on; if it is discharged, it is off. If you need to store one byte of data, you will need 8 cells (1 byte = 8 bits). The cells are located in matrices and each of them has its own address, consisting of a row number and a column number.

Now let's look at how a read occurs. First, the row address is applied to the address inputs together with the RAS (Row Address Strobe) signal, after which all the data of that row is written into a buffer. Then the column address is applied with the CAS (Column Address Strobe) signal, and the bit with the corresponding address is selected and placed on the output. However, reading destroys the data in the cells of the row that was read, so it must be rewritten from the buffer.
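As a very rough software model of that sequence (purely illustrative; real DRAM control and timing are far more involved than this), one can simulate the row buffer, the destructive read and the write-back:

#include <stdio.h>

#define ROWS 4
#define COLS 8

static int cells[ROWS][COLS];      /* capacitor states: 1 = charged, 0 = discharged */
static int row_buffer[COLS];

/* RAS phase: copy the whole row into the buffer (the read is destructive) */
static void open_row(int row) {
    for (int c = 0; c < COLS; c++) {
        row_buffer[c] = cells[row][c];
        cells[row][c] = 0;          /* model the charge being drained by the read */
    }
}

/* CAS phase: pick one bit of the buffered row */
static int read_column(int col) { return row_buffer[col]; }

/* write the buffered row back, restoring the charge */
static void close_row(int row) {
    for (int c = 0; c < COLS; c++)
        cells[row][c] = row_buffer[c];
}

int main(void) {
    cells[2][5] = 1;
    open_row(2);
    printf("bit at (2,5) = %d\n", read_column(5));
    close_row(2);
    return 0;
}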

Now the write. The WR (Write) signal is applied, and the information is supplied to the column bus not from the register but from the memory's data input, through a switch selected by the column address. Thus, the path of the data being written is determined by the combination of the row and column address signals and the write-enable signal. During a write, data from the row register is not sent to the output.

It should be taken into account that the matrices with cells are arranged like this:

This means that more than one bit is read at a time. If 8 matrices are arranged in parallel, then a whole byte is read at once. This is called the bit depth. The number of lines along which data is transferred from (or to) the parallel matrices is determined by the width of the chip's input/output bus.
When talking about the operation of DRAM, one more point must be taken into account. Capacitors cannot hold their charge indefinitely: it eventually leaks away, so the capacitors have to be recharged. The recharging operation is called refresh, or regeneration. It occurs approximately every 2 ms and can sometimes take up to 10% (or even more) of the processor's working time.

The most important characteristic of DRAM is speed, or, simply put, cycle time + latency + access time, where cycle time is the time spent transferring the data, latency is the time to set up the row and column addresses, and access time is the time to find the cell itself. All of this is measured in nanoseconds (billionths of a second). Modern memory chips have access times below 10 ns.

RAM is controlled by a controller located in the motherboard chipset, or more precisely in the part of it called the north bridge.

Now that we understand how RAM works, let's figure out why it is needed at all. After the processor, RAM can be considered the fastest device, so the main data exchange takes place between these two devices. All the information in a personal computer is stored on the hard drive. When you turn the computer on, drivers, special programs and parts of the operating system are copied from the hard drive into RAM. Then the application programs that you launch are loaded there as well; closing these programs erases them from RAM. Data recorded in RAM is passed to the CPU (central processing unit), where it is processed and written back. And so it goes on all the time: the processor is told to take the bits at such-and-such addresses, process them somehow and return them to their place or write them to a new one, and that is exactly what it does.

All this works well as long as there are enough RAM cells. And if there are not? Then the swap file comes into play. This file is located on the hard drive, and everything that does not fit into RAM is written into it. Since the hard drive is much slower than RAM, using the paging file greatly slows down the system; in addition, it shortens the life of the hard drive itself.

Increasing the amount of memory does not, by itself, increase the memory's own performance: changing the memory size does not affect how the memory itself operates. But if we consider the operation of the system as a whole, it is a different matter. If you already have enough RAM, increasing its volume will not speed up the system. If there are not enough RAM cells, then increasing their number (in other words, adding a new module or replacing an old one with a larger one) will speed the system up.

Each memory cell has its own unique address, i.e., an address different from all others. The main memory has a single address space for RAM and permanent storage devices. The address space defines the maximum possible number of main memory cells that can be directly addressed. It depends on the width of the address bus, since the maximum number of different addresses is determined by the number of distinct binary numbers that can represent those addresses, which in turn depends on the number of digits. Thus, the address space is equal to 2^n, where n is the width of the address bus in bits.

Example 3.5. The Intel 8086 processor (1978) had a 20-bit address bus. In this case, 2^20 cells of 1 byte each can be directly addressed, so the address space is 2^20 bytes = 1 MB.

The Intel 80486 processor (1989) had a 32-bit address bus. Its address space was 2^32 bytes = 2^2 · 2^30 bytes = 2^2 GB = 4 GB.

Starting with the Intel Pentium Pro processor (1995), the Physical Address Extension (PAE) mode became available, which uses 36 bits for addressing. In this case, 2^36 bytes = 2^6 · 2^30 bytes = 2^6 GB = 64 GB can be addressed.

There are two memory addressing modes in computers: real mode and protected mode. Real mode is used in the MS DOS operating system. The physical address in real mode is calculated according to the rule

CS₁₆ · 10₁₆ + IP₁₆,

where CS and IP are the segment and offset values specified in the corresponding processor registers.

So the maximum physical address is

FFFF₁₆ · 10₁₆ + FFFF₁₆ = FFFF0₁₆ + FFFF₁₆ = 10FFEF₁₆ = 1,114,095₁₀,

and the address space is 1114096 bytes = 1 MB + 64 KB – 16 bytes.

In addition, this address space can be limited by the width of the Intel 8086 processor's address bus, i.e. to 2^20 bytes = 1 MB.
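The same arithmetic can be checked with a few lines of C (just a verification of the numbers above; the function name is my own):

#include <stdio.h>
#include <stdint.h>

/* Real-mode physical address: segment * 10h + offset */
static uint32_t real_mode_address(uint16_t segment, uint16_t offset) {
    return ((uint32_t)segment << 4) + offset;
}

int main(void) {
    uint32_t max = real_mode_address(0xFFFF, 0xFFFF);
    printf("max address = 0x%X = %u\n", (unsigned)max, (unsigned)max);   /* 0x10FFEF = 1114095 */
    return 0;
}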

The part of RAM that cannot be directly addressed is called extended memory.

Example 3.6. The computer is based on an Intel 80486 processor and has 16 MB of RAM. The processor can directly address 1 MB + 64 KB – 16 bytes of RAM. The extended memory is therefore 16 MB – (1 MB + 64 KB – 16 bytes) = 15 MB – 64 KB + 16 bytes.

So the ratio of directly addressable to extended memory is:

1,114,096 bytes: 15,663,120 bytes or 6.64: 93.36.

Consequently, in real operation mode, more than 90% of the computer's RAM will be inaccessible.

There are two ways to access extended memory in the computer's real mode. However, they are possible only with the help of special programs, drivers, conforming to the XMS and EMS specifications.

A driver is a special program that manages the operation of RAM or of an external device and organizes the exchange of information between the processor, RAM and external devices.

Note. The driver that controls memory operation is called a memory manager.

Access to extended memory according to the XMS specification (eXtended Memory Specification) is organized using XMM drivers (for example, HIMEM.SYS). According to the EMS specification (Expanded Memory Specification), access to memory beyond the directly addressable area is implemented by mapping individual regions of it, as needed, into a specific window of directly addressable memory. In this case it is not the information being processed itself that is stored there, but only the addresses that provide access to it. To organize memory according to the EMS specification, the EMM386.EXE or Quarterdeck EMM drivers are used.

In the computer's protected mode, memory of larger capacity than in real mode can be directly addressed thanks to a changed addressing mechanism. In protected mode, only the part of a program that is needed at the current moment has to be kept in memory; the rest can be stored in the computer's external memory, for example on the hard drive. When the program accesses a part of its code that is not currently in memory, the operating system pauses the program, loads the required fragment of program code from external memory, and then resumes execution. This procedure is called swapping data from the hard drive. Thus, in protected mode it becomes possible to execute programs whose code size exceeds the amount of RAM in the computer.

A physical address in protected mode is formed as follows. The processor's segment register holds a two-byte selector, which contains the following information:

■ the descriptor index (13 bits) in the descriptor table;

■ a flag (1 bit) that determines which of the two descriptor tables (local or global) will be accessed;

■ requested privilege level (2 bits).

In accordance with the value of the selector, the appropriate descriptor table is accessed and the required descriptor is located in it. The segment address, its size and the access rights are extracted from the descriptor. The segment address is then added to the offset from the processor's IP register; the resulting sum is the physical address of the RAM cell.

The use of protected mode made it possible for the Intel 80286 processor (1982) to address 2^24 bytes = 2^4 · 2^20 bytes = 16 MB of memory, while the real-mode address space was still limited to 1 MB.

In addition to increasing the address space, protected mode allows several programs to be executed in parallel (multitasking mode). Multitasking is organized by a multitasking operating system (for example, Microsoft Windows), for which the processor provides a powerful and reliable mechanism for protecting tasks from one another based on a four-level privilege system (Fig. 3.7).

Protected mode also allows a paged memory organization. It comes down to building memory description tables that record the state of its individual segments (pages). When memory is insufficient, the operating system writes some of the data out to external memory and records in the description table that this data is absent from RAM.

Fig. 3.7. Privilege levels when using multitasking mode