Static and dynamic RAM. The principle of memory homogeneity. Abstract: Static memory

THE REPUBLIC OF KAZAKHSTAN

UNIVERSITY "TURAN"

Department of Information Technologies

topic: "Static memory"

Completed by: Ainakulov D.A. 3rd year, "IS" 9 gr. Checked by: Ziyatbekova G.Z.

Almaty 2009


1. Introduction

2. Static memory

3. Design of the static memory matrix

4. Types of static memory

5. Conclusion


1. Introduction

Personal computers today have become indispensable assistants in virtually every area of human activity. Computers are used to calculate wages and harvest volumes, to plot the movement of goods and shifts in public opinion, to design nuclear reactors, and so on.

The word "computer" literally means "one that computes". The need to automate data processing, including calculations, arose long ago, and today the computer hardware and software industry is one of the most important sectors of the economy in both developed and developing countries.

The reasons for the rapid growth of the personal computer industry are: low cost; clear benefits in many business applications; ease of use; the possibility of individual interaction with the computer without intermediaries or restrictions; powerful capabilities for processing, storing, and retrieving information; high reliability and ease of repair and operation; the adaptability of the hardware to the particulars of each application; and the availability of software covering almost all areas of human activity, together with powerful systems for developing new software.

The power of computers is constantly increasing, and the scope of their applications is constantly expanding. Computers can be networked together, allowing millions of people to exchange information easily with computers anywhere in the world. So what is this unique human invention?

The first characteristic by which computers are divided is the platform, and there are two main PC platforms. The IBM-compatible platform includes a huge range of computers, from simple home PCs to complex servers; this is the platform a user usually encounters. Incidentally, the best IBM-compatible computers are by no means necessarily made by IBM: the "blue giant" that gave birth to this standard is today only one of a great many PC manufacturers. The Apple platform is represented by Macintosh computers, which are quite popular in the West; they use their own special software, and their internals differ significantly from IBM's. A typical IBM-compatible PC consists of three parts (units): a system unit; a monitor (display); and a keyboard (a device for entering characters into the computer).
The development of the electronics industry is carried out at such a rapid pace that literally in one year, today's “miracle of technology” becomes obsolete due to the fact that computer hardware is constantly being modified and new software appears. However, the principles of a computer have remained unchanged since the famous mathematician John von Neumann prepared a report on the design and operation of universal computing devices in 1945.


2. Static memory

Static memory, or SRAM (Static RAM), is the fastest type of memory. SRAM chips are used to cache RAM (which is built from dynamic memory chips), to cache data in mechanical storage devices, in video adapter memory units, and so on. In short, SRAM chips are used wherever the required amount of memory is not very large but the performance requirements are high, so that the use of expensive chips is justified. Personal computers whose processors had no on-chip L2 cache always used external SRAM cache chips. To reduce the cost of motherboards and allow upgrades, manufacturers of boards for 486 and first-generation Pentium processors installed special sockets (for chips in DIP packages) into which various SRAM chips could be installed, differing in speed, capacity, and width. A set of jumpers was provided on the motherboard to configure the memory. For reference, the jumper settings were printed directly on the board, for example as shown in the table (the JS1 and JS2 columns give the numbers of the pins to be connected with jumpers).

Example of a cache configuration table on a motherboard

Size     SRAM     JS1    JS2
256 K    32x8     1-2    1-2
512 K    64x8     2-3    1-2
1 M      128x8    2-3    2-3
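The jumper table above amounts to a simple lookup; a minimal sketch in Python, with the values taken from the table (the dictionary and function names are of course hypothetical, not anything printed on a real board):

```python
# Hypothetical encoding of the jumper table printed on the board:
# cache size -> (chip organization, JS1 pins to bridge, JS2 pins to bridge).
CACHE_JUMPERS = {
    "256K": ("32x8", "1-2", "1-2"),
    "512K": ("64x8", "2-3", "1-2"),
    "1M":   ("128x8", "2-3", "2-3"),
}

def jumper_settings(size):
    """Describe how to set the jumpers for a given cache size."""
    org, js1, js2 = CACHE_JUMPERS[size]
    return f"{size} cache ({org} chips): JS1 pins {js1}, JS2 pins {js2}"
```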

Note that the cache memory configuration was changed only when any cache memory chip failed. In other cases, it was not recommended to change the position of the jumpers. Later, as more advanced SRAM chips were developed, they were directly soldered onto the motherboard in quantities of 1, 2 or 4 pieces. On current motherboards, SRAM chips are used primarily only for I/O caching and other system functions.

3. Design of the static memory matrix

Like dynamic cells, the flip-flops are combined into a single matrix consisting of rows and columns; the columns are also called bit lines.

Unlike a dynamic memory cell, which needs only one pass transistor for control, a static memory cell is controlled by at least two. This is not surprising if we recall that a flip-flop, unlike a capacitor, has separate inputs for writing a logical zero and a logical one. Thus a full six transistors go into one static memory cell (see Fig. 1): four for the flip-flop itself and two more for the control "latches" (the access transistors).

Fig. 1. Design of a 6-transistor single-port SRAM memory cell


Moreover, six transistors per cell is not the limit! There are more complex designs! The main disadvantage of a six-transistor cell is that only one row of the memory matrix can be processed at a time. Parallel reading of cells located in different rows of the same bank is impossible, just as reading one cell while writing another is impossible.

Multiport memory does not have this limitation. Each multiport memory cell contains a single flip-flop, but has several sets of control transistors, each of which is connected to its own ROW and BIT lines, so that different matrix cells can be processed independently. This approach is much more progressive than dividing memory into banks. Indeed, in the latter case, parallelism is achieved only when accessing cells of different banks, which is not always feasible, and multi-port memory allows simultaneous processing of any cells, relieving the programmer of the need to delve into the features of its architecture.

The most common is two-port memory, the cell structure of which is shown in Fig. 2. (attention! this is not the same memory that, in particular, is used in the first level cache of Intel Pentium microprocessors). It is easy to calculate that to create one cell of two-port memory, as many as eight transistors are consumed. Let the cache memory capacity be 32 KB, then just one core will require over two million transistors!
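The arithmetic behind that claim is easy to verify; a quick sketch (the function name is illustrative):

```python
def core_transistors(capacity_bytes, transistors_per_cell):
    """Transistors in the storage core alone, ignoring decoders and sense amplifiers."""
    return capacity_bytes * 8 * transistors_per_cell

# 32 KB of two-port cells at 8 transistors per bit:
total = core_transistors(32 * 1024, 8)  # indeed over two million
```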


Fig. 2. Design of an 8-transistor two-port SRAM memory cell

Fig. 3. A dynamic memory cell as implemented on the die

4. Types of static memory

There are at least three types of static memory: asynchronous, synchronous, and pipelined. Each closely parallels the corresponding type of dynamic memory.

Asynchronous static memory

Asynchronous static memory operates independently of the controller, so the controller cannot be sure that the end of an exchange cycle will coincide with the beginning of the next clock pulse. As a result, the exchange cycle is extended by at least one clock, reducing effective performance. Because of this, asynchronous memory is hardly used anywhere today (the last computers on which it still served as a second-level cache were the "386 boxes", machines built on the Intel 80386 processor).

Synchronous static memory

Synchronous static memory performs all operations simultaneously with clock signals, resulting in a single cell access time within a single clock cycle. It is on synchronous static memory that the first level cache of modern processors is implemented.

Pipelined static memory

Pipelined static memory is synchronous static memory equipped with special "latches" that hold the data lines, allowing the contents of one cell to be read (or written) while the address of the next cell is being transferred.

Pipelined memory can also process several adjacent cells in one working cycle. It is enough to transmit only the address of the first cell of the packet; the chip computes the addresses of the rest on its own, and the controller only has to keep up supplying (or accepting) the data.

Due to the greater hardware complexity of pipeline memory, the access time to the first cell of a packet increases by one clock cycle, however, this practically does not reduce performance, because all subsequent cells in the packet are processed without delay.

Pipelined static memory is used, in particular, in the second-level cache of Pentium II microprocessors; its timing formula is 2-1-1-1.
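Timing formulas of this kind are easy to interpret mechanically: each number is the clocks spent on one word of the burst. A small sketch (helper names are illustrative):

```python
def burst_total_cycles(formula):
    """Total clock cycles for one burst, e.g. '2-1-1-1' -> 2 + 1 + 1 + 1 = 5."""
    return sum(int(part) for part in formula.split("-"))

def words_per_cycle(formula):
    """Average words transferred per clock over the whole burst."""
    parts = formula.split("-")
    return len(parts) / burst_total_cycles(formula)

pentium2_l2 = burst_total_cycles("2-1-1-1")  # 5 cycles for a 4-word burst
```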


5. Conclusion

The history of static memory goes back decades. The memory of the first relay computers was static in nature and for a long time underwent practically no changes; only the element base changed: relays were replaced by vacuum tubes, which were in turn supplanted first by transistors and then by TTL and CMOS microcircuits. But the idea behind static memory was and remains the same...

Unfortunately, there is a barrier between a person and a computer that is difficult to overcome for many - differences in the methods of input, processing and output of information. Accordingly, there are not many specialists who are well versed in computer hardware, and they are always worth their weight in gold.

Since many people like to assemble a computer on their own, guides of this kind provide the most important information about how to assemble and configure a system unit. After all, in order to assemble something sensible and fit for use, you need a fairly clear idea of what you are assembling, for what area of application and, of course, from what components. This is roughly how we can sum up the variety of questions that face a person who decides not to buy a ready-made computer but to assemble one with his own hands, choosing exactly the hardware he needs. Owing to the rapid development of computer technology, and because hardware is constantly modified and new models constantly go on sale, some of this information gradually loses its relevance.



Chapter 7. PC Storage Devices

After studying this chapter you should know:

storage devices of three levels of internal PC memory: microprocessor, main and buffer cache memory, their purpose, main characteristics;

physical and logical structure of main memory, its modules: SIPP, SIMM, DIMM, and types: DRAM, SDRAM, DRDRAM, DDR SDRAM;

methods for addressing main memory cells;

principles of virtual memory organization;

purpose of cache memory at different levels.

Personal computers have three main levels of memory:

microprocessor memory (MPM);

main memory (RAM);

external memory (external storage devices).

An intermediate buffer or cache memory is added to these levels. In addition, many PC devices have their own local memory.

The two most important characteristics of the three main types of memory, capacity and speed, are given in Table 9.1.

Table 9.1. Comparative characteristics of storage devices

The performance of the first two types of storage devices is characterized by their cycle time (t cyc), while the performance of external storage devices is characterized by two parameters: access time (t acc) and read speed (V read):

t cyc is the total time for locating, reading, and writing information (in the literature it is often, somewhat loosely, called the access time);

t acc is the time needed to locate information on the medium;

V read is the speed of sequential reading of adjacent bytes of information.

Let us recall the generally accepted abbreviations: s - second, ms - millisecond, μs - microsecond, ns - nanosecond; 1 s = 10^3 ms = 10^6 μs = 10^9 ns.

Static and dynamic RAM

RAM can be formed from dynamic (Dynamic Random Access Memory - DRAM) or static (Static Random Access Memory - SRAM) type chips.

Static memory has significantly higher performance but is much more expensive than DRAM. In static memory, the elements (cells) are built on various types of flip-flops: circuits with two stable states. After a bit is written to such a cell, it can remain in that state as long as desired; all that is required is power. When a static memory chip is accessed, it is supplied with the full address, which an internal decoder converts into select signals for specific cells. SRAM cells have a short response time (a few nanoseconds), but chips based on them have low density (a few Mbits per package) and high power consumption. Static memory is therefore used mainly as microprocessor memory and as buffer (cache) memory.
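The defining behavior of a static cell, holding its bit indefinitely while powered, with non-destructive reads, can be sketched as a toy model (the class name is illustrative; this is not a circuit simulation):

```python
# Toy model of a static cell: once written, the bit persists indefinitely
# (while "power" is on, i.e. while the object exists) with no refresh.
class StaticCell:
    def __init__(self):
        self.state = 0          # one of the two stable states at power-up

    def write(self, bit):
        self.state = bit & 1    # flip the latch into the other stable state

    def read(self):
        return self.state       # reading does not disturb the stored bit

cell = StaticCell()
cell.write(1)
reads = [cell.read() for _ in range(1000)]  # no refresh needed in between
```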

In dynamic memory, cells are built on the basis of semiconductor areas with the accumulation of charges - a kind of capacitors - that occupy a much smaller area than flip-flops and consume virtually no energy during storage. The capacitors are located at the intersection of the vertical and horizontal bus bars of the matrix; recording and reading information is carried out by applying electrical impulses along those matrix buses that are connected to the elements belonging to the selected memory cell. When accessing the microcircuit, the matrix row address is first supplied to its inputs, accompanied by the RAS signal (Row Address Strobe), then, after some time, the column address, accompanied by the CAS signal (Column Address Strobe). Since capacitors gradually discharge (the charge is stored in the cell for several milliseconds), in order to avoid loss of stored information, the charge in them must be constantly regenerated, hence the name of the memory - dynamic. Recharging wastes both energy and time, and this reduces system performance.
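The two-phase RAS/CAS addressing described above amounts to splitting one flat cell address into a row half and a column half; a minimal sketch, assuming a square matrix (the function name is illustrative):

```python
def split_address(addr, row_bits, col_bits):
    """Split a flat cell address into the (row, column) pair that the
    controller presents in two phases: row first (with RAS), column next (with CAS)."""
    assert 0 <= addr < (1 << (row_bits + col_bits))
    row = addr >> col_bits               # high-order bits accompany RAS
    col = addr & ((1 << col_bits) - 1)   # low-order bits accompany CAS
    return row, col

# A 64K-cell matrix organized as 256 rows x 256 columns:
row, col = split_address(0x1234, row_bits=8, col_bits=8)
```

Multiplexing the address this way is what lets a DRAM chip get by with half as many address pins as an SRAM of the same capacity.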

Dynamic memory cells, compared to static ones, have a longer response time (tens of nanoseconds), but a higher specific density (of the order of tens of Mbits per case) and lower power consumption. Dynamic memory is used to build random access memory devices in the main memory of a PC.

Cache memory

Cache memory has several levels. Levels L1, L2, and L3 are register-based cache memory: high-speed memory of relatively large capacity that serves as a buffer between main memory (OP) and the microprocessor (MP) and makes it possible to speed up operations. Cache registers are not directly accessible to the user, hence the name cache, from a word meaning a hidden store.

Modern motherboards use a pipelined cache with block access (Pipelined Burst Cache). The cache memory stores copies of data blocks of those areas of RAM that were last accessed, and accesses are very likely in the next clock cycles - quick access to this data allows you to reduce the execution time of the next program commands. When the program is executed, data read from the OP with a slight advance is written to the cache memory. The results of operations performed in the MP are also recorded in the cache memory.

Based on the principle of recording results into RAM, there are two types of cache memory:

in a "write-back" cache, the results of operations are first stored only in the cache, and the cache memory controller later rewrites this data to the OP on its own;

in a "write-through" cache, the results of operations are written simultaneously, in parallel, to both the cache and the OP.
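The difference between the two policies can be sketched as a toy model (class and attribute names are illustrative; main memory, the OP, is modeled as a dict):

```python
# Toy model of the two write policies; main memory (the OP) is a dict.
class Cache:
    def __init__(self, memory, write_back):
        self.memory = memory      # main memory behind the cache
        self.lines = {}           # address -> cached copy
        self.dirty = set()        # modified addresses not yet written to OP
        self.write_back = write_back

    def write(self, addr, value):
        self.lines[addr] = value
        if self.write_back:
            self.dirty.add(addr)          # OP will be updated later
        else:
            self.memory[addr] = value     # write-through: OP updated in parallel

    def flush(self):
        for addr in self.dirty:           # controller rewrites dirty data to OP
            self.memory[addr] = self.lines[addr]
        self.dirty.clear()

op_wt, op_wb = {}, {}
Cache(op_wt, write_back=False).write(0x10, 7)   # reaches OP immediately
wb = Cache(op_wb, write_back=True)
wb.write(0x10, 7)
stale_before_flush = 0x10 not in op_wb          # True: OP not yet updated
wb.flush()                                      # now the controller writes it back
```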

Microprocessors, starting with the 80486, have a cache (the level-1 cache, L1) built into the MP core itself, which in particular accounts for their high performance. Pentium microprocessors have separate caches for data and for instructions: in the Pentium and Pentium Pro this memory is small, 8 KB each; later versions of the Pentium have 16 KB each. The Pentium Pro and higher, in addition to the level-1 cache, also have a level-2 (L2) cache built into the microprocessor package, with a capacity of 128 KB to 2048 KB. This integrated cache runs at either the full MP clock speed or half of it.



It should be borne in mind that for any MP an additional level-2 (L2) or level-3 (L3) cache can be used, located on the motherboard outside the MP, with a capacity that can reach several megabytes (a cache on the motherboard counts as level 3 if the MP installed on that board already has a level-2 cache). The cache access time depends on the clock frequency at which the cache operates and is usually 1-2 clock cycles. Thus, for the L1 cache of a Pentium MP the access time is 2-5 ns; for L2 and L3 caches it reaches 10 ns. Cache bandwidth depends on both the access time and the interface throughput and ranges widely, from 300 to 3000 MB/s.

Using cache memory significantly increases system performance. The larger the cache memory, the higher the performance, but this relationship is nonlinear. There is a gradual decrease in the growth rate of the overall computer performance as the cache memory size increases. For modern PCs, the performance gain, as a rule, practically stops after 1 MB of L2 cache. L1, L2, L3 cache memory is created based on static memory chips.

Modern PCs also use a cache between external disk storage devices and RAM; it is usually assigned to level 3 or, if there is an L3 cache on the motherboard, to level 4. This disk cache is created either in a region of RAM or directly in the storage device's own module.

Main memory

When considering the structure of main memory, we can talk about both the physical structure, that is, its main structural components, and the logical structure, that is, its various areas, conditionally allocated for organizing more convenient modes of their use and maintenance.


Static and dynamic RAM

RAM is a collection of special electronic cells, each of which can store a specific 8-bit combination of zeros and ones: 1 byte (8 bits). Each such cell has an address (the byte address) and contents (the byte value). The address is needed to access the contents of the cell, for writing and reading information. Random access memory (RAM) stores information only while the computer is running. The RAM capacity of a modern computer is 32-128 MB.
When a microprocessor performs computational operations, access to any RAM cell must be available at any time. That is why it is called random access memory, RAM (Random Access Memory). RAM is usually built on dynamic chips with random access (Dynamic Random Access Memory, DRAM). Each bit of such memory is represented as the presence (or absence) of charge on a capacitor formed in the structure of the semiconductor crystal. Another, more expensive type of memory, static RAM (Static RAM, SRAM), uses a so-called static flip-flop (a circuit of several transistors) as its elementary cell. Static memory has higher performance and is used, for example, to organize cache memory.

Static memory
Static memory (SRAM) in modern PCs is typically used as L2 cache to cache the bulk of the RAM. Static memory is usually made on the basis of TTL, CMOS or BiCMOS microcircuits and, according to the method of data access, can be either asynchronous or synchronous. Asynchronous is data access that can be performed at any time. Asynchronous SRAM was used on motherboards for the third to fifth generations of processors. The access time to cells of such memory ranged from 15 ns (33 MHz) to 8 ns (66 MHz).
To describe the performance characteristics of RAM, so-called read/write cycles are used. The fact is that when accessing memory, reading or writing the first machine word takes more clock cycles than accessing the three subsequent words. So, for asynchronous SRAM, reading one word is performed in 3 clock cycles, writing in 4 clock cycles, reading several words is determined by the sequence 3-2-2-2 clock cycles, and writing - 4-3-3-3.
Synchronous memory provides access to data not at random times, but synchronously with clock pulses. In between, the memory can prepare the next piece of data for access. Most fifth-generation motherboards use a type of synchronous memory - synchronous-pipelined SRAM (Pipelined Burst SRAM), for which the typical time of a single read/write operation is 3 clock cycles, and a group operation takes 3-1-1-1 clock cycles on the first access and 1-1-1-1 for subsequent calls, which speeds up access by more than 25%.
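The "more than 25%" figure can be checked directly from the cycle sequences quoted above; a quick sketch (the helper name is illustrative):

```python
def cycles(sequence):
    """Total clocks for a 4-word group operation, e.g. '3-2-2-2' -> 9."""
    return sum(int(c) for c in sequence.split("-"))

async_read = cycles("3-2-2-2")   # asynchronous SRAM, 4-word read
pb_read = cycles("3-1-1-1")      # Pipelined Burst SRAM, first access
saving = (async_read - pb_read) / async_read  # fraction of cycles saved
```

Here the saving is 3 cycles out of 9, i.e. a third, which is indeed more than 25%.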

Dynamic memory
Dynamic memory (DRAM) in modern PCs is usually used as general-purpose RAM, as well as memory for the video adapter. Of the types of dynamic memory used in modern and promising PCs, the most famous are DRAM and FPM DRAM, EDO DRAM and BEDO DRAM, EDRAM and CDRAM, Synchronous DRAM, DDR SDRAM and SLDRAM, video memory MDRAM, VRAM, WRAM and SGRAM, RDRAM.
In dynamic memory, bits are represented as the absence or presence of charge on a capacitor in the structure of a semiconductor crystal. Structurally it is packaged, for example, as a SIMM (Single In-line Memory Module). Each bit of information is recorded in a separate memory cell consisting of a capacitor and a transistor. The presence of charge on the capacitor corresponds to a 1 in binary code, its absence to a 0. When switched on, the transistor allows a bit of information to be read or a new bit to be written into an empty memory cell.
A cell is found by address using special decoder circuits arranged as a matrix: row decoders and column decoders cross the memory die horizontally and vertically. When the central processor supplies a cell address, one set of decoders selects the required row and the other the required column; the desired cell lies at their intersection. Once the cell is found, its data byte is fetched.


Async SRAM (asynchronous static memory). This is the cache memory that was used for many years, beginning with the first 386 computers to ship with an L2 cache. It is accessed faster than DRAM and, depending on processor speed, comes in 20-, 15-, or 10-ns access-time grades (the shorter the access time, the faster the memory and the faster burst access to it can be). However, as the name implies, this memory is not fast enough for synchronous access, which means the processor still has to wait, although less than with DRAM.

SyncBurst SRAM (synchronous burst static memory). At bus frequencies below 66 MHz, synchronous burst SRAM is the fastest memory type available. The reason is that if the processor does not run at too high a frequency, synchronous burst SRAM can provide fully synchronous data output, meaning no latency: the processor reads bursts in 2-1-1-1 cycles. When the processor frequency rises above 66 MHz, synchronous burst SRAM cannot cope with the load and outputs data in 3-2-2-2 bursts, which is significantly slower than pipelined burst SRAM. A further disadvantage is that synchronous burst SRAM is produced by fewer companies and therefore costs more. Synchronous burst SRAM has address-to-data times of 8.5 to 12 ns.

There are several key design features of synchronous burst SRAM that make it significantly superior to asynchronous SRAM when used as a high-speed cache:

Synchronization with the system clock. In the simplest sense, this means that all signals are latched on a clock edge. Capturing signals on the clock edge greatly simplifies the design of a high-speed system;

Burst processing. Synchronous burst SRAMs achieve high performance with a small amount of logic by cycling through sequential addresses internally. The four-address burst sequence can be interleaved, for Intel compatibility, or linear, for PowerPC and other systems.

These features enable the microprocessor to access sequential addresses more quickly than is possible with other uses of SRAM technology. Although some vendors offer 3.3 V asynchronous SRAM with a 15 ns address-to-data time, pipelined synchronous burst SRAM built on the same process can achieve an address-to-data time of under 6 ns.

PB SRAM (pipelined burst static memory). Pipelining parallelizes SRAM operations using input and output registers. Filling the registers costs an extra initial cycle, but once they are filled, they provide a fast transition to the next address while the current address is still being read.

This makes it the fastest cache memory for systems with bus frequencies above 75 MHz. PB SRAM can operate at bus frequencies up to 133 MHz. It is also not much slower than synchronous burst SRAM on slower systems: it always outputs data in 3-1-1-1 bursts. How good this memory's performance is can be seen from its address-to-data time, which ranges from 4.5 to 8 ns.

1-T SRAM. As noted earlier, traditional SRAM designs use a static flip-flop to store a single bit per cell. Implementing one such circuit takes 4 to 6 transistors (4-T, 6-T SRAM). Monolithic System Technology (MoSys) announced a new type of memory in which each bit is implemented on a single transistor (1-T SRAM). In fact, DRAM technology is used here, since the memory must be refreshed periodically. However, the interface is in the SRAM standard, while the regeneration cycles are hidden from the memory controller. 1-T circuits can reduce silicon die size by 50-80% compared with traditional SRAM, and power consumption by 75%.



The AMIC Technology company is already quite well known in the Russian memory-chip market. As a successor to the well-known UMC Group, AMIC Technology continues to produce a full range of memory products. As for where memory chips are used, there is little to say: they are used everywhere. And while everything is more or less clear with read-only memory, choosing RAM is a rather difficult task. For as long as microelectronics has existed, the question has remained: which is better, slow, awkward to manage, but cheap dynamic memory, or fast, directly interfaced with the processor, but expensive static memory? Perhaps there is now a compromise solution.

How static memory works

Static memory is called static precisely because the information in it is "static": whatever you put there, you can take from there after any period of time. This persistence is achieved by using an ordinary flip-flop as the basic element, assembled, for example, from a pair of transistors.

The P-N junctions of the transistors, held under constant bias, reliably keep the potential at either the supply rail or ground (neglecting the voltage drop across the junction itself), so only two stable states are possible, conventionally called "0" and "1". The transistors sit on a silicon substrate, inside which the P-N junctions are formed.

Thus, the simplest static memory element with a capacity of 1 bit can be considered a flip-flop built on four P-N junctions. Now, if eight such flip-flops are grouped together and each is wired to an output of a 3-to-8 decoder, you get a simple memory cell with a capacity of 1 byte, which can already be addressed by applying the corresponding value to the decoder inputs. By building a row of such cells and placing a higher-order decoder in front of them, we obtain a full-fledged static memory chip. The speed of retrieving data from static memory is determined only by the transient time in the semiconductors, and it is very short: the access time of static memory is measured in units of nanoseconds. Power consumption is determined mainly by the current through the P-N junctions. And finally, the most attractive feature of static memory is the possibility of direct interfacing with the processor, since addressing is carried out directly over the address bus by specifying the cell number (address).
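The decoder-plus-flip-flop scheme just described can be sketched as a toy model (names are illustrative; this illustrates the addressing logic, not the circuit):

```python
# Toy model: an n-to-2^n decoder asserts exactly one select line,
# which picks one byte-wide row of flip-flops.
def decode(address, n_bits):
    """Return a one-hot list of 2^n select lines."""
    lines = [0] * (1 << n_bits)
    lines[address] = 1
    return lines

class ToySRAM:
    def __init__(self, n_address_bits):
        self.n_bits = n_address_bits
        self.cells = [0] * (1 << n_address_bits)  # one byte per row

    def write(self, address, byte):
        for i, selected in enumerate(decode(address, self.n_bits)):
            if selected:                  # only the selected row latches data
                self.cells[i] = byte & 0xFF

    def read(self, address):
        return self.cells[address]

ram = ToySRAM(3)        # a 3-to-8 decoder addressing eight 1-byte cells
ram.write(5, 0xAB)
```

Note how the decoder output count doubles with every address bit, which is exactly the scaling problem discussed next.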

Despite all these advantages, static memory has some serious disadvantages. What happens if we want to build a static memory of very large size? Besides placing a huge number of flip-flops, we must somehow drive a decoder with a huge number of outputs. It is no secret that decoder complexity grows with the number of addressable objects. A 1-to-2 decoder is built from a single flip-flop with direct and inverse outputs; a 2-to-4 decoder already takes 4 elements; now try building a 10-to-1024 decoder, and that is only 1 kilobit! Cascaded decoders are used, but speed suffers as a result. Of course, anything can be done, but it has to be paid for, as the cost of fast, large-capacity static memory proves.

How dynamic memory works

Michael Faraday, while conducting experiments on the passage of electric current through a capacitor, noticed that the latter was capable of storing information about the initial conditions. This property of a capacitor, its capacitance, is used to build a dynamic memory element. Consider an uncharged capacitor, with zero potential difference between its terminals. Let us apply a voltage equal to the supply voltage to the capacitor for some time. What does "for some time" mean? It is the time during which the charge flows from the input terminals onto the capacitor plates. After this time we disconnect the capacitor from the source. In theory, this capacitor will store the applied voltage indefinitely, thus behaving much like the two-transistor flip-flop.

All this would be fine if it were not for real life. The dielectric is an oxide film of some metal (say, aluminum). This film has a small but nonzero conductivity, so the capacitor gradually discharges through it, dissipating heat and losing the stored information. As soon as the voltage on the capacitor reaches the minimum permissible value, we reconnect the supply voltage, charge it again, and then disconnect the terminals once more. This is the well-known and much-disliked procedure of dynamic memory regeneration (refresh), which the dynamic memory controller performs at regular intervals.
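The discharge just described follows the usual exponential law for a capacitor leaking through a resistance, V(t) = V0 · exp(-t / RC). The sketch below estimates how soon a cell must be refreshed; all component values are invented for illustration and are not taken from any real DRAM cell.

```python
import math

# Self-discharge of a "DRAM cell" capacitor through its leaky
# dielectric: V(t) = V0 * exp(-t / (R_leak * C)).
# All values below are assumptions for illustration.
V0 = 1.8          # voltage of a charged cell, volts (assumed)
R_leak = 1e12     # leakage resistance of the oxide film, ohms (assumed)
C = 30e-15        # cell capacitance, farads (assumed)
V_min = 0.9       # minimum voltage still readable as "1" (assumed)

tau = R_leak * C                       # time constant of the discharge
t_refresh = tau * math.log(V0 / V_min) # time to decay from V0 to V_min
print(f"tau = {tau * 1e3:.1f} ms, must refresh within {t_refresh * 1e3:.1f} ms")
```

The point of the exercise is the order of magnitude: the refresh deadline is a fraction of the RC time constant, which is why regeneration must run continuously in the background.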

Dynamic memory is not addressed by the processor's address signals directly: those signals pass through a dynamic memory controller, which also generates the RAS and CAS signals. Dynamic memory has a matrix structure, and the RAS signal (row address strobe) gates the selection of a row, while the CAS signal (column address strobe) gates the selection of a column within that row. Without the RAS and CAS signals, dynamic memory becomes useless, as without regeneration it can hold information for only a few milliseconds. At first glance, everything about dynamic memory is bad: an external controller is required and the control logic is complex. But there are also significant advantages. It is much easier to create a matrix of capacitors than a matrix of flip-flops; it is enough to "insert" dielectrics in the right places, which means that dynamic memory is much cheaper than static memory. If a large dynamic memory is needed, that is also no problem: "insert" dielectrics more often and regenerate faster. This is why dynamic memory has become more widespread than static memory.
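The row/column addressing described above can be sketched in software: the controller splits a flat address into a row part (presented during the RAS phase) and a column part (presented during the CAS phase) on the same shared address pins. The 10-bit widths below are illustrative.

```python
# Splitting a flat address into row and column parts, as a DRAM
# controller does before issuing RAS and CAS. Widths are illustrative:
# 10 + 10 bits give a 1024 x 1024 matrix, i.e. 1 Mbit.
ROW_BITS, COL_BITS = 10, 10

def split_address(addr: int) -> tuple[int, int]:
    row = addr >> COL_BITS               # high bits, sent in the RAS phase
    col = addr & ((1 << COL_BITS) - 1)   # low bits, sent in the CAS phase
    return row, col

row, col = split_address(0b1100110011_0101010101)
print(row, col)  # row and column indices multiplexed onto the address pins
```

Multiplexing the address this way halves the number of address pins, which is one reason the matrix organization made dynamic memory cheap.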

Dynamic core + static interface = SuperRAM

Someday all dreams come true. Engineers dreamed of dynamic memory with a static interface, and got SuperRAM from AMIC Technology. The idea is extremely simple: if dynamic memory requires an additional controller, why not build that controller into the memory chip itself? The reader may reasonably ask why this is necessary, since modern microprocessors and microcontrollers already have dynamic memory interfaces. The answer: yes, but microcontrollers with such an interface stand out sharply in price, naturally on the high side. Furthermore, in the vast majority of cases these are 32-bit processors running at high clock speeds, for which using dynamic memory is impractical at best (unless, of course, very large amounts are required). Third, most applications are still eight- and sixteen-bit, with no DRAM controller and correspondingly modest performance, yet the required amount of memory is often very large. It is precisely for such applications that SuperRAM from AMIC Technology exists.

The operation of such memory is quite simple. Regeneration of the SuperRAM dynamic core happens automatically after a certain time has passed (when the voltage on the capacitors drops below the critical value), and strobing is performed continuously. When the processor requests a specific cell, its address arrives at the input buffer of the SuperRAM chip, and with the very first strobe it is passed to the SuperRAM core, from which the value is read out. The processor does not care that dynamic memory is connected to it; it works with it as with somewhat slower static memory. The advantages of SuperRAM are obvious: a direct interface to absolutely any processor or device that has a data bus, an address bus, and chip-select and write signals; no additional regeneration controller; large capacity thanks to the dynamic core; and low cost. As an example, here are the technical characteristics of one of the latest members of the SuperRAM family from AMIC Technology, the A64E16161 chip:

  1. Capacity: 32 Mbit, organized as 2 M × 16 bit.
  2. Address access time: 70 ns.
  3. Page access time: 25 ns.
  4. Operating current: 20 mA; standby current: 10 µA.
  5. Fully compatible with the SRAM interface; no regeneration or strobing required.
  6. Supply voltage: 1.65 to 2.2 V.
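The behavior described above can be sketched as a thin wrapper that hides the refresh bookkeeping behind a plain read/write interface, which is all the processor side ever sees. The class and timing constant below are invented for illustration; they are in no way AMIC's actual design.

```python
import time

REFRESH_PERIOD = 0.015  # seconds a cell may go unrefreshed (assumed)

class SuperRAM:
    """A DRAM-like core behind an SRAM-like read/write interface."""
    def __init__(self, size: int):
        self.cells = [0] * size
        self.last_refresh = time.monotonic()

    def _maybe_refresh(self):
        # The built-in controller recharges the cells periodically,
        # invisibly to the processor side of the interface.
        now = time.monotonic()
        if now - self.last_refresh >= REFRESH_PERIOD:
            self.last_refresh = now  # (actual cell rewrite elided)

    # To the outside world these look like ordinary SRAM accesses.
    def read(self, addr: int) -> int:
        self._maybe_refresh()
        return self.cells[addr]

    def write(self, addr: int, value: int):
        self._maybe_refresh()
        self.cells[addr] = value

ram = SuperRAM(1024)
ram.write(42, 0xAB)
print(hex(ram.read(42)))  # -> 0xab
```

The design point is that regeneration is an internal concern: the caller needs only address, data, select, and write, exactly as with static memory.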

The future of SuperRAM

To say that such a solution has a future is to say nothing. AMIC Technology has now reached the 32 Mbit mark and does not intend to stop there. As early as the beginning of 2004, mass production of 64 Mbit SuperRAM chips on a 0.13-micron process is planned. Access time will also be reduced significantly, and the 2.0 V supply voltage is one of the chips' advanced features. In capabilities and cost, such products can compete with existing memory modules such as SIMM, DIMM, SDRAM and even DDR, which matters when designing new-generation systems.