What does the number of cores affect? Do you need a multi-core processor? How to check how many cores are running

The race for additional performance in the processor market can only be won by manufacturers who, with current production technology, can strike a reasonable balance between clock speed and the number of processing cores. The transition to 90- and 65-nm process technology made it possible to create processors with a large number of cores, largely thanks to new headroom in heat dissipation and die size, which is why we are now seeing more and more quad-core processors. But what about software? How well does it scale from one to two or four cores?

In an ideal world, programs that are optimized for multithreading allow the operating system to distribute multiple threads across the available processing cores, be it a single processor or multiple processors, single core or multiple. Adding new cores allows for greater performance gains than any increase in clock speed. This actually makes sense: more workers will almost always complete a task faster than fewer, faster workers.
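As a rough illustration of that idea, here is a minimal sketch (in Python, with hypothetical function names) of handing chunks of one job to multiple worker processes that the operating system can schedule on separate cores:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # One worker's share of the job: sum the integers in [start, stop).
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    # Split the range into one chunk per worker so the OS can schedule
    # each chunk on a separate core, then combine the partial results.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a single-core sum, but the work is divided.
    print(parallel_sum(1_000_000))
```

Note that the speedup only materializes when the chunks are large enough to outweigh the cost of starting workers and moving data between them.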

But does it make sense to equip processors with four or even more cores? Is there enough work to keep four or more cores busy? Do not forget that it is very difficult to distribute work between cores in a way that keeps physical interfaces such as HyperTransport (AMD) or the Front Side Bus (Intel) from becoming a bottleneck. There is a third option: the mechanism that distributes the load between cores, namely the OS scheduler, can also become a bottleneck.

AMD's transition from one to two cores was almost flawless, since the company did not push the thermal envelope to extremes, as Intel did with the Pentium 4. As a result, the Athlon 64 X2 processors were expensive but quite reasonable, while the Pentium D 800 line became notorious for running hot. But Intel's 65-nm processors, and the Core 2 line in particular, changed the picture. Unlike AMD, Intel was able to combine two Core 2 Duo dies in one package, resulting in the modern Core 2 Quad. AMD promises to release its own quad-core Phenom X4 processors by the end of this year.

In our article we will look at the Core 2 Duo configuration with four cores, two cores and one core. And let's see how well the performance scales. Is it worth switching to four cores today?

Single core

The term “single-core” refers to a processor with one computing core. This covers almost all processors from the original 8086 up to the Athlon 64 and Intel Pentium 4. Until the manufacturing process became fine enough to fit two computing cores on a single die, each transition to a smaller process technology was used to reduce operating voltage, increase clock speeds, or add functional blocks and cache memory.

Running a single-core processor at high clock speeds may give better performance in a single application, but such a processor can only execute one thread at a time. Intel implemented the Hyper-Threading principle, which emulates the presence of multiple cores for the operating system. HT technology made it possible to better fill the long pipelines of Pentium 4 and Pentium D processors. The performance increase was small, but system responsiveness was definitely better. And in a multitasking environment this can matter even more, since you can keep working while your computer crunches through a specific task.

Since dual-core processors are so cheap these days, we don't recommend going for single-core processors unless you want to save every penny.


The Core 2 Extreme X6800, running at 2.93 GHz, was the fastest processor in the Intel Core 2 line at launch. Today, dual-core processors have reached 3.0 GHz, albeit on the faster FSB1333 bus.

Upgrading to two processor cores means twice the processing power, but only in applications optimized for multi-threading. Typically these are professional programs that demand high processing power. Still, a dual-core processor makes sense even if you only use your computer for e-mail, web browsing and office documents. On the one hand, modern dual-core models do not consume much more power than single-core ones. On the other hand, the second computing core not only adds performance but also improves system responsiveness.

Have you ever waited for WinRAR or WinZip to finish compressing files? On a single-core machine, you are unlikely to be able to switch between windows quickly. Even DVD playback can load a core as heavily as a complex task. A dual-core processor makes it much easier to run multiple applications simultaneously.

AMD's dual-core processors contain two full cores with cache memory, an integrated memory controller and a crossbar that arbitrates access to memory and to the HyperTransport interface. Intel's approach with the first Pentium D was similar, installing two Pentium 4 cores in one physical processor. Since the memory controller is part of the chipset, the system bus must be used both for communication between cores and for memory access, which imposes certain performance limitations. The Core 2 Duo features more advanced cores that deliver better performance per clock and a better performance-per-watt ratio. The two cores share a common L2 cache, which lets them exchange data without using the system bus.

The Core 2 Quad Q6700 operates at 2.66 GHz and combines two Core 2 Duo dies in one package.

If there are many reasons to switch to dual-core processors today, four cores do not yet look as convincing. One reason is the limited optimization of software for multiple threads, but there are also architectural issues. Although AMD criticizes Intel for packing two dual-core dies into a single processor, deeming it not a "true" quad-core CPU, Intel's approach works well because the processors actually deliver quad-core performance. From a production standpoint, it is easier to achieve high yields of usable dies and to build more products from small dies, which can then be combined into a new, more powerful product on a new process technology. As for performance, there are bottlenecks: the two dies communicate with each other over the system bus, so it is difficult to manage multiple cores spread across several dies. On the other hand, having multiple dies allows better power savings, since the frequencies of individual cores can be adjusted to the needs of the application.

True quad-core processors place four cores, along with cache memory, on a single die. What matters here is the presence of a shared unified cache. AMD will implement this approach by giving each core 512 KB of L2 cache and adding an L3 cache shared by all cores. AMD's advantage is that it will be possible to turn off certain cores and speed up others to get better performance in single-threaded applications. Intel will follow the same path, but not before introducing the Nehalem architecture in 2008.

System-information utilities such as CPU-Z report the number of cores and cache sizes, but not the processor's internal layout. You won't see that the Core 2 Quad (or the quad-core Extreme Edition shown in the screenshot) consists of two dies.
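If you just need the core count rather than the package layout, even a couple of lines of Python will report it. Note that this returns logical processors as the OS sees them, so Hyper-Threading siblings are counted too, and, like CPU-Z, it says nothing about how the dies are packaged:

```python
import os

# os.cpu_count() reports logical processors as the OS sees them --
# with Hyper-Threading enabled this is higher than the physical core
# count, and it says nothing about the die layout inside the package.
logical = os.cpu_count()
print(f"Logical processors visible to the OS: {logical}")
```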


In the first years of the new millennium, when CPU frequencies finally passed the 1 GHz mark, some companies (let's not point fingers at Intel) predicted that the new NetBurst architecture could eventually reach frequencies of about 10 GHz. Enthusiasts expected the advent of a new era, when CPU clock speeds would grow like mushrooms after rain. Need more performance? Just upgrade to a faster-clocked processor.

Newton's apple fell loudly on the heads of dreamers who saw megahertz as the easiest way to keep PC performance growing. Physical limitations did not allow an exponential increase in clock frequency without a corresponding increase in heat generation, and other manufacturing problems arose as well. Indeed, in recent years the fastest processors have operated at frequencies between 3 and 4 GHz.

Of course, progress cannot be stopped when people are willing to pay for it - there are plenty of users ready to pay a considerable amount for a more powerful computer. So engineers began looking for other ways to increase performance, in particular by improving the efficiency of instruction execution rather than relying on clock speed alone. Parallelism also turned out to be a solution: if you can't make the CPU faster, why not add a second processor of the same kind to increase computing resources?

The Pentium EE 840 was the first dual-core CPU to appear at retail.

The main problem with parallelism is that software must be written specifically to distribute the load across multiple threads, meaning you don't get an immediate bang for your buck, unlike with frequency. When the first dual-core processors came out in 2005, they didn't offer much of a performance boost, because desktop PCs had very little software to take advantage of them. In fact, most dual-core CPUs were slower than single-core CPUs in most tasks, because the single-core CPUs ran at higher clock speeds.

However, four years have passed, and a lot has changed during them. Many software developers have optimized their products to take advantage of multiple cores. Single-core processors are now harder to find on sale, and dual-, triple- and quad-core CPUs are considered quite common.

But the question arises: how many CPU cores do you really need? Is a triple-core processor enough for gaming, or is it better to pay extra for a quad-core chip? Is a dual-core processor enough for the average user, or do more cores really make a difference? Which applications are optimized for multiple cores, and which respond only to changes in specifications such as frequency or cache size?

We thought it was a good time to test applications from our updated benchmark suite (the update is not finished yet) on single-, dual-, triple- and quad-core configurations to understand how valuable multi-core processors are in 2009.

To ensure fair tests, we chose a quad-core processor - an Intel Core 2 Quad Q6600 overclocked to 2.7 GHz. After running the tests on our system, we then disabled one of the cores, rebooted, and repeated the tests. We sequentially disabled the cores and obtained results for different numbers of active cores (from one to four), while the processor and its frequency did not change.

Disabling CPU cores under Windows is very easy. If you want to know how, type "msconfig" into the Windows Vista "Start Search" box and press Enter. This will open the System Configuration utility.

In it, go to the "Boot" tab and click the "Advanced options..." button.

This will cause the BOOT Advanced Options window to appear. Select the "Number of Processors" checkbox and specify the required number of processor cores that will be active in the system. Everything is very simple.

After you confirm, the utility will prompt you to reboot. After rebooting, you can see the number of active cores in the Windows Task Manager, which is opened by pressing Ctrl+Shift+Esc.
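As an aside, on Windows Vista and later the same boot option can also be set from an elevated command prompt with `bcdedit`. This is, as far as we know, an equivalent of the msconfig checkbox; treat it as a sketch, apply at your own risk, and remove the value afterwards to restore all cores:

```shell
:: Equivalent of the msconfig "Number of processors" checkbox.
:: Limits the next boot to two active cores:
bcdedit /set {current} numproc 2

:: Remove the limit again to boot with all cores enabled:
bcdedit /deletevalue {current} numproc
```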

Select the "Performance" tab in the "Task Manager". In it you can see load graphs for each processor/core (whether it is a separate processor/core or a virtual processor, as we get in the case of Core i7 with active Hyper-Threading support) in the item “CPU Usage History”. Two graphs mean two active cores, three - three active cores, etc.

Now that you have become familiar with the methodology of our tests, let us move on to a detailed examination of the configuration of the test computer and programs.

Test configuration

System hardware
CPU: Intel Core 2 Quad Q6600 (Kentsfield), 2.7 GHz, FSB-1200, 8 MB L2 cache
Platform: MSI P7N SLI Platinum, Nvidia nForce 750i, BIOS A2
Memory: A-Data EXTREME DDR2 800+, 2 x 2048 MB, DDR2-800, CL 5-5-5-18 at 1.8 V
HDD: Western Digital Caviar WD5000AAJS-00YFA, 500 GB, 7200 rpm, 8 MB cache, SATA 3.0 Gbit/s
Network: Integrated nForce 750i Gigabit Ethernet controller
Video card: Gigabyte GV-N250ZL-1GI 1 GB DDR3 PCIe
Power supply: Ultra HE1000X, ATX 2.2, 1000 W
Software and Drivers
Operating system: Microsoft Windows Vista Ultimate 64-bit 6.0.6001, SP1
DirectX version: DirectX 10
Platform driver: nForce driver version 15.25
Graphics driver: Nvidia Forceware 182.50

Tests and settings

3D games
Crysis Quality settings set to lowest, Object Detail to High, Physics to Very High, version 1.2.1, 1024x768, Benchmark tool, 3-run average
Left 4 Dead Quality settings set to lowest, 1024x768, version 1.0.1.1, timed demo.
World in Conflict Quality settings set to lowest, 1024x768, Patch 1.009, Built-in benchmark.
iTunes Version: 8.1.0.52, Audio CD ("Terminator II" SE), 53 min., Default format AAC
Lame MP3 Version: 3.98 (64-bit), Audio CD "Terminator II" SE, 53 min, wave to MP3, 160 Kb/s
TMPEG 4.6 Version: 4.6.3.268, Import File: "Terminator II" SE DVD (5 Minutes), Resolution: 720x576 (PAL) 16:9
DivX 6.8.5 Encoding mode: Insane Quality, Enhanced Multi-Threading, Enabled using SSE4, Quarter-pixel search
XviD 1.2.1 Display encoding status=off
Main Concept Reference 1.6.1 MPEG2 to MPEG2 (H.264), MainConcept H.264/AVC Codec, 28 sec HDTV 1920x1080 (MPEG2), Audio: MPEG2 (44.1 KHz, 2 Channel, 16-Bit, 224 Kb/s), Mode: PAL (25 FPS), Profile: Tom's Hardware Settings for Qct-Core
Autodesk 3D Studio Max 2009 (64-bit) Version: 2009, Rendering Dragon Image at 1920x1080 (HDTV)
Adobe Photoshop CS3 Version: 10.0x20070321, Filtering from a 69 MB TIF-Photo, Benchmark: Tomshardware-Benchmark V1.0.0.4, Filters: Crosshatch, Glass, Sumi-e, Accented Edges, Angled Strokes, Sprayed Strokes
Grisoft AVG Antivirus 8 Version: 8.0.134, Virus base: 270.4.5/1533, Benchmark: Scan 334 MB Folder of ZIP/RAR compressed files
WinRAR 3.80 Version 3.80, Benchmark: THG-Workload (334 MB)
WinZip 12 Version 12, Compression=Best, Benchmark: THG-Workload (334 MB)
3DMark Vantage Version: 1.02, GPU and CPU scores
PCMark Vantage Version: 1.00, System, Memory, Hard Disk Drive benchmarks, Windows Media Player 10.00.00.3646
SiSoftware Sandra 2009 SP3 CPU Test=CPU Arithmetic/MultiMedia, Memory Test=Bandwidth Benchmark

Test results

Let's start with the results of the synthetic tests, so that we can then evaluate how well they correspond to real-world tests. Keep in mind that synthetic tests are written with the future in mind, so they should respond more strongly to changes in core count than real applications do.

We'll start with 3DMark Vantage, a synthetic gaming performance test. We chose the "Entry" preset, which runs 3DMark at the lowest available resolution, so that CPU performance has a stronger effect on the result.

The almost linear growth is quite interesting. The biggest increase is observed when moving from one core to two, but even then the scalability is quite noticeable. Now let's move on to the PCMark Vantage test, which is designed to show overall system performance.

PCMark results suggest that the end user will benefit from increasing the number of CPU cores to three, and the fourth core, on the contrary, will slightly reduce performance. Let's see what causes this result.

In the memory subsystem test, we again see the largest performance increase when moving from one CPU core to two.

The productivity test, it seems to us, has the greatest impact on the overall PCMark score, since in this case the performance increase also ends at three cores. Let's see if the results of another synthetic test, SiSoft Sandra, are similar.

We'll start with SiSoft Sandra's arithmetic and multimedia tests.


Synthetic tests demonstrate a fairly linear increase in performance when moving from one CPU core to four. This test is written specifically to make efficient use of four cores, but we doubt real-world applications will see the same linear progression.

The Sandra memory test also suggests that three cores will give more memory bandwidth in iSSE2 integer buffered operations.

After synthetic tests, it's time to see what we get in application tests.

Audio encoding has traditionally been a segment where applications either did not benefit greatly from multiple cores or were not optimized by developers. Below are the results from Lame and iTunes.

Lame doesn't show much benefit when using multiple cores. Interestingly, we see a small performance increase with an even number of cores, which is quite strange. However, the difference is small, so it may simply be within the margin of error.

As for iTunes, we see a small performance boost after activating two cores, but more cores don't do anything.

It turns out that neither Lame nor iTunes are optimized for multiple CPU cores for audio encoding. On the other hand, as far as we know, video encoding programs are often highly optimized for multiple cores due to their inherently parallel nature. Let's look at the video encoding results.

We'll start our video encoding tests with the MainConcept Reference.

Notice how strongly the number of cores affects the result: encoding time drops from nine minutes on a single 2.7 GHz Core 2 core to just two minutes and 30 seconds with all four cores active. Clearly, if you transcode video often, a quad-core processor is the better buy.

Will we see similar benefits in TMPGEnc tests?

Here you can see how much depends on the encoder. While the DivX encoder is highly optimized for multiple CPU cores, Xvid shows a less pronounced advantage. Still, even Xvid cuts encoding time by 25% when moving from one core to two.

Let's begin the graphics tests with Adobe Photoshop.

As you can see, the CS3 version does not notice the extra cores. A strange result for such a popular program, though we admit we did not test the latest version, Photoshop CS4. The CS3 results are still not inspiring.

Let's take a look at the 3D rendering results in Autodesk 3ds Max.

It's quite obvious that Autodesk 3ds Max "loves" extra cores. The program had this trait even when it ran under DOS, since 3D rendering took so long that the work had to be distributed across several computers on a network. Again, for programs like this a quad-core processor is highly desirable.

The antivirus scanning test is very close to real-life conditions, since almost everyone uses an antivirus.

AVG antivirus demonstrates a wonderful performance increase with increasing CPU cores. During an antivirus scan, computer performance can drop dramatically, and the results clearly show that multiple cores significantly reduce scan time.


WinZip and WinRAR do not gain noticeably from multiple cores. WinRAR shows a performance increase on two cores, but nothing more. It will be interesting to see how the just-released version 3.90 performs.

In 2005, when desktop computers with two cores began to appear, there simply weren't any games that showed performance gains when moving from single-core to multi-core CPUs. But times have changed. How do multiple CPU cores affect modern games? Let's launch a few popular titles and see. We ran the gaming tests at a low resolution of 1024x768 and with low graphical detail to minimize the impact of the video card and isolate how much the games depend on CPU performance.

Let's start with Crysis. We reduced all options to a minimum except for object detail, which we set to "High", and also Physics, which we set to "Very High". As a result, game performance should be more dependent on the CPU.

Crysis showed an impressive dependence on the number of CPU cores, which is quite surprising, since we thought it responded more to video card performance. In any case, you can see that in Crysis a single-core CPU gives frame rates half as high as four cores (remember, though, that if the game depends more on the video card, the spread of results across core counts will be smaller). It's also interesting that Crysis can only use three cores, since adding a fourth doesn't make a noticeable difference.

But we know that Crysis uses physics calculations seriously, so let's see what the situation would be in a game with less advanced physics. For example, in Left 4 Dead.

Interestingly, Left 4 Dead shows a similar result, although the lion's share of the performance increase comes after adding a second core. There is a slight increase when moving to three cores, but this game does not require a fourth core. Interesting trend. Let's see how typical it will be for the real-time strategy World in Conflict.

The results are again similar, but we see a surprising feature - three CPU cores give slightly better performance than four. The difference is close to the margin of error, but this again confirms that the fourth core is not used in games.

It's time to draw conclusions. Since we received a lot of data, let's simplify the situation by calculating the average performance increase.

First, we would like to say that synthetic test results are too optimistic about multiple cores compared with real applications. The performance gain in synthetic tests when moving from one core to several looks almost linear, with each new core adding 50% performance.

In applications, we see more realistic progress: about a 35% increase from the second CPU core, 15% from the third and 32% from the fourth. It's odd that the third core gives only half the benefit that the fourth one does.
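Those averages compound; a quick back-of-the-envelope check using the figures above shows what a quad-core buys overall:

```python
# Average per-core application gains reported above: +35% from the
# second core, +15% from the third, +32% from the fourth.
gains = {2: 1.35, 3: 1.15, 4: 1.32}

speedup = 1.0
for cores in sorted(gains):
    speedup *= gains[cores]
    print(f"{cores} cores: ~{speedup:.2f}x one core")
# Four cores therefore come out to roughly 2.05x a single core
# at the same clock speed -- well short of the synthetic 4x ideal.
```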

With applications, however, it is better to look at individual programs rather than the overall result. Indeed, audio encoding applications do not benefit at all from more cores. Video encoding applications, on the other hand, benefit greatly from more CPU cores, although this depends quite a lot on the encoder used. The 3D rendering program 3ds Max is heavily optimized for multi-core environments, while 2D photo-editing applications like Photoshop do not respond to core count. AVG antivirus showed a significant performance increase on several cores, but the gain in file compression utilities is not as big.

As for games, moving from one core to two raises performance by 60%, and adding a third core gives another 25% gain. The fourth core provides no advantage in the games we selected. Of course, with more games the situation could change, but in any case the triple-core Phenom II X3 processors look like a very attractive and inexpensive choice for gamers. Note that as you move to higher resolutions and add visual detail, the difference due to core count will shrink, as the graphics card becomes the deciding factor in frame rates.


Four cores.

With everything said and done, a number of conclusions can be drawn. Overall, you don't need to be a professional user to benefit from installing a multi-core CPU. The situation has changed significantly compared to four years ago. The difference may not seem significant at first glance, but it is remarkable how much applications have been optimized for multithreading in the last few years, especially those that can gain significant performance from it. In fact, today there is no point in recommending single-core CPUs (if you can still find them), with the exception of low-power solutions.

In addition, there are applications for which users are advised to buy processors with as many cores as possible. Among them are video encoding, 3D rendering and other optimized workstation applications, including antivirus software. As for gamers, the days when a single-core processor with a powerful graphics card was enough are gone.

* There are always pressing questions about what to pay attention to when choosing a processor so as not to make a mistake.

Our goal in this article is to describe all the factors affecting processor performance and other operational characteristics.

It's probably no secret that the processor is the main computing unit of a computer. You could even say – the most important part of the computer.

It handles almost all the processes and tasks that occur on the computer.

Be it watching videos, listening to music, web surfing, writing to and reading from memory, 3D and video processing, games, and much more.

Therefore, choosing a central processor (CPU) should be approached very carefully. It may turn out that you decide to install a powerful video card along with a processor that does not match its level. In that case, the processor will not unlock the video card's potential, which will slow it down. The processor will be fully loaded and literally boiling, while the video card waits its turn, working at 60-70% of its capability.

That is why, when building a balanced computer, you should not neglect the processor in favor of a powerful video card. The processor must be powerful enough to unleash the video card's potential; otherwise it's just wasted money.

Intel vs. AMD

*forever playing catch-up

Intel has enormous human resources and almost inexhaustible finances. Many innovations in the semiconductor industry and new technologies come from this company. Intel's processors and developments are, on average, 1-1.5 years ahead of what AMD's engineers achieve. But as you know, you have to pay for the privilege of owning the most modern technology.

Intel's processor pricing is based not only on the number of cores and the amount of cache, but also on the "freshness" of the architecture, performance per clock and per watt, and the chip's process technology. The meaning of cache memory, the "subtleties of the process technology" and other important processor characteristics are discussed below. You will also have to pay extra for technologies such as an unlocked frequency multiplier.

AMD, unlike Intel, strives to make its processors affordable for the end consumer and to maintain a sensible pricing policy.

One could even say that AMD is a "people's brand". In its price lists you will find what you need at a very attractive price. Usually, a year after a new technology appears at Intel, an analogue appears from AMD. If you don't chase maximum performance and pay more attention to the price tag than to having the latest technologies, then AMD's products are just for you.

AMD's pricing is based mostly on the number of cores and very little on the amount of cache memory or the presence of architectural improvements. In some cases you will have to pay a little extra for a level-3 cache (Phenom has an L3 cache; Athlon has to be content with L2 only). But sometimes AMD spoils its fans with the ability to unlock cheaper processors into more expensive ones: you can unlock cores or cache memory, upgrading an Athlon into a Phenom. This is possible thanks to the modular architecture - lacking some cheaper models, AMD simply disables (in software) some blocks on the die of more expensive ones.

The cores themselves remain practically unchanged; only their number differs (true for 2006-2011 processors). Thanks to this modularity, the company does an excellent job of selling rejected chips, which, with some blocks disabled, become processors in a less productive line.

The company spent many years working on a completely new architecture codenamed Bulldozer, but at its release in 2011 the new processors did not show their best performance. AMD blamed operating systems for not understanding the architectural features of its dual-core modules and its approach to multi-threading.

According to company representatives, special fixes and patches were needed to experience the full performance of these processors. However, at the beginning of 2012, company representatives postponed the release of the update supporting the Bulldozer architecture to the second half of the year.

Processor frequency, number of cores, multi-threading.

In the days of the Pentium 4 and before it, clock frequency was the main performance factor when choosing a processor.

This is not surprising, because processor architectures were developed specifically to reach high frequencies, which was especially true of the NetBurst-based Pentium 4. High frequency was not effective with the long pipeline that architecture used: even a 2 GHz Athlon XP outperformed a 2.4 GHz Pentium 4. So it was pure marketing. After this, Intel recognized its mistakes and went back to working on performance per clock rather than chasing frequency. The NetBurst architecture had to be abandoned.

So what does multi-core actually give us?

A quad-core processor running at 2.4 GHz is, in multi-threaded applications, theoretically the approximate equivalent of a single-core processor at 9.6 GHz, or a dual-core processor at 4.8 GHz. But that's only in theory. In practice, two dual-core processors in a dual-socket motherboard will be faster than one quad-core at the same clock speed: bus bandwidth limits and memory latency take their toll.
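The gap between that theory and practice is usually described by Amdahl's law: any serial portion of the work caps the achievable speedup no matter how many cores are added. A minimal sketch, assuming a simple two-term model of serial versus parallel work:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: the serial part of a program limits the total
    # speedup regardless of how many cores are available.
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# A perfectly parallel workload matches the naive "4 x 2.4 GHz" math...
print(round(amdahl_speedup(1.0, 4), 2))  # → 4.0
# ...but with just 20% serial work, four cores deliver 2.5x, not 4x.
print(round(amdahl_speedup(0.8, 4), 2))  # → 2.5
```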

* subject to the same architecture and amount of cache memory

Multi-core makes it possible to split instructions and calculations into parts. For example, suppose you need to perform three arithmetic operations. The first two are executed on separate cores and the results are written to cache memory, where any free core can perform the next operation on them. The system is very flexible, but without proper optimization it may not help at all, which is why multi-core optimization in the OS environment is so important for the processor architecture.
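The pattern just described can be sketched as follows. This is a hypothetical illustration only: in CPython, threads don't run arithmetic truly in parallel because of the GIL, so real code would use processes, but the scheduling idea is the same.

```python
from concurrent.futures import ThreadPoolExecutor

# (a*b) and (c*d) are independent, so two workers can run them at the
# same time; the final addition depends on both intermediate results,
# so whichever worker frees up first performs it.
def compute(a, b, c, d):
    with ThreadPoolExecutor(max_workers=2) as pool:
        ab = pool.submit(lambda: a * b)   # "core 1"
        cd = pool.submit(lambda: c * d)   # "core 2"
        return ab.result() + cd.result()  # third operation, on a free worker

print(compute(3, 4, 5, 6))  # → 42
```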

Applications that "love" and use multithreading: archivers, video players and encoders, antiviruses, defragmenters, graphics editors, browsers, Flash.

Multithreading "lovers" also include operating systems such as Windows 7 and Windows Vista, as well as many Linux-kernel-based OSes, which run noticeably faster on a multi-core processor.

For most games, a dual-core processor at a high frequency is quite sufficient. Now, however, more and more games are being released that are designed for multi-threading. Take sandbox games like GTA 4 or Prototype, in which a dual-core processor clocked below 2.6 GHz leaves you uncomfortable, with the frame rate dropping below 30 frames per second. Though in such cases the most likely cause is "weak" optimization of the games, lack of time, or the clumsy hands of those who ported them from consoles to PC.

When buying a new processor for gaming, pay attention to processors with four or more cores. Still, you should not dismiss top-tier dual-core processors; in some games they sometimes fare better than some multi-core ones.

Processor cache memory.

Cache memory is a dedicated area of the processor chip in which intermediate data between the processor cores, RAM and other buses is stored and processed.

It runs at a very high clock frequency (usually the frequency of the processor itself), has very high throughput, and the processor cores work with it directly (L1).

When the cache runs short, the processor can sit idle in time-consuming tasks, waiting for new data to arrive for processing. Cache memory also stores frequently repeated data that can be retrieved quickly when needed, without unnecessary recomputation, so the processor does not waste time on it again.
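A loose software analogy of that behavior is memoization: results of repeated work are kept close at hand instead of being recomputed. A minimal sketch, with a hypothetical stand-in for the expensive computation:

```python
from functools import lru_cache

# Results of repeated work are kept close at hand, so a repeated
# request is answered instantly instead of being recomputed.
@lru_cache(maxsize=None)
def expensive(x):
    return x * x  # stand-in for a costly computation

expensive(7)   # computed and stored
expensive(7)   # served straight from the cache
info = expensive.cache_info()
print(info.hits, info.misses)  # → 1 1
```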

Performance is also enhanced by the fact that the cache memory is unified, so all cores can use its data equally. This opens up additional opportunities for multithreaded optimization.

This approach is now used for the Level 3 cache. Intel previously shipped processors with a unified Level 2 cache (Core 2 Duo E7***, E8***), which is where this method of boosting multithreaded performance first appeared.

When overclocking, the cache memory can become a weak point, preventing the processor from being pushed beyond its maximum error-free frequency. The upside is that it runs at the same frequency as the overclocked processor.

In general, the larger the cache memory, the faster the CPU. But which applications benefit exactly?

All applications that work with lots of floating-point data, instructions, and threads make heavy use of the cache. Archivers, video encoders, antiviruses, graphics editors, and the like are all heavy cache users.

A large cache also benefits games, especially strategies, driving simulators, RPGs, sandbox games, and any game with many small details, particles, geometry elements, information flows, and physics effects.

Cache memory plays a very important role in unlocking the potential of systems with 2 or more video cards. Part of the load falls on the processor cores interacting with each other and handling the streams of several video chips. This is exactly where cache organization matters, and a large Level 3 cache is very useful.

Cache memory is always equipped with error protection (ECC); if errors are detected, they are corrected. This is very important, because a small error in the cache, once processed, can snowball into a huge, continuous error that crashes the entire system.

Proprietary technologies.

Hyper-Threading (HT) –

the technology was first used in Pentium 4 processors, but it did not always work correctly and often slowed the processor down more than it sped it up. The reason was the overly long pipeline and an immature branch prediction system. The technology is used by Intel and has no direct analogues so far, unless you count what AMD engineers implemented in the Bulldozer architecture as one.

The principle is that each physical core gets two computing threads instead of one. So a 4-core processor with HT (such as the Core i7) exposes 8 virtual threads.

The performance gain comes from the fact that data can enter the pipeline in the middle, not only at the beginning. If processor blocks capable of performing the required action are idle, they receive the task. The gain is not the same as from real physical cores, but it is comparable (~50-75%, depending on the type of application). In rare cases, HT affects performance negatively. This is due to poor application optimization for the technology: failing to account for the "virtual" threads and lacking mechanisms to balance the load across threads evenly.

Turbo Boost is a very useful technology that increases the operating frequency of the most heavily used processor cores depending on their load. It helps when an application cannot use all 4 cores and loads only one or two: their operating frequency increases, partially compensating for the lost performance. AMD's analogue of this technology is Turbo Core.

SSE, 3DNow! instructions. Designed to speed up the processor in multimedia computations (video, music, 2D/3D graphics, etc.), and also to accelerate programs such as archivers and image- and video-editing tools (provided those programs support these instructions).

3DNow! is a fairly old AMD technology containing additional instructions for processing multimedia content, beyond the first version of SSE*.

*Namely, streaming processing of single-precision real numbers.

Having the latest version is a big plus: with proper software optimization, the processor performs certain tasks more efficiently. AMD processors have similarly named, but slightly different, instruction sets.

* Example: SSE 4.1 (Intel) and SSE 4A (AMD).

Moreover, these instruction sets are not identical; they are analogues with slight differences.

Cool'n'Quiet, SpeedStep, CoolCore, Enhanced Halt State (C1E), etc.

These technologies reduce the processor frequency at low load by lowering the multiplier and core voltage, disabling part of the cache, and so on. This lets the processor heat up far less, consume less energy, and make less noise. If power is needed, the processor returns to its normal state in a split second. In default BIOS settings these features are almost always enabled; if desired, they can be disabled to reduce possible "freezes" when switching states in 3D games.
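The frequency change behind these power-saving states is simple multiplier arithmetic. The numbers below are illustrative, not taken from any specific CPU:

```python
# Core clock = base clock (BCLK) x multiplier. Power-saving states
# drop the multiplier (and voltage); load restores it.
BCLK_MHZ = 100
LOAD_MULT, IDLE_MULT = 36, 8

def core_clock(multiplier, bclk=BCLK_MHZ):
    # Returns the resulting core frequency in MHz.
    return bclk * multiplier

print(core_clock(LOAD_MULT), "MHz under load")  # 3600 MHz
print(core_clock(IDLE_MULT), "MHz at idle")     # 800 MHz
```

The base clock stays fixed; only the multiplier moves, which is why the switch back to full speed is nearly instant.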

Some of these technologies control fan speeds in the system. For example, if the processor does not need increased heat dissipation and is not loaded, the fan speed is reduced (AMD Cool'n'Quiet, Intel SpeedStep).

Intel Virtualization Technology and AMD Virtualization.

These hardware technologies, with the help of special software, allow several operating systems to run at once without any serious loss of performance. They are also used for the proper operation of servers, which often run more than one OS.

Execute Disable Bit and No eXecute Bit are technologies designed to protect the computer from virus attacks and software errors that can crash the system through buffer overflows.

Intel 64, AMD64, EM64T – this technology allows the processor to work both in a 32-bit OS and in a 64-bit OS. For the average user, the practical benefit of a 64-bit system is that it can use more than 3.25 GB of RAM. On 32-bit systems, using a larger amount of RAM is impossible because of the limited amount of addressable memory*.

Most 32-bit applications can be run on a system with a 64-bit OS.

* And no wonder: back in 1985, no one could even imagine amounts of RAM that were so gigantic by the standards of the time.
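The 3.25 GB ceiling follows directly from pointer width; a quick check of the arithmetic:

```python
# A 32-bit pointer can form 2**32 distinct addresses, one byte each.
addressable_bytes = 2 ** 32
print(addressable_bytes / 2 ** 30, "GiB")  # 4.0 GiB of address space

# Devices (video memory, PCI ranges, etc.) are mapped into that same
# space, so roughly 3.25 GiB is left for RAM on a 32-bit Windows.
# A 64-bit pointer raises the theoretical ceiling astronomically:
print(2 ** 64 // 2 ** 30, "GiB")
```

So the limit is not a Windows quirk but a property of 32-bit addressing itself.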

Additionally.

A few words about the manufacturing process.

This point deserves close attention. The thinner the process node, the less energy the processor consumes and, as a result, the less it heats up. Among other things, it also gains a higher safety margin for overclocking.

The more refined the process node, the more can be "packed" into the chip (and not only that), increasing the processor's capabilities. Heat dissipation and power consumption also decrease proportionally, thanks to lower current leakage and a smaller core area. You might notice a tendency for power consumption to rise with each new generation of the same architecture on a new process node, but that is not really the case. Manufacturers are simply chasing even higher performance and stepping over the heat-dissipation line of the previous processor generation by increasing the number of transistors out of proportion to the process shrink.

Video core built into the processor.

If you don't need a built-in video core, don't buy a processor with one. You will only get worse heat dissipation, extra heating (not always), worse overclocking potential (not always), and money overpaid.

Moreover, the video cores built into processors are only suitable for loading the OS, surfing the Internet, and watching videos (and not high-quality ones at that).

Market trends are changing, and the opportunity to buy a powerful Intel processor without a video core is becoming rarer. The policy of imposing the built-in video core appeared with Intel's Sandy Bridge processors, whose main innovation was an integrated core on the same process node. The video core sits with the processor on a single die, not merely in one package as in previous Intel generations. For those who do not use it, the disadvantages are a certain overpayment for the processor and an off-center heat source relative to the heat-spreader cover. But there are also advantages: the otherwise idle video core can be used for very fast video encoding via Quick Sync technology, coupled with special software that supports it. In the future, Intel promises to broaden the use of the built-in video core for parallel computing.

Sockets for processors. Platform lifespan.


Intel has a harsh policy for its platforms. The lifespan of each (the start and end dates of processor sales for it) usually does not exceed 1.5 - 2 years. In addition, the company develops several platforms in parallel.

AMD has the opposite, compatibility-oriented policy. Its AM3 platform accepts all future-generation processors that support DDR3. Even when the platform moves on to AM3+ and beyond, either new processors for AM3 will still be released, or new processors will remain compatible with old motherboards, allowing a wallet-friendly upgrade by changing only the processor (without replacing the motherboard, RAM, etc.) and flashing the motherboard. The only incompatibility nuances arise when the memory type changes, since a different memory controller built into the processor is then required; such compatibility is limited and not supported by all motherboards. But overall, for the budget-conscious user, or anyone not accustomed to replacing the whole platform every 2 years, the choice of processor manufacturer is clear: AMD.

CPU cooling.

A boxed processor comes with a standard cooler that will just barely cope with its task. It is a piece of aluminum with a not very large dissipation area. Efficient coolers with heat pipes and fins attached to them are designed for highly effective heat removal. If you do not want to hear extra fan noise, consider an alternative, more efficient cooler with heat pipes, or a closed- or open-loop liquid cooling system. Such cooling systems will additionally make processor overclocking possible.

Conclusion.

We have covered all the important aspects affecting processor performance. To repeat, you should pay attention to:

  • Select manufacturer
  • Processor architecture
  • Technical process
  • CPU frequency
  • Number of processor cores
  • Processor cache size and type
  • Technology and instruction support
  • High-quality cooling

We hope this material will help you understand and decide on choosing a processor that meets your expectations.

Many players mistakenly consider a powerful video card the main thing in games, but this is not entirely true. Of course, many graphics settings do not affect the CPU at all and load only the video card, but that does not mean the processor sits unused during the game. In this article we will look in detail at how the CPU works in games, explain why a powerful one is needed, and show how it influences games.

As you know, the CPU carries commands from external devices into the system, performs operations, and transfers data. The speed of these operations depends on the number of cores and other processor characteristics. All of its functions come into play the moment you launch any game. Let's look at a few simple examples:

Processing user commands

Almost all games use external peripherals in some way, be it a keyboard or a mouse. They control vehicles, characters, or certain objects. The processor receives commands from the player and passes them to the program itself, where the programmed action is carried out almost without delay.

This task is one of the largest and most complex, which is why there is often a delay in response to movement when the game lacks processor power. This does not affect the frame rate, but it makes the game almost impossible to control.

Generating random objects

Many items in games do not always appear in the same place. Take the ordinary garbage in GTA 5 as an example. The game engine, using the processor, decides to generate an object at a certain time in a specified location.

That is, objects are not random at all; they are created according to certain algorithms thanks to the processor's computing power. Moreover, when there is a large variety of random objects, the engine passes instructions to the processor about what exactly needs to be generated. This means that a more diverse world with a large number of non-persistent objects requires high CPU power to generate what is needed.
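The "algorithmic randomness" described above typically boils down to a seeded pseudo-random generator: the same seed and location produce the same props every time. A hedged Python sketch, with function and prop names invented for illustration rather than taken from any real engine:

```python
import random

PROPS = ["trash bag", "bottle", "newspaper", "crate"]

def spawn_props(world_seed, cell, count=3):
    # Seed a private generator from the world seed and the map cell,
    # so the "random" props are reproducible for that location.
    rng = random.Random(hash((world_seed, cell)))
    return [rng.choice(PROPS) for _ in range(count)]

a = spawn_props(2013, (10, 7))
b = spawn_props(2013, (10, 7))
print(a == b)  # True: same seed + cell -> identical props
```

Different cells or seeds yield different (but still reproducible) prop lists, which is exactly how engines make a varied world without storing every object.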

NPC behavior

Let's consider this parameter using the example of open-world games, where it is most apparent. NPCs are all characters not controlled by the player; they are programmed to do certain things when certain stimuli appear. For example, if you open fire in GTA 5, the crowd will simply scatter in different directions; the NPCs will not perform individual actions, because that would require a large amount of processor resources.

Moreover, in open-world games, random events never occur where the main character cannot see them. For example, no one will play football on a sports field if you are standing around the corner and cannot see it. Everything revolves around the main character: the engine will not simulate anything we cannot see from our position in the game.

Objects and environment

The processor needs to calculate the distance to objects and their boundaries, generate all the data, and hand it to the video card for display. Calculating contacts between objects is a separate task that requires additional resources. Then the video card takes over the built environment and finishes the small details. With a weak CPU, games sometimes fail to load objects fully: the road disappears and buildings remain plain boxes. In some cases, the game simply pauses for a while to generate the environment.

After that, everything depends on the engine. In some games, car deformation and the simulation of wind, fur, and grass are handled by the video card, which significantly reduces the processor load. Sometimes these tasks must be performed by the processor instead, and that is when frame drops and freezes occur. If particles such as sparks, flashes, and glints on water are handled by the CPU, they most likely follow a fixed algorithm: the fragments of a broken window always fall the same way, and so on.

What settings in games affect the processor?

Let's look at a few modern games and find out which graphics settings affect the processor. Four games, each built on its own engine, take part in the tests. To make the tests as objective as possible, we used a video card that these games do not load to 100%. We measure the changes in identical scenes using a program overlay.
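Frame rate, the metric the overlay reports, is just the reciprocal of frame time. A minimal sketch of the arithmetic (not the overlay's actual code):

```python
def avg_fps(frame_times_ms):
    # Average FPS over a window: frames rendered / total time in seconds.
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

# Two smooth 16.7 ms frames plus one 33.3 ms stutter frame:
print(round(avg_fps([16.7, 16.7, 33.3]), 1))  # ~45.0 fps
```

Note how a single slow frame drags the average down, which is why minimum FPS and "dips" matter as much as the average in the tests below.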

GTA 5

Changing the number of particles, lowering texture quality, or reducing the resolution does not help the CPU at all. A frame-rate gain appears only after reducing the population density and draw distance to the minimum. There is no need to drop every setting to minimum, since in GTA 5 the video card takes over almost all the work.

By reducing the population, we reduced the number of objects with complex logic, and by reducing the draw distance, the total number of objects displayed in the game. That is, buildings no longer turn into boxes when we move away from them; they are simply absent.

Watch Dogs 2

Post-processing effects such as depth of field, blur, and similar filters did not increase the frame rate. However, we got a slight gain after lowering the shadow and particle settings.

In addition, the picture became slightly smoother after lowering terrain and geometry detail to minimum. Reducing the screen resolution gave no positive result. Dropping every value to minimum produces exactly the same effect as lowering just the shadow and particle settings, so there is little point in it.

Crysis 3

Crysis 3 is still one of the most demanding computer games. It was developed on its own engine, so keep in mind that settings which affected image smoothness here may not give the same result in other games.

Minimum settings for objects and particles significantly raised the minimum FPS, although dips were still present. Performance was also affected by reducing the quality of shadows and water. Dropping all graphics parameters to minimum eliminated the sudden dips, but had virtually no effect on overall smoothness.

A processor's performance largely depends on the number of cores it contains. Therefore, many users want to know how to find out the number of processor cores. If this question interests you too, this article should help.

How to find out the number of cores in a processor using Windows

The easiest way to find out the number of cores is to look up the processor model and then check on the Internet what it is equipped with. To do this, open the "View basic information about your computer" window. This window can be opened in several ways:

  • Open the Start menu and go to “Control Panel”. From there, open the “System and Security” section, then the “System” subsection;
  • Right-click the “My Computer” icon and select “Properties”;
  • Or simply press the Win+Break key combination.

After opening this window, pay attention to the processor name listed there.

Enter the name of this processor into a search engine and go to the manufacturer's official website.

This will take you to a page with the processor's specifications. There, find the information about the number of cores.

If you have Windows 8 or Windows 10, you can find the number of processor cores in the Task Manager (key combination CTRL-SHIFT-ESC) on the "Performance" tab.

On Windows 7 and older versions of Windows, the Task Manager does not display the number of cores directly. Instead, it shows a separate load graph for each core. If you have an AMD processor, the number of such graphs equals the number of cores.

But if you have an Intel processor, the number of graphs cannot be trusted, since the processor may use Hyper-Threading technology, which makes the number of graphs twice the actual number of physical cores.
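The same logical-vs-physical distinction can be checked programmatically. Python's `os.cpu_count()` reports logical processors, so with Hyper-Threading it shows twice the physical core count. A hedged sketch: the `/proc/cpuinfo` fallback is Linux-only and does not exist on Windows.

```python
import os

logical = os.cpu_count()  # logical CPUs: includes Hyper-Threading threads
print("Logical processors:", logical)

# On Linux, distinct (physical id, core id) pairs approximate the
# number of physical cores; on Windows this file is absent.
try:
    phys, cores = None, set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                phys = line.split(":")[1].strip()
            elif line.startswith("core id"):
                cores.add((phys, line.split(":")[1].strip()))
    print("Physical cores (approx.):", len(cores) or logical)
except OSError:
    print("Physical cores: check CPU-Z or the vendor's spec page")
```

If the two numbers differ by a factor of two, Hyper-Threading (or AMD's SMT) is enabled.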

How to find out the number of processor cores using special programs

You can also use special programs to view your computer's characteristics. The best choice here is the CPU-Z program. Run it on your computer and look at the "Cores" value displayed at the bottom of the window on the "CPU" tab.

This value corresponds to the number of cores in your processor.