Intel integrated graphics performance. Integrated Intel HD Graphics

Part 24: Third and fourth generation Intel HD Graphics

So far we have become acquainted with the performance of the current generation of integrated Intel graphics only through its senior modifications or in laptop versions: the last article devoted to Celeron, Pentium and Core i3 was published more than a year ago and was therefore limited to Sandy Bridge and Ivy Bridge. From a potential buyer's point of view this is backwards. The integrated graphics core in a top-end desktop processor is usually bought by people to whom its characteristics hardly matter, so by and large even the HDG 2500 is sufficient; if it is not, a discrete video card is simply added, which owners of Core i7 or Core i5 systems can easily afford. And in senior laptop models manufacturers often install a discrete GPU simply so that the spec sheet has one - even though it frequently turns out to be comparable in performance to the integrated core, and it is not always possible to fend off such "care".

In the budget segment, though, everything is different. Of course, a good gaming computer can be built around a Pentium (not to mention a Core i3); in fact, if we limit ourselves to single-player games, it can be not merely decent but genuinely good, as we have already seen. Serious performance requirements, however, usually mean an expensive video card and no skimping on the rest of the system, so at that point there is little reason to economize on the processor either (especially since, as we have written more than once, all consumer processors are currently quite inexpensive). Who buys the cheapest models, then? Mostly those who have to count every dollar (or, more often, ruble or hryvnia), for whom a decent discrete video card is not even under consideration, or is deferred to some indefinite future. And, as has been shown more than once, there is no point in buying an "indecent" one at all - it is wasted money that still gives no qualitative advantage over integrated graphics. In that case the characteristics of the integrated GPU can become decisive, simply because in interactive applications (games included) quantitative differences turn into qualitative ones. It does not much matter how many minutes it ultimately takes to import a batch of photos into a database or to process them: 15 minutes is better than 30, of course, but the job gets done either way (even if you have to drink an extra cup of coffee or find something else to do in the meantime). But 15 (or even 20-25) versus 30 frames per second in a game is a qualitative difference: in the second case the game is playable at the chosen settings, in the first it is not yet. So the question is fundamental, and the answer interests many. Today we will look for it.

Testing: goals and objectives, configurations, methodology

This fairly long section will be common to all articles in the series: unfortunately, explaining something once is not enough for everyone :) Moreover, not every reader will carefully study every article in the series - the likelihood of someone "starting from the middle" or limiting themselves to one or two installments is very high, and we are fully aware of it. So we apologize in advance to those who object to constant repetition of the same truths. Which, as is well known, is the mother of learning :)

First and foremost, keep in mind that in this testing we do not deal with individual components - we test the systems built from them. Processors are tested separately in the "main line" of articles, always in a fixed configuration: a powerful video card, plenty of RAM, and so on. We also test video cards directly in gaming applications on the site, with monthly updates: within i3D-Speed, every video card (from simple budget models to multi-GPU solutions) is tested on a powerful configuration chosen to be sufficient for a graphics subsystem of any level. From the point of view of traditional "component" testing, we believe these two lines of articles are quite sufficient.

But to make practical use of the results obtained within them, a connecting link is needed. Applications whose performance does not depend on the CPU at all simply do not exist. There are, of course, cases where performance is limited by other components, but this very often happens at different levels on different processors. Games and similar applications depend heavily on GPU performance, yet they also put a considerable load on the CPU. If a task turns out to be too "light" for the graphics, the processor alone determines everything; if it is "heavy", the processor's influence becomes minimal and can sometimes be ignored altogether. Between these extremes both components matter, and their relative importance can shift in ways that are unknown a priori. That one processor is faster than another with a powerful video card does not mean the ratio will hold when that card is replaced with a budget one: in some modes it will, in others it will change, in yet others the two will simply end up equal. Video cards have the same problem - how "sufficient" a given CPU is varies with the GPU and its operating mode.
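To make this interplay concrete, here is a minimal toy model (our own illustration, not part of the actual methodology; all numbers are invented): the frame rate is limited by whichever of the CPU and GPU needs more time per frame, which is enough to show why a CPU ranking obtained with a fast video card need not survive a switch to a weak one.

```python
# Toy bottleneck model: frame time is set by the slower of CPU and GPU work.
def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    """FPS when the frame is gated by the slower of the two components."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

cpus = {"fast CPU": 8.0, "slow CPU": 14.0}   # ms of CPU work per frame (invented)
gpus = {"fast GPU": 6.0, "weak IGP": 40.0}   # ms of GPU work per frame (invented)

for gpu_name, gpu_ms in gpus.items():
    for cpu_name, cpu_ms in cpus.items():
        print(f"{cpu_name} + {gpu_name}: {fps(cpu_ms, gpu_ms):.1f} FPS")

# With the fast GPU the two CPUs differ (125.0 vs 71.4 FPS); with the weak IGP
# both collapse to the same 25.0 FPS - the processor no longer matters.
```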

It would seem enough simply to test every "processor + video card" combination. The solution is obvious and correct in theory, but unworkable in practice, since the amount of work grows multiplicatively: 40 video cards on one system is 40 test configurations; 40 processors with one video card is another 40; combine them and you get 1600 test configurations. If all that work could actually be done, the results would be truly invaluable - but by the time they were obtained they would be obsolete and of no use to anyone (looking ahead: even the "simplified" method we have chosen allows no more than a dozen configurations per working week, so 1600 is roughly a three-year job on a single test bench).
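The arithmetic behind that estimate takes only a few lines; the figures (40 of each component, about ten configurations per working week) are the ones quoted above:

```python
# Cost of exhaustive "every CPU x every GPU" testing with the figures above.
processors = 40
video_cards = 40
configs_per_week = 10                       # realistic pace on one test bench

total_configs = processors * video_cards    # 1600 combinations
weeks = total_configs / configs_per_week    # 160 working weeks
print(total_configs, "configurations ->", weeks, "working weeks (about three years)")
```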

But the problem can be approached from the other side: instead of trying to find exact answers to every question, limit ourselves to qualitative assessments. At the very least, for some processors we can try to probe the lower bound of graphics performance - which is the integrated GPU, since it has recently become an integral part of most modern processors, and the youngest discrete adapters are at best no better. It is also many times simpler and slower than the top solutions: the spread of characteristics on the graphics market is still wider than on the processor market. With this choice of hardware the list of test configurations and modes shrinks considerably. The results will be most relevant to buyers of budget computers anyway: with a $1000 system unit you can spend 10% of that sum on a video card at least somewhat more powerful than the entry level and not bother with integrated video at all. So there is no need to routinely test mid-range and higher processors with weak video; we will do so occasionally, to have reference points, but only occasionally. Systems of this class also do not need tests in any extreme modes such as 2560x1600 with heavy full-screen anti-aliasing :) In short, the work can be simplified considerably.

The workload is reduced further by the fact that some 90% of typical processor applications do not depend on video performance at all. The previous series used the full set of programs, so its four parts are sufficient proof of this; for those who still need more, there is nothing we can do :) Be that as it may, GPGPU remains little more than an interesting experiment, and all the work in this direction shows that for systems with weak GPUs it has no particular relevance: powerful video cards really can speed up "suitable" tasks, but when you try to squeeze something worthwhile out of entry-level discrete cards, the effort is largely wasted - more complex algorithms and extra data transfers eat up all the potential gain. That does not mean we will pass by any interesting and popular application able to make real use of GPU resources; we will gladly add such a program to this experimental methodology. The main problem so far is that nothing of the kind turns up: "interesting" programs exist, but for one reason or another they do not become popular. Video transcoding, over which so many lances have been broken, is in practice needed regularly by few people, and the quality produced by enthusiast-developed programs leaves much to be desired (to put it mildly). Moreover - an irony of fate - it is performed fastest by the specialized hardware units of integrated Intel GPUs, not by general-purpose pipelines at all.

Thus, not many programs remain that make sense to run on systems with weak graphics. In fact, the "standard" methodology shrinks to just five groups, three of them experimental:

  • Interactive work in three-dimensional packages. No changes.
  • Mathematical and engineering calculations. MAPLE and MATLAB have been dropped, since they do not display anything on screen, but the remaining three applications are of interest to readers, judging by the feedback (it is clearly unwise to economize this heavily on a workstation, but you may suddenly have to work on a weak computer). In the end the composition of these two groups turns out to be the same; the difference is that in the first case the "graphics" score of the corresponding test is taken and in the second the "processor" score - as testing practice has shown, both in fact depend on both the processor and the video card, which is exactly what we need.
  • Games. No changes.
  • Games at low resolution and quality settings. Within the "main" methodology this group is practically unused and does not affect the overall score, but it was created specifically for systems with weak graphics - primarily mobile ones, which are not so different from what we are testing in this series.
  • High-definition video playback. Needs no special comment.

Since there are few groups and all of them are rather specific, we will not compute an overall rating - we are interested primarily in the raw results. As usual, they are fully compatible with those obtained on the configurations of the main test line, since we already know for certain that video cards have no effect on the other applications; if you wish, you can simply substitute the corresponding block into the "big" table, which we publish openly. Note, however, that the scores of this test are not compatible with the main line: here the reference system, taken as the unit of the scale, is a Celeron G540 with a Radeon HD 6450 512 MB GDDR3, so for your own calculations you should download the table in Microsoft Excel format, where all results are given both converted to points and in "natural" form.
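For those who want to repeat the conversion themselves from the downloadable table, the scoring is a plain normalization against the reference system; a minimal sketch (the field names and numbers are ours, invented for illustration, not those of the actual spreadsheet):

```python
# Converting "natural" results into points, with the reference system
# (Celeron G540 + Radeon HD 6450 512 MB GDDR3) taken as 100 in each group.
reference = {"games_fps": 22.0, "3d_packages_score": 14.5}
candidate = {"games_fps": 33.0, "3d_packages_score": 20.3}

points = {group: round(100.0 * candidate[group] / reference[group], 1)
          for group in reference}
print(points)   # {'games_fps': 150.0, '3d_packages_score': 140.0}
```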

Test bench configuration

CPU                              | Pentium G2140 | Pentium G3430 | Core i3-3245  | Core i3-4130 | Core i3-3250  | Core i3-4330
Core (codename)                  | Ivy Bridge DC | Haswell DC    | Ivy Bridge DC | Haswell DC   | Ivy Bridge DC | Haswell DC
Cores/threads                    | 2/2           | 2/2           | 2/4           | 2/4          | 2/4           | 2/4
Core frequency, GHz              | 3.3           | 3.3           | 3.4           | 3.4          | 3.5           | 3.5
L3 cache, MiB                    | 3             | 3             | 3             | 3            | 3             | 4
RAM                              | 2×DDR3-1600 (all processors)
Video core                       | HD Graphics   | HD Graphics   | HDG 4000      | HDG 4400     | HDG 2500      | HDG 4600
Graphics processors              | 24            | 40            | 64            | 80           | 24            | 80
Video frequency (std/max), MHz   | 650/1050      | 350/1100      | 650/1050      | 350/1150     | 650/1050      | 350/1150
TDP, W                           | 55            | 53            | 55            | 54           | 55            | 54

Desktop Celerons based on the Haswell microarchitecture were announced only recently and have not yet reached us, and Bay Trail is a different story altogether: its BGA packaging and TDP of up to 10 W make those models competitors to CULV processors at best, not to "standard desktop" platforms. Pentium and Core i3 in various modifications, however, are widely available for both LGA1155 and the new LGA1150. Accordingly, three pairs of processors take part in our testing: two Pentiums and four Core i3s. With the Pentiums everything is simple - we took two processors with equal compute-core clock speeds: the old G2140 and the new G3430. Note that the graphics core of the junior models is still called simply HD Graphics, even though this is already the fourth GPU to carry that name; it differs from the previous two not only architecturally but also in the number of pipelines, which has grown from 6 to 10. So there will certainly be a difference relative to Ivy Bridge, while comparison with the Pentiums and Celerons on Sandy Bridge that are still on sale is pointless - the functionality is too different, as we noted a little over a year ago.

There is no confusion with names in the Core i3 family; in fact, there is more order than before. Previously the company offered both processors with the HDG 2500 core (the most widespread in desktop Ivy Bridge) and several modifications with the HDG 4000; at equal selling prices, the models with the weaker graphics core always had a higher compute-core frequency. The new generation is split into two families. The heirs of the old Core i3 are the 41x0 line, similar to them in frequency and cache size and equipped with the HDG 4400. The relative novelty is the more expensive 43x0 line, which carries not only the most powerful GPU among "socketed" processors, the HDG 4600, but also a full 4 MiB of L3 cache - as in the first-generation Core i3 or in mobile dual-core Core i7. In short, the positioning of the new processors has become simpler and more logical: pay more, get more - in every respect. There are also clock-frequency overlaps with the previous generation, which gave us two evenly matched pairs: 3245 vs 4130 and 3250 vs 4330.

CPU                              | A6-6400K        | A8-6600K
Core (codename)                  | Richland        | Richland
Modules/threads                  | 1/2             | 2/4
Core frequency (std/max), GHz    | 3.9/4.1         | 3.9/4.2
L3 cache, MiB                    | -               | -
RAM                              | 2×DDR3-1866     | 2×DDR3-1866
Video core                       | Radeon HD 8470D | Radeon HD 8570D
Graphics processors              | 192             | 256
Video frequency (std/max), MHz   | 800             | 844
TDP, W                           | 65              | 100

The fourth pair of participants are AMD APUs - cheaper than the Intel processors, but... As we found out earlier, the graphics performance of the Core i3-3225 (with HDG 4000) was roughly equivalent only to an A4 of the Trinity line. The latter has since been replaced in the junior segment by Richland (Kaveri-based A8s will have to wait a little longer) with a modest performance increase. Intel's growth was larger, but even the company's top desktop model last summer could not reach the level of a modern A8. The drivers have been updated since then, with some curious effects, yet we were still confident a priori that the A8 would remain out of reach for the junior Intel processors. The only questions are by how much, and how the graphics performance compares with the more affordable A6. The A4 is of no interest: as noted above, that level of graphics performance was already available with the old Core i3. The latter may be noticeably more expensive, but the CPU performance is also very different, so you simply have to decide which matters more. We hope today's testing will make that choice easier.

Another guest from a different world is a video card based on the GeForce GT 630. We already tested a card with this name a year ago, but only the name is the same: the old products were based on the GF108, while the new ones use the GK208 chip. NVIDIA claims it is a new design, but in fact the GPU looks very much like a cut-down GK107 (previously used in the GT 640 and above) - and cut down programmatically at that: both have the same die area and partially matching pinout. Why only partially? Because the GK208 lacks one memory channel, and its bus interface is PCIe x8 rather than x16. So at comparable frequencies the GT 630 is clearly no rival to the old GT 640, despite the same number of shader processors. Against the old GT 630 DDR3, though, things should not be so bad: the "narrow" memory bus is partially compensated by a higher memory clock (1800 MHz versus the official 1600 MHz, which in real products often dropped to 1400 MHz), and the chip's arithmetic capabilities are much higher - at GT 640 level. Whether such a level is needed in a modern computer at all, or whether integrated video will do, is another question :) What matters is that GK208-based cards are compact, come entirely with passive cooling (the GPU does not run hot), and in price can compete with the GT 610/620, whose performance is negligible. So these solutions do have a niche - at the very least, upgrading old compact systems. We will determine their exact performance level using an ASUS card with 2 GB of DDR3 (the choice over the 1 GB modification is inconsequential - memory capacity makes no difference on cards of this level), paired with a Core i3-4330 so that the processor is definitely not a limiting factor.
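Whether the faster memory really does compensate for the halved bus width is easy to estimate on paper; a back-of-the-envelope calculation with the transfer rates quoted above:

```python
# Peak DDR3 bandwidth = bus width in bytes x effective transfer rate.
def bandwidth_gb_s(bus_bits, mt_per_s):
    return bus_bits / 8 * mt_per_s / 1000.0

print(bandwidth_gb_s(64, 1800))    # new GT 630 (GK208):          14.4 GB/s
print(bandwidth_gb_s(128, 1600))   # old GT 630 DDR3, official:   25.6 GB/s
print(bandwidth_gb_s(128, 1400))   # old GT 630, many real cards: 22.4 GB/s
# Even against the downclocked old cards the new one has ~36% less bandwidth,
# so the higher memory clock only partially offsets the narrow bus.
```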

Interactive work in 3D packages

As we have already written, in driver version 9.18.10.3257 Intel's programmers fixed another batch of errors, with a curious effect: even a Pentium on Ivy Bridge (gaining 20% over last year's results) now reaches the level of any AMD APU (with the possible exception of Kaveri, but those models are only just reaching retail). Moreover, this is the level of NVIDIA's junior gaming discrete chips, even when those are paired with a faster processor. In short, there is no longer any reason to fear integrated Intel graphics here - especially after the release of Haswell, with its even higher level of performance. And, as we can see, installing a junior gaming discrete card (practically mandatory for such programs in the Sandy Bridge days) actually reduces performance noticeably, so it is better not to.

Mathematical and engineering calculations

Here HD Graphics was never much of a problem even before, since the results depend mainly on the single-threaded performance of the processor, which puts Intel devices at an advantage - and now that advantage has only become more pronounced. Note, by the way, that a discrete video card does improve the results, simply because it competes with the CPU for neither cache memory nor thermal budget. The gain, however, is tiny, and together with the drop in the "graphics" score it does not change the conclusion: if you buy a discrete video card for professional programs, it should certainly not be a junior gaming one.

Aliens vs. Predator

As expected, the third-generation (nameless) HDG and the HDG 2500 are identical - we will see this more than once, so we will not dwell on it in future. The 4400 is only slightly faster than the 4000, which is forgivable: a junior solution of the new line against the former flagship. The HDG 4600 almost reaches A6 performance - a noticeable step forward, because, as we said, the HDG 4000 was only enough to fight the A4. And the gap between the two nameless HDGs is even larger. In practice, though, it all falls apart in this mode because even the A8-6600K (faster than the GT 630, by the way) is still not enough for a comfortable frame rate. So the settings will have to be lowered.

At minimum settings, of course, everything flies - except the junior graphics configurations of Ivy Bridge, which even in this mode barely cleared the 30 FPS mark. It is good that the new graphics at least have no such problems. Only the Pentium lags behind the discrete GT 630, and only slightly at that, so installing such a card in a computer based on any new Core i3 is definitely a bad idea. The APUs, meanwhile, lead everyone by a wide margin. That result was not unexpected, although we had hoped for at least rough parity between the senior Core i3 and the much cheaper A6; we did once see lower results even from very old A8s, but AMD's engineers and programmers have not been idle this past year either :)

Batman: Arkham Asylum GOTY Edition

The high-quality mode of this game (as configured in our testing) "fell" to integrated Intel graphics with the appearance of the HDG 4000, and the company's newer GPUs are naturally even faster - now even a Pentium is enough to reach 30 FPS. An achievement that pales, however, against the fact that even the old A4-5300 or the truly ancient A6-3500 is still faster: AMD has set the bar high, no question. There is nothing surprising in its APUs already pushing junior discrete cards out of the market, while Intel, for all its rapid progress, remains a notch below :) It is also already clear that there is no longer any point in putting GT 630-class solutions (let alone anything weaker) into systems based on Intel's new processors - there will be no fundamental gain in performance.

With low picture quality and an old graphics engine, the result is mostly a comparison of the processors themselves, with minor variations: the HDG 2500 (and its relatives in the budget families) is still too weak a solution, and a discrete card interferes less with the CPU running at full tilt. In general, though, this mode was playable even on a Celeron G555, and the progress made since then means we no longer have to limit ourselves so strictly.

Crysis: Warhead x64

An example of the opposite situation: so far no integrated graphics solution can handle this game at the chosen settings, and despite the steady growth in performance it is unlikely that much will change in the coming year. Not surprising, since even a discrete Radeon HD 7750 GDDR5 copes with it with virtually no headroom. But if we look not only at the absolute results but at the dynamics of their growth, the picture changes somewhat. Modern Pentiums have already reached a level that just a year ago was available only to some Core i3 modifications, and the senior representatives of the latest graphics core now sit at the level of the A6-family APUs, or of discrete video cards of the recent past such as the Radeon HD 6670 DDR3 - or the thoroughly modern GeForce GT 630. In other words, the boundary between the senior (not even top) integrated GPUs and the junior discrete ones is getting ever blurrier.

As soon as picture quality drops to the level of games from ten years ago, it turns out that almost anything will do, which agrees with common sense. Such modes are not very indicative either, of course, but as we have said more than once, they were chosen at the time in an attempt to coax acceptable performance out of lower-class graphics - for example, the graphics integrated into low-power Celerons three years ago. Still, some interesting information can be squeezed out of them even now. In particular, the progress of Intel's drivers is clearly visible: a little over a year ago the Pentium G2120 produced less than 50 frames per second here, and with the new drivers the G2140 has become one and a half times faster. This is still not enough to keep up even with the cheap AMD A6, but in games with simple graphics (whether simple by design or simplified by settings) the new Pentium can already trade blows with the A8. And, again, the only advantage of a weak discrete card is that it does not keep the CPU from giving its all - although, as we can see, with inexpensive video cards the effect can hardly be called significant.

F1 2010

Although the game will soon be four years old, it remains a tough nut for integrated graphics - but in a different way than Crysis: there the entire load fell on the GPU, while here processor performance also matters, and preferably with more than two computation threads. As a result, most junior solutions sit at 12.5 FPS courtesy of the engine itself, which tries not to drop any lower by further simplifying the picture. The HDG 4000 and above, like the integrated Radeon HD, work "honestly" but are still too slow. No wonder: as we already know, only the top A10s handle this mode more or less adequately, and it is still better to use a discrete card - preferably a Radeon HD 7730 GDDR5 or higher.

In the easy mode, the shortcomings of dual-threaded processors are visible even with weak graphics. Once again this is most noticeable with AMD processors, whereas with Intel the difference between Pentium and Core i3 is small (and the new Pentium can overtake the old i3). So something of A8 class should be considered the minimum here - or buy a discrete video card: a peculiarity of the EGO engines (used throughout the Formula One series) is that even lowering the graphics quality does not make one useless.

Far Cry 2

Far Cry 2 is even older, so in the high-quality mode only the Intel processors and AMD's A4/A6 fail to cope. In general, the qualitative gap between Intel HD Graphics and an APU or junior discrete card remains, as we can see, despite the very noticeable performance increase in the new generation of GPUs.

The easy mode, meanwhile, was too much only for Sandy Bridge; with more modern devices we are effectively testing the processors themselves, with an entirely predictable result.

Metro 2033

Essentially another stress test for integrated graphics - it will be a long time before anything more or less acceptable can be extracted from it. For assessing actual GPU performance, however, it is well suited. There is little new for us here, except perhaps the most pronounced difference between the two generations of Intel IGPs: Haswell really was a big step forward, allowing the company almost to catch up with the integrated Radeons. More precisely, the HDG 4000 could already compete with the A4, which was no achievement - too low a level for relatively expensive solutions; but rough parity with the A8 is quite another matter. In theory, that is - in practice, as we already know, even $100 discrete cards are not enough here.

Integrated graphics, in fact, learned to cope with the low-quality mode some time ago (and in this game it is not that low, it should be noted: the minimum supported resolution of 1024x768 was in widespread practical use until quite recently). Not all of it, though - the Llano-based A6 was the first to cross the threshold, and the move to Trinity was even a step backwards for that family (since the game can make full use of multi-core processors), but on the whole there are enough capable solutions, while anything slower still cannot manage. Note again that on the new Intel platform even a Pentium now "suffices", whereas most products for the previous one failed because of the weakness of the mass-market HDG 2500. In effect we see quantity turning into quality: what many Core i5s "could not" do a year ago, today a Pentium "can" - or any Core i3, not just selected models of the family. Which is also good.

Summary results

So what is the bottom line? If you recall that 100 points is a Radeon HD 6450 paired with a Celeron, then quite a lot. Indeed, the mainstream graphics for LGA1155 (the HDG 2500, its analogue in Celeron/Pentium, and the functionally even weaker IGP of Sandy Bridge) failed to reach even that level. The new Pentiums exceed it, i.e. the GPU built into them easily outperforms discrete products such as the aforementioned Radeon HD 6450 or GeForce GT 610/620. Granted, all of these can be called gaming solutions only out of politeness, but they exist and are still being sold (not to mention older video cards of comparable level that many budget-conscious users continue to run). The A4 for the FM1 platform is also left behind - again a basic level, and for a platform outdated two years ago at that, but a couple of years back few believed Intel would even be able to catch up with AMD in the foreseeable future: Sandy Bridge graphics in any version could not compare with desktop APUs of any modification.

At first glance the Core i3 has gained less - the HDG 4400 is only 20% faster than the HDG 4000, not half again as fast. The explanation is simple: in the budget segment the number of pipelines grew from 6 to 10, while one "floor up" it grew only from 16 to 20. Do not forget, however, that the 4000 was the top GPU of the previous generation, used in only a small fraction of desktop processors, whereas the 4400 is the entry level of the new desktop Core line: most models already carry the HDG 4600, which is slightly faster still. In fact here, too, we can talk about quantity turning into quality: just a year ago only the HDG 4000 (that rare option) could deliver game frame rates at the level of AMD's A4-line APUs, whereas now parity has formed with the faster A6. Naturally this is nothing like a victory - in price even the A8 sits at Pentium level, and the Core i3s are faster but also noticeably more expensive processors - yet the positions are gradually evening out. The release of Kaveri-based APUs may well restore the status quo, but their mass availability (and their spread into the lower segments of AMD's range) is still some way off, and the replacement of Trinity by Richland was, as we already wrote, only a cosmetic update - nothing like the move from Ivy Bridge to Haswell.
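Incidentally, the 20% gain for the Core i3 and the roughly one-and-a-half-times gain in the budget segment are close to what the pipeline counts alone would predict; a quick back-of-the-envelope estimate (pipeline numbers taken from the configuration table above):

```python
# Naive scaling estimate: performance ~ number of graphics pipelines,
# ignoring clocks, memory bandwidth and driver differences.
pentium_gain = 40 / 24    # new HD Graphics vs old:   ~1.67x at best
core_i3_gain = 80 / 64    # HDG 4400 vs HDG 4000:      1.25x at best
print(f"Pentium: up to {pentium_gain:.2f}x, Core i3: up to {core_i3_gain:.2f}x")
# Measured gains (~1.5x and ~1.2x respectively) land just below these ceilings,
# which is expected: memory bandwidth barely changed between the generations.
```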

Of course, the "muscle-building" of integrated graphics in both vendors' products keeps narrowing the potential niches for junior discrete solutions. The new GT 630 turned out only slightly faster than the old one (the memory subsystem is the bottleneck) and still trails the A8/A10. And its lead over the junior solutions from AMD and Intel has shrunk so far that buying a discrete video adapter of this level is no longer a justifiable measure at all: the performance gain does not compensate for the extra cost and the other drawbacks of the approach. About the only thing video cards in this segment can still claim is the modernization of old computers - and even there, in most cases, the more attractive option is either a faster discrete card or simply replacing the platform.

And we can gradually stop paying attention to the minimum-settings modes - all modern solutions handle them. In any case, it is only the desktop "surrogate" systems that still cannot boast comfortable results even with graphics simplified to the level of ten years ago.

OpenCL

Despite all the active talk about heterogeneous computing, its field of application remains very limited - especially the part relevant to integrated graphics: the use of discrete GPUs for some of the "heavy" computation in the HPC field began several years ago, but that has little to do with the mass market. And the main problem for the latter, it seems to us, is that OpenCL is not nearly as "open" as declared: in practice programmers are forced to account for the implementation specifics of all three vendors, i.e. to work at too low a level. WinZip was at one time a typical example of the technology's immaturity: amid the triumphant reports about the release of at least one general-purpose application with OpenCL support, not everyone noticed that it supported only AMD's implementation, not Intel's or NVIDIA's.
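The vendor-specific branching mentioned above does not hide anywhere deep: every OpenCL host program begins by enumerating platforms and devices, and this is exactly where an application such as WinZip decides which code path, if any, to take. A minimal sketch using the pyopencl bindings (any other OpenCL wrapper would look similar); the "AMD-only" check is a hypothetical illustration of the practice described in the text, not any real program's actual code:

```python
# Enumerate OpenCL platforms and devices; this is the point at which
# real applications often branch by vendor. Requires the pyopencl package
# and at least one installed OpenCL runtime.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        print(f"  {device.name}, compute units: {device.max_compute_units}")

    # Hypothetical illustration: an application may simply refuse to run
    # its OpenCL path on anything but one vendor's implementation.
    if "Advanced Micro Devices" in platform.vendor:
        print("  -> taking the AMD-optimized code path")
```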

Interestingly, these peculiarities still show up even in synthetic benchmarks, many of which simply execute different code branches on different solutions. That is the case with Basemark CL, which we started using some time ago in this line of tests. What this leads to in practice is clearly visible in our study of the programs themselves: the utility is plainly partial to AMD GPUs. Recall also that not so long ago Intel processors executed OpenCL code only on the main cores, without using the GPU at all, and it becomes clear why this particular program became AMD's favorite benchmark, recommended to every tester. Recently, however, they stopped recommending it. Let us try to understand why - bearing in mind, naturally, that Basemark CL must be used very cautiously for cross-platform comparison.

In the diagram we have collected the results of all processors tested in this program, and the picture is extremely interesting. First, as we can see, the HDG 2500 and the "numberless" relative of this GPU deliver performance only at the level of junior mobile solutions. The reason is clear: the code parallelizes well, so six pipelines are six pipelines, whether in a CULV Celeron or a desktop Core i3. A Pentium on Haswell is already much faster, though still not a serious OpenCL accelerator: it does not reach the A6 or processors with the HDG 4000 (again, mobile or desktop makes no difference). Still, it can provide some benefit from OpenCL - at least more than the buyer of any processor based on AMD's Kabini core will get. The HDG 4400 is a far more attractive option: a mere new-generation Core i3 turns out to equal the top Core i7 of the previous one! Against the competition this is not bad either - the level of some A8s; those are cheaper, true, but the price gap with a junior Core i3 is still much smaller than with a senior Core i7 :) And the HDG 4600 is already at A10 level. Note, too, that frugal buyers in general, not only those choosing AMD products, stand to gain from OpenCL adoption: the difference between the i3 and the i7 here is less than 10%. Only Kaveri's results spoil the triumphant picture - AMD has once again managed to outdo itself. But those APUs are still scarce, unlike the Core i3s available on every corner, which are also cheaper and faster on classic x86 code - and that matters a great deal given the current state of OpenCL adoption (a processor that is faster in a large number of programs and slower in a few looks more attractive than one that wins only in exotic, specially selected environments).

The GT 630 results need no comment: as noted more than once, NVIDIA does not like this benchmark (and in this case the OpenCL 1.1 code path is used, not 1.2). On the other hand, nobody is immune to the same situation occurring in real programs - and in that case, as we can see, junior discrete graphics can easily fall behind even inexpensive integrated graphics. One more nail in its coffin :)

Conclusion

If, when choosing a high-end processor (and assuming a discrete video card), no one could find any particular advantage of Haswell over Ivy Bridge, then in the budget segment with integrated graphics the situation is reversed: there is no point in buying the "old" processors. Perhaps only to upgrade a Sandy Bridge system while keeping the motherboard - though even then it is better simply to buy a video card, which is cheaper and more effective. A new system should be built exclusively on LGA1150 - if, of course, you are choosing among Intel solutions. As we have seen, the gap with AMD's APUs has shrunk greatly but has not disappeared, so if you want to save money and care first of all about graphics-core performance, the FM2/FM2+ platform is still a good choice: the A8-6600K is cheaper than any Core i3, and the A8-5600K can compete in price with the Pentium. Naturally, this saving is not free - the processor part is very different, which often matters (even in this segment), and if a discrete video card is bought later, the premium paid for the "good" integrated GPU is simply lost. In addition, the "appetites" of AMD's APUs are somewhat higher than those typical of Intel's dual-core processors. In short, they are not direct competitors, but, we repeat, if integrated-graphics performance comes first, AMD's designs still deserve the closer look: the new generation of Intel devices has narrowed the gap, but not to zero, even if we ignore the difference in price.

In the broader sense we are certainly pleased with the progress, especially where the basic performance level is concerned. One can, of course, once again scold Intel for a certain confusion - this is already the fourth graphics core bearing the faceless name "HD Graphics" - but what matters more is that its performance has again grown by the traditional one and a half times. That does not make HDG a gaming solution, but the very fact of the bar being raised is a good signal to programmers. Order has improved, too: up to and including Ivy Bridge, the "mainstream" level of Intel desktop graphics coincided with the "basic" one, the most widespread GPU being the HDG 2500. Now the Core i3 differs from the Pentium not only in Hyper-Threading support but also in more powerful graphics: at least the HDG 4400, and that video core is already better than any Ivy Bridge GPU. Perhaps not by one and a half times, but this level of graphics capability (and higher) now comes to every buyer - there is no need to hunt for special processor models. Which, again, lets us count on programmers making fuller use of it.

And, of course, such growth in the graphics capabilities of junior processors is another nail in the coffin of budget discrete video cards. Even in the $60 segment there is still a performance advantage, but it is already too small to justify buying a separate device instead of using the "free" IGP. In practical terms, only video cards from about $100 up still make sense - and only for gaming: in every other area integrated graphics are no worse, and, most importantly, any integrated graphics, not just a few select models as was the case two or three years ago.

When buying a laptop, one of the critical questions for any buyer is the choice of graphics core: integrated or discrete. If you play computer games, you will definitely need a laptop with a dedicated graphics subsystem; and if you want to play comfortably, running games at high graphics settings and high display resolutions such as Full HD (1080p), you will have to pay for a laptop with at least an entry-level discrete gaming video card such as an NVIDIA GeForce GTX 850M/950M - and, as a rule, such laptops cost upwards of 50,000 rubles.

What if you want to game on a laptop but do not have the money for a high-performance machine? There is a way out, but only if your 3D needs are limited to three-dimensional user interfaces and you are content with low graphics settings and low resolutions in games - in that case a laptop with a GPU integrated into the processor will serve. Laptops with integrated graphics are usually cheaper, and the performance of some integrated video cores has recently reached the level of discrete cards in the lower and even middle price ranges. For a long time the integrated graphics market was dominated entirely by Intel, and the 3D performance of its integrated graphics was beneath criticism; it was, however, originally intended for the corporate market, whose needs it fully satisfied, and only with time did integrated graphics come to be asked for more and more performance. Over time Intel closed the gap with AMD, whose hybrid APUs had for a while even pulled ahead, and with this year's release of new processors based on the Broadwell and Skylake architectures the 3D performance of both companies' integrated solutions has nearly leveled out.

So, let's look at what AMD and Intel are currently offering us in the integrated mobile graphics segment.

New generation of integrated graphics from Intel.

Let's start with Intel. An interesting feature first introduced with the Intel Sandy Bridge architecture was the integrated video core: even with a discrete graphics solution in your laptop, you could always fall back on the processor's additional capabilities, which made it easy to encode video, watch high-definition movies, view 3D content and run simple games. Today Skylake includes an integrated graphics core that is in many ways superior to the analogous solutions of previous processors. The ninth generation of the integrated graphics subsystem, Intel Gen9 Graphics, implemented as part of the new architecture and, like the whole Skylake chip, manufactured on the 14 nm process, has received substantial structural changes along with improved energy efficiency. Inheriting the basics of the previous Broadwell architecture, the new graphics spans a wide range of solutions: from the basic HD Graphics 510 (GT1), based on one module with 12 execution units, up to the most powerful subsystem, Iris Pro Graphics 580 (GT4e), based on three modules with 72 execution units and a built-in 128 MB eDRAM buffer, with a total peak performance of up to 1152 gigaflops (Gen9 GT4 is roughly one and a half times faster than Gen8 GT3). Gen9 graphics performance varies widely. The slowest are the integrated HD Graphics 510 (GT1) and HD Graphics 515/520 (GT2), which will ship as part of the Core M family; in the best case these built-in video cores will run only older games at low settings. Above them sits the HD Graphics 530 (GT2), which will be part of some Core i5 and Core i7 processors; it can handle many games, though only at display resolutions up to 720p (HD) and at low or, in some titles, medium settings. In graphics performance the HD Graphics 530 roughly corresponds to the discrete GeForce 920M. The next group comprises HD Graphics 540 and 550, which will most likely appear in ULV processors based on Skylake; they differ from the HD Graphics 530 in having twice as many execution units (48 versus 24), while their clock ranges and 64/128-bit memory interface are similar, so in performance they sit a class above the 530. Closing out Intel's line of integrated video cards is the high-performance Iris Pro Graphics 580 (GT4e), Intel's most powerful integrated graphics solution to date. The manufacturer promises that in 3D applications the Graphics 580 will be comparable to the desktop NVIDIA GeForce GTX 750; GT4e should deliver about 1.15 TFLOPS, roughly 50% more than GT3e (Broadwell). Timed for the arrival of Windows 10, the new Intel graphics offer full hardware support for DirectX 12 in games, as well as OpenCL 2.0 and OpenGL 4.4 for clearer, higher-quality images.
According to Intel, the new graphics provide performance gains of up to 40% in 3D games compared with the previous generation. The ninth generation also supports an expanded list of hardware-accelerated encode and decode formats (HEVC, AVC, SVC, VP8, MJPG), extended capabilities for processing and converting raw data straight from a 16-bit camera sensor at up to 4K 60p, and an enhanced Quick Sync engine with a fixed-function (FF) video mode that allows H.265/HEVC decoding without touching the compute cores.
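The 1152-gigaflops figure for GT4e follows directly from the execution-unit count: a Gen9 EU issues two 4-wide FMA operations per clock, i.e. 16 single-precision FLOPS. A quick check of that arithmetic (the 1 GHz clock is our assumption to get the round number; actual turbo clocks vary by model):

```python
# Peak FP32 throughput of a Gen9 GPU:
#   EUs x (2 FMA pipes x SIMD-4 x 2 ops per FMA) x clock in GHz
FLOPS_PER_EU_PER_CLOCK = 2 * 4 * 2            # = 16

def peak_gflops(eus, clock_ghz):
    return eus * FLOPS_PER_EU_PER_CLOCK * clock_ghz

print(peak_gflops(72, 1.0))    # GT4e (Iris Pro 580):    1152.0 GFLOPS
print(peak_gflops(24, 1.15))   # GT2  (HD Graphics 530):  441.6 GFLOPS
```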

Specifications

HD Graphics 5xx
Manufacturer          | Intel
Architecture          | Skylake (GT1/GT2/GT3e/GT4e)
Name                  | HD Graphics 510 | HD Graphics 515 | HD Graphics 520 | HD Graphics 530 | HD Graphics 540 | HD Graphics 550 | HD Graphics 580
Execution units       | 12              | 24              | 24              | 24              | 48              | 48              | 72
Core clock speed, MHz | 300-950         | 300-1000        | 300-1050        | 300-1150        | 300-1050        | 300-1100        | no data
Memory bus width      | 64/128 bit
eDRAM                 | none            | 128 MB
DirectX               | DirectX 12
Process technology    | 14 nm

New generation of integrated graphics from AMD.

AMD Carrizo is the sixth generation of AMD's mobile APUs and the world's first performance-class APU located entirely on a single chip: previously, if the graphics chip or south bridge of chips in this class shared a substrate with the processor, it did so as a separate die, whereas here the north bridge, Fusion Controller Hub (south bridge), graphics and processor cores all fit on one chip manufactured on GlobalFoundries' 28 nm process. Carrizo uses graphics that AMD itself calls third-generation GCN; this generation of the architecture, with some changes, was used in the Tonga GPU (Radeon R9 285). The integrated graphics core also received 512 KB of its own second-level cache. Among other things, AMD announces support for DirectX 12 (feature level 12), improved tessellation performance, lossless color compression, an updated ISA instruction set, coherency between CPU and GPU caches, and a high-quality scaler. In Carrizo the Radeon R7 graphics controller has 8 compute clusters, whereas the mobile versions of Kaveri had only six, so the Carrizo graphics core has 512 stream processors and is capable of a peak performance of up to 819 GFLOPS. Carrizo has three built-in display controllers and supports image output at up to 4K resolution. The sixth-generation A-Series is also the first notebook solution to support HEVC hardware decoding, the HSA 1.0 heterogeneous system architecture, and ARM TrustZone technology. The manufacturer particularly emphasized the new processors' support for the functionality of the Windows 10 operating system, including the DirectX 12 graphics optimizations; the hardware H.265/HEVC decoder in the new Carrizo processors not only allows smoother playback of high-definition video but also provides significantly longer battery life. AMD's 6th-generation notebook processors feature a discrete-class GPU with the Graphics Core Next (GCN) architecture, promising up to twice the performance of the competition; thanks to this, the user can play the most popular online games in HD resolution on a laptop, including DoTA 2, League of Legends and Counter-Strike: Global Offensive, while in other games the increase in fps compared with Kaveri is expected to be 30 to 40%. Note also that AMD Dual Graphics technology allows the 6th-generation mobile processors to work in tandem with discrete AMD Radeon R7 Mobile cards, raising frame rates by up to 42%, and the proprietary AMD FreeSync technology ensures smooth gameplay. The processor supports multi-threaded APIs, including DirectX 12, Vulkan and Mantle, which enable advanced gaming technologies aimed at improving performance and image quality. The AMD Radeon Rx integrated line-up is headed by the AMD Radeon R7, the most powerful adapter of the family. The AMD Radeon R7 (Carrizo) is the integrated video core of the Carrizo APU, at the time of the announcement (mid-2015) used in the AMD FX-8800P SoC, with 512 GCN shaders and a frequency of 800 MHz; depending on the TDP configuration (12-35 W) and the RAM used (up to DDR3-2133 in dual-channel mode), its performance can vary significantly. Next comes the AMD Radeon R6 (Carrizo), a junior integrated video core announced in mid-2015.
It is intended for Carrizo APUs such as the AMD A10-8700P or A8-8600P and has 384 GCN shaders at a frequency of 720 MHz. It is offered in two configurations differing in TDP (from 12 to 35 W) and the memory used (up to DDR3-2133 in dual-channel mode). Next down the line is the Radeon R5 (Carrizo), built into processors such as the AMD A6-8500P. Its performance is barely enough even for relatively undemanding games from two years ago (Tomb Raider, Dead Space 3, BioShock Infinite) at minimum settings; in games like Crysis 3 or Battlefield 4 this accelerator produces at most 10-20 frames per second. The Radeon R5 (Carrizo) has 256 shader processors (4 GCN modules) running at 800 MHz. As for the integrated Radeon R4/R3/R2 graphics, their capabilities are sufficient, at best, for games four to five years old.
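The 819 GFLOPS quoted for the Carrizo Radeon R7 can be verified the same way as the Intel figures above: a GCN stream processor performs one FMA (two FLOPS) per clock, so peak throughput is simply shaders x 2 x clock. A quick sanity check using the shader counts and clocks quoted in this section (the R6 and R5 values are therefore only as reliable as those quoted numbers):

```python
# Peak FP32 throughput of a GCN-based integrated Radeon:
#   stream processors x 2 FLOPS (one FMA) per clock x clock in GHz
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(512, 0.800))   # Radeon R7 (Carrizo, FX-8800P): 819.2 GFLOPS
print(peak_gflops(384, 0.720))   # Radeon R6 (Carrizo):           552.96 GFLOPS
print(peak_gflops(256, 0.800))   # Radeon R5 (Carrizo):           409.6 GFLOPS
```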

Specifications

AMD Radeon Rx
Manufacturer             | AMD
Architecture             | Carrizo
Name                     | AMD Radeon R7 | AMD Radeon R6 | AMD Radeon R5
Shader processors        | 512           | 384           | 256           | 128 (Carrizo-L)
Core clock speed (boost) | 800 MHz       | 850 MHz
Memory bus width         | 64/128 bit    | 64 bit
Memory type              | no dedicated video memory (shared system RAM)
DirectX                  | DirectX 12
Process technology       | 28 nm

Synthetic tests

First, let's look at integrated graphics performance in the synthetic 3DMark (2013) Fire Strike test (Standard Score, 1920x1080).

[Chart: 3DMark Fire Strike scores - Intel Iris Pro Graphics 6200 (Core i7-5950HQ), Intel Iris Pro Graphics 5100 (Core i5-4158U), Kaveri AMD Radeon R5 (AMD A8-7200P), Kaveri AMD Radeon R4 (AMD A6 Pro-7050B)]

In the synthetic 3DMark Fire Strike test, as one would expect, AMD's integrated graphics lag slightly behind Intel's solutions, both among the high-performance options and among the budget video cores. Synthetic tests are one thing, though; it is more interesting to see how integrated graphics behave in real games. In our view there is little point in focusing on the integrated graphics of processors such as the Core i7-4750HQ and the like, which are aimed at enthusiasts and gamers: in 99% of cases such a laptop will also have a more powerful discrete 3D card installed. Note also that "heavy" graphics settings expose a number of games where the potential of even graphics like Iris Pro will clearly fall short; acceptable performance at the coveted Full HD resolution is achieved only by dropping the quality to minimum or, at best, medium.

Call of Duty: Advanced Warfare was developed over three years with the capabilities of the new generation of gaming systems in mind. An updated approach to game creation lets you employ new tactics: advanced military technology and a unique exoskeleton will help you survive where an ordinary soldier would not last five minutes. There is also an engaging plot and new characters, one of whom is played by Oscar winner Kevin Spacey. The engine behind Call of Duty: Advanced Warfare is Sledgehammer Games' own development; there is practically no public information about its structure, but most likely it is a further evolution of the studio's earlier in-house technology.

[Charts: Call of Duty: Advanced Warfare, 720p (HD) at Low and Normal settings - NVIDIA GeForce GTX 850M (Core i7-4720HQ), Intel Iris Pro Graphics 5200 (Core i7-4750HQ), Intel Iris Graphics 6100 (Core i5-5257U), Intel HD Graphics 530 (Core i7-6700HQ), Intel HD Graphics 5600 (Core i7-5700HQ), Intel HD Graphics 5500 (Core i5-5300U), Intel HD Graphics 4600 (Core i5-4210M), Intel HD Graphics 4400 (Core i7-4500U), AMD Radeon R9 M370X (Core i7-4870HQ), Carrizo AMD Radeon R7 (AMD FX-8800P), Kaveri AMD Radeon R7 (AMD FX-7600P), Carrizo AMD Radeon R6 (AMD A10-8700P), Kaveri AMD Radeon R6 (AMD A10-7400P), Carrizo AMD Radeon R5 (AMD A6-8500P)]

Metro: Last Light (Russian title: Metro: Ray of Hope) is a first-person shooter and the sequel to Metro 2033. The sequel was developed around three guiding principles: preserve the horror atmosphere of the first part, diversify the arsenal of weapons, and improve on the technology of Metro 2033. The developers at 4A Games also took some player feedback into account, promising to fix a number of errors and refine the artificial intelligence and stealth elements. The authors decided not to base the plot on the events of Dmitry Glukhovsky's second book; instead the game is a direct continuation of the first part with a rich linear story. The protagonist is once again Artyom, who this time must prevent a civil war among the inhabitants of the Moscow metro. Metro: Last Light runs on a modified version of the 4A Engine used in Metro 2033. Improvements include more advanced AI and graphics-engine optimizations; thanks to PhysX the engine gained features such as destructible environments, cloth simulation, waves on water, and other elements driven entirely by the environment. Metro: Last Light remains one of the most technologically advanced products of its time, even though the game was released not only for personal computers but also for the then-current generation of game consoles.

[Charts: Metro: Last Light, 720p (HD) at Low (DX10) and Medium (DX10, 4xAF) settings - NVIDIA GeForce GTX 850M (Core i7-4720HQ), Intel Iris Pro Graphics 5200 (Core i7-4750HQ), Intel Iris Graphics 6100 (Core i5-5257U), Intel HD Graphics 530 (Core i7-6700HQ), Intel HD Graphics 5600 (Core i7-5700HQ), Intel HD Graphics 5500 (Core i5-5300U), Intel HD Graphics 4600 (Core i5-4210M), Intel HD Graphics 4400 (Core i7-4500U), AMD Radeon R9 M370X (Core i7-4870HQ), Carrizo AMD Radeon R7 (AMD FX-8800P), Kaveri AMD Radeon R7 (AMD FX-7600P), Carrizo AMD Radeon R6 (AMD A10-8700P), Kaveri AMD Radeon R6 (AMD A10-7400P)]

How we tested

As part of our testing, we set ourselves the goal of comparing the performance of the new Intel HD Graphics 4000 and Intel HD Graphics 2500 graphics accelerators built into Ivy Bridge processors with the speed of previous and competing integrated GPUs and graphics cards in the lower price range. This comparison was carried out using desktop systems as an example, although the results obtained can easily be extended to mobile systems.

There are currently two lines of desktop processors with integrated graphics that make sense to compare with Ivy Bridge: the AMD Vision A8/A6 series and Intel's own Sandy Bridge. It was with them that we compared a system based on third-generation Core i5 processors equipped with the Intel HD Graphics 2500 and Intel HD Graphics 4000 graphics cores. In addition, inexpensive discrete AMD video cards of the six-thousand series, the Radeon HD 6450 and Radeon HD 6570, also took part in the tests.

Unfortunately, when comparing built-in video cores, we cannot ensure complete equality of other system characteristics. Different cores belong to different processors, differing not only in clock speed, but also in microarchitecture. Therefore, we had to limit ourselves to the selection of similar, but not identical configurations. In the case of LGA1155 platforms, we chose exclusively Core i5 series processors, and for comparison with them we used older AMD Vision processors of the Llano family. Discrete video cards were tested as part of a system with an Ivy Bridge processor.

As a result, the following hardware and software components were used in the tests:

Processors:

  • Intel Core i5-3570K (Ivy Bridge, 4 cores, 3.4-3.8 GHz, 6 MB L3, HD Graphics 4000);
  • Intel Core i5-3550 (Ivy Bridge, 4 cores, 3.3-3.7 GHz, 6 MB L3, HD Graphics 2500);
  • Intel Core i5-2500K (Sandy Bridge, 4 cores, 3.3-3.7 GHz, 6 MB L3, HD Graphics 3000);
  • Intel Core i5-2400 (Sandy Bridge, 4 cores, 3.1-3.4 GHz, 6 MB L3, HD Graphics 2000);
  • AMD A8-3870K (Llano, 4 cores, 3.0 GHz, 4 MB L2, Radeon HD 6550D);
  • AMD A6-3650 (Llano, 4 cores, 2.6 GHz, 4 MB L2, Radeon HD 6530D).

Motherboards:

  • ASUS P8Z77-V Deluxe (LGA1155, Intel Z77 Express);
  • Gigabyte GA-A75-UD4H (Socket FM1, AMD A75).

Video cards:

  • AMD Radeon HD 6570 1 GB GDDR5 128-bit;
  • AMD Radeon HD 6450 512 MB GDDR5 64-bit.

Memory: 2x4 GB, DDR3-1866 SDRAM, 9-11-9-27 (Kingston KHX1866C9D3K2/8GX).

Disk subsystem: Crucial m4 256 GB (CT256M4SSD2).

Power supply: Tagan TG880-U33II (880 W).

Operating system: Microsoft Windows 7 SP1 Ultimate x64.

Drivers:

  • AMD Catalyst 12.4 Driver;
  • AMD Chipset Driver 12.4;
  • Intel Chipset Driver 9.3.0.1019;
  • Intel Graphics Media Accelerator Driver 15.28.0.64.2729;
  • Intel Rapid Storage Technology 10.8.0.1003.

The main emphasis in this testing was quite naturally placed on the gaming applications of the integrated processor graphics. Therefore, the bulk of the benchmarks we used were games or specialized gaming tests. Moreover, the power of integrated video accelerators has by now grown so much that it allowed us to conduct the performance research not only at the low resolution of 1366x768, but also at the Full HD resolution of 1920x1080, which has become the de facto standard for desktop systems. True, in the latter case we limited ourselves to low quality settings.
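For reference, here is a quick back-of-the-envelope comparison of how much more work Full HD demands of a GPU than the 1366x768 mode used for the low-resolution runs; the figures below are simple pixel arithmetic, not measured data.

```python
# Pixel counts of the two test resolutions and their ratio.
low_res = 1366 * 768        # 1 049 088 pixels
full_hd = 1920 * 1080       # 2 073 600 pixels

print(f"1366x768 : {low_res:,} pixels")
print(f"1920x1080: {full_hd:,} pixels")
print(f"Full HD has {full_hd / low_res:.2f}x as many pixels to shade per frame")
# -> roughly 1.98x, which is why low quality settings were used at 1920x1080
```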

⇡ 3D performance

In anticipation of the performance results, it is necessary to say a few words about the compatibility of the HD Graphics 4000/2500 accelerators with various games. Previously, it was quite typical for some games to work incorrectly with Intel graphics, or not to work at all. However, progress is obvious: slowly but surely the situation is changing for the better. With each new version of the accelerator and driver, the list of fully compatible gaming applications expands, and in the case of HD Graphics 4000/2500 it is already quite difficult to run into any critical problems. However, if you are still skeptical about the capabilities of Intel graphics cores, the Intel website maintains an extensive list of new and popular games tested for compatibility with HD Graphics, which are guaranteed to run without problems and at an acceptable level of performance.

⇡ 3DMark Vantage

3DMark family test results are a very popular metric for assessing the weighted average gaming performance of video cards. That's why we turned to 3DMark first. The choice of the Vantage version is due to the fact that it uses DirectX version ten, which is supported by all video accelerators participating in the tests.

The first diagrams very clearly show the huge leap in performance that the graphics cores of the HD Graphics family have made. HD Graphics 4000 demonstrates a more than twofold advantage over HD Graphics 3000. The younger version of the new Intel graphics does not lose face either. HD Graphics 2500 is almost twice as fast as HD Graphics 2000, even though both of these accelerators have the same number of execution units.

⇡ 3DMark 11

The more recent version of 3DMark is focused on measuring DirectX 11 performance. Therefore, the integrated graphics accelerators of second-generation Core processors are excluded from this test.

The graphics core of Ivy Bridge processors was the first of Intel's accelerators to pass the test in 3DMark 11, and we did not notice any complaints about image quality when running this DirectX 11 test. The performance of HD Graphics 4000 is also quite good. It outperforms the entry-level discrete Radeon HD 6450 video card and the Radeon HD 6530D accelerator built into the AMD A6-3650 processor, second only to the older version of the integrated core of AMD Llano processors and to the Radeon HD 6570 video card, which costs about $60-70. The younger modification of the modern Intel graphics, HD Graphics 2500, is in last place. Obviously, the ruthless reduction in the number of execution units that befell it has a significant impact on gaming performance.

⇡ Batman Arkham City

The group of real game tests opens with the relatively new game Batman Arkham City, built on the Unreal Engine 3.

As can be seen from the results, the performance of integrated Intel graphics has increased so much that it allows you to play fairly modern games at Full HD resolution. And although there is no talk of high image quality or a completely comfortable frame rate, this is still a strong leap forward, perfectly illustrated by the 55 percent advantage of HD Graphics 4000 over HD Graphics 3000. Overall, HD Graphics 4000 overtakes the Radeon HD 6530D core integrated into the AMD A6-3650 and the discrete Radeon HD 6450 graphics card, falling only slightly behind the AMD A8-3870K with its Radeon HD 6550D GPU. True, the younger version of the integrated Ivy Bridge core, HD Graphics 2500, cannot boast of such significant achievements. Although its result exceeds that of HD Graphics 2000 by 40-45 percent, the graphics of quad-core Llano processors, like $40 video cards, are noticeably faster.

⇡ Battlefield 3

The most popular first-person shooter does not run fast enough on the graphics built into Ivy Bridge processors. In addition, during testing we encountered some problems with the display of the game menu. However, the overall performance assessment of the new generation of HD Graphics solutions remains unchanged. The four-thousandth accelerator is somewhat faster than the graphics of the AMD A6-3650 and the Radeon HD 6450 video card, but it is inferior to the older modification of the Llano video core and loses badly to the discrete Radeon HD 6570.

⇡ Civilization V

The popular turn-based strategy favors graphics solutions with AMD architecture; they take first place here. The results of Intel graphics are not very good: even HD Graphics 4000 lags significantly behind both the integrated Radeon HD 6530D and the discrete Radeon HD 6450.

⇡ Crysis 2

Crysis 2 can easily be considered one of the most demanding computer games for video accelerators. And this, as we can see, affects the balance of results. Even taking into account the fact that we did not enable DirectX 11 mode during testing, the Intel HD Graphics 4000 in the Core i5-3570K processor performed poorly and lost to both the graphics of the A6-3650 processor and the discrete Radeon HD 6450 video card. In fairness, it should be noted that the advantage of Ivy Bridge over Sandy Bridge remains more than significant, and it is observed both with the older versions of the accelerators and with the younger ones. In other words, the strength of the new graphics core is only partly based on the increase in the number of execution units. Even without this, HD Graphics 2500 is about 30 percent faster than HD Graphics 2000.

⇡ Dirt 3

In Dirt 3 the situation is typical. HD Graphics 4000 is about 80 percent faster than the older version of the graphics core from Sandy Bridge processors, and HD Graphics 2500 is 40 percent faster than the HD Graphics 2000 video accelerator. The result of this progress is that, in terms of speed, a system based on the Core i5-3570K without an external video card sits in the middle between integrated systems with the AMD A8-3870K and AMD A6-3650 processors. Discrete video cards can compete with the new and fast version of HD Graphics, but only starting with the Radeon HD 6570: slower budget solutions are inferior to Intel's four-thousandth accelerator.

⇡ Far Cry 2

Look: in a popular four-year-old shooter, the performance of modern integrated graphics developed by Intel is already quite sufficient for a comfortable game. True, so far with low image quality. Nevertheless, the diagram clearly shows how rapidly the speed of integrated Intel solutions grows with the change in processor generations. If we assume that with the advent of Haswell processors this pace will be maintained, then we can expect that next year discrete video cards of the Radeon HD 6570 level will become unnecessary.

⇡ Mafia II

In Mafia II, the graphics built into AMD processors look stronger than even HD Graphics 4000. This applies both to the Radeon HD 6550D and to the slower version of the integrated accelerator from Vision-class APUs, the Radeon HD 6530D. So once again we are forced to state that AMD Llano has a more advanced video core than Ivy Bridge. And the upcoming processors of the Vision family based on the Trinity design will, of course, be able to push HD Graphics even further away from the leading positions. Nevertheless, it is impossible to deny the improvement of Intel graphics, which is proceeding by leaps and bounds. Even the younger version of the accelerator built into Ivy Bridge, HD Graphics 2500, looks very impressive compared to its predecessors. With only six execution units, it is almost as fast as HD Graphics 3000 from Sandy Bridge, which has twelve.

⇡ War Thunder: World of Planes

War Thunder is a new multiplayer combat aviation simulator that is expected to be released in the near future. But even in this newest game, the integrated graphics cores offer quite acceptable performance if you do not turn up the quality settings. Of course, mid-range discrete video cards will make the gaming experience more enjoyable, but modern Intel graphics cannot be called unsuitable for new games. This is especially true for the four-thousandth version of HD Graphics, which once again confidently outperformed the budget but still relevant discrete Radeon HD 6450. The younger Ivy Bridge graphics look much worse: its performance is about half as high, and as a result it is significantly inferior in speed not only to discrete graphics accelerators, but also to the video accelerators built into quad-core Socket FM1 processors from AMD.

⇡ Cinebench R11.5

All of the games we tested were applications that used the DirectX programming interface. However, we also wanted to see how the new Intel accelerators would handle work in OpenGL. Therefore, to the purely gaming tests, we added a small study of performance when working in the professional graphics package Cinema 4D.

As the results show, no fundamental differences in the relative performance of HD Graphics are observed in OpenGL applications. True, HD Graphics 4000 still lags behind any variants of integrated and discrete AMD accelerators, which, however, is quite natural and is explained by better optimization of their driver.

⇡ Video performance

There are two concepts involved in working with video in the case of HD Graphics graphics cores. On the one hand, this is the playback (decoding) of high-resolution video content, and on the other, its transcoding (that is, decoding followed by encoding) using Quick Sync technology.

As for decoding, the characteristics of the new generation of graphics cores are no different from what came before. HD Graphics 4000/2500 supports full hardware video decoding in AVC/H.264, VC-1 and MPEG-2 formats via the DXVA (DirectX Video Acceleration) interface. This means that when playing video using DXVA-compatible software players, the load on the processor's computing resources and its power consumption remain minimal, and the work of decoding the content is performed by a specialized unit that is part of the graphics core.

However, exactly the same thing was promised in Sandy Bridge processors, yet in practice we encountered unpleasant artifacts in a number of cases (with certain players and certain formats). It is clear that this was not due to any hardware flaws in the decoder built into the graphics core, but rather to software flaws, though that does not make it any easier for the end user. By now it seems that all these teething problems are gone, and modern versions of players handle video playback on systems with the new generation of HD Graphics without any complaints about image quality. At least on our test set of videos of various formats, we could not spot any image defects either in the freely distributed Media Player Classic Home Cinema 1.6.2.4902 and VLC media player 2.0.1, or in the commercial CyberLink PowerDVD 12 build 1618.

When playing video content, the processor load is also expectedly low, because the main work falls not on the computing cores, but on the video engine located in the depths of the graphics core. For example, playing Full HD video with subtitles turned on loads the Core i5-3550 with the HD Graphics 2500 accelerator, on which we tested it, by no more than 10%. Moreover, the processor remains in an energy-saving state, that is, it operates at a frequency reduced to 1.6 GHz.

It must be said that the performance of the hardware decoder is more than enough for simultaneous playback of several Full HD video streams at once, as well as for playback of "heavy" 1080p videos encoded with a bitrate of about 100 Mbit/s. However, it is still possible to bring the decoder to its knees. For example, when playing an H.264 video encoded at a resolution of 3840x2160 with a bitrate of about 275 Mbps, we observed frame drops and stuttering, despite the fact that Intel promises hardware video decoding support at such high resolutions. However, this QFHD resolution is used very, very rarely at the moment.
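To put those bitrates into perspective, here is a rough calculation of how much compressed data the decoder has to process per frame; the frame rate is an assumed value chosen for illustration, not taken from the test clips.

```python
# Approximate compressed data per frame for the two "heavy" streams mentioned above.
def kb_per_frame(bitrate_mbps: float, fps: float) -> float:
    """Average compressed frame size in kilobytes for a given bitrate and frame rate."""
    return bitrate_mbps * 1_000_000 / fps / 8 / 1024

# Assumed frame rate: 24 fps is typical for film content.
print(f"1080p @ 100 Mbit/s, 24 fps: ~{kb_per_frame(100, 24):.0f} KB per frame")
print(f"2160p @ 275 Mbit/s, 24 fps: ~{kb_per_frame(275, 24):.0f} KB per frame")
```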

We also checked the operation of the second version of Quick Sync technology implemented in Ivy Bridge processors. Since Intel promises increased transcoding speeds with the new graphics cores, our primary focus was on performance testing. In our hands-on testing, we measured the time taken to transcode one 40-minute episode of a popular TV series, encoded in 1080p H.264 at 10 Mbps, for viewing on an Apple iPad 2 (H.264, 1280x720, 4 Mbps). For the tests, we used two utilities that support Quick Sync technology: ArcSoft Media Converter 7.5.15.108 and CyberLink MediaEspresso 6.5.2830.
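For a sense of the amount of data involved in this transcoding task, a quick estimate of the source and target stream sizes follows; this is simple bitrate arithmetic, and actual files will differ somewhat because audio and container overhead are ignored.

```python
# Rough video-stream sizes for the transcoding test case described above.
def size_gb(bitrate_mbps: float, minutes: float) -> float:
    """Video stream size in gigabytes, ignoring audio and container overhead."""
    return bitrate_mbps * 1_000_000 * minutes * 60 / 8 / 1e9

source_gb = size_gb(10, 40)   # 1080p H.264 at 10 Mbps, 40 minutes
target_gb = size_gb(4, 40)    # 1280x720 H.264 at 4 Mbps for the iPad 2

print(f"Source stream: ~{source_gb:.1f} GB")   # ~3.0 GB
print(f"Target stream: ~{target_gb:.1f} GB")   # ~1.2 GB
```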

The increase in transcoding speed is impossible not to notice. The Ivy Bridge processor equipped with the HD Graphics 4000 graphics core copes with the test task almost 75 percent faster than the previous-generation processor with the HD Graphics 3000 core. However, this stunning increase in performance seems to have occurred only with the older version of the Intel graphics core: when comparing the transcoding speed of HD Graphics 2500 and HD Graphics 2000, no such striking gap is observed. Quick Sync in the younger version of Ivy Bridge graphics works significantly slower than in the older one, and as a result processors with HD Graphics 2500 and HD Graphics 2000 differ by only about 10 percent when transcoding video. However, there is no need to grieve over this. Even the slowest version of Quick Sync is so fast that it leaves far behind not only software transcoding, but also all the Radeon HD options that accelerate video encoding with their programmable shaders.

Separately, I would like to touch upon the issue of video transcoding quality. Previously, there was an opinion that Quick Sync technology gives noticeably worse results than careful software transcoding. Intel did not deny this, emphasizing that Quick Sync is a tool for quickly obtaining results, and not at all for professional mastering. However, in the new version of the technology, according to the developers, the quality has been improved thanks to changes in the media sampler. Has it been possible to reach the quality level of software transcoding? Let's look at the screenshots showing the result of transcoding the original Full HD video for viewing on the Apple iPad 2.

Software transcoding, x264 codec:

Transcoding using Quick Sync technology, HD Graphics 3000:

Transcoding using Quick Sync 2.0 technology, HD Graphics 4000:

To be honest, no fundamental qualitative improvements are visible. Moreover, it seems that the first version of Quick Sync gives even better results - the image is less blurry and fine details are visible more clearly. On the other hand, the excessive clarity of the picture on HD Graphics 3000 adds noise, which is also an undesirable effect. One way or another, to achieve the ideal, we are again forced to advise turning to software transcoding, which can offer higher-quality conversion of video content, at least due to more flexible settings. However, if you plan to play the video on any mobile device with a small screen, using Quick Sync of both the first and second versions is quite reasonable.

⇡ Conclusions

The pace taken by Intel in improving its own integrated graphics cores is impressive. It would seem that just recently we admired the fact that Sandy Bridge graphics suddenly became capable of competing with entry-level video cards, yet in the new generation of the Ivy Bridge processor design its performance and functionality have made another qualitative leap. This progress looks especially striking given that the manufacturer presents the Ivy Bridge microarchitecture not as a fundamentally new development, but as a transfer of the old design to a new process technology accompanied by minor modifications. Nevertheless, with the release of Ivy Bridge, the new version of the integrated HD Graphics cores received not only higher performance, but also DirectX 11 support, improved Quick Sync technology and the ability to perform general-purpose calculations.

However, there are in fact two variants of the new graphics core, and they differ significantly from each other. The older modification, HD Graphics 4000, is exactly what makes us so excited. Its 3D performance has increased by an average of about 70 percent compared to HD Graphics 3000, which means that the speed of HD Graphics 4000 sits somewhere between the performance of the modern discrete video accelerators Radeon HD 6450 and Radeon HD 6570. Of course, this is not a record for integrated graphics: the video accelerators built into the older processors of the AMD Llano family still work faster, but the Radeon HD 6530D from the AMD A6 family has already been beaten. And if we add to this Quick Sync technology, which now works 75 percent faster than before, it turns out that the HD Graphics 4000 accelerator has no direct analogues and may well become a desirable option both for mobile computers and for non-gaming desktops.

The second modification of Intel's new graphics core, HD Graphics 2500, is noticeably worse. Although it also gained support for DirectX 11, this is actually more of a formal improvement. Its performance is almost always lower than the speed of HD Graphics 3000, and there is no talk of any competition with discrete accelerators. Strictly speaking, HD Graphics 2500 looks like a solution in which full-fledged 3D functionality is left just for show, but in fact no one is seriously considering it. That is, HD Graphics 2500 is a good option for media players and HTPCs, since no video encoding and decoding functions are cut off in it, but it is not an entry-level 3D accelerator in the modern sense of the term. Although, of course, many games of previous generations can run quite well on HD Graphics 2500.

Judging by how Intel has allocated the HD Graphics 4000/2500 graphics cores among the processors in its lineup, the company's own opinion of them is very close to ours. The older, four-thousandth version is aimed mainly at laptops, where the use of discrete graphics deals a serious blow to mobility, and the need for integrated yet productive solutions is very high. In desktop processors, HD Graphics 4000 can only be obtained in rare special offers or in expensive CPUs, in which it would somehow be "not comme il faut" to place stripped-down versions of anything. Therefore, most Ivy Bridge processors for desktop systems are equipped with the HD Graphics 2500 graphics core, which so far has not exerted serious pressure on the discrete video card market from below.

However, Intel is making it clear that the development of integrated graphics solutions is, as for its competitor, one of the company's most important priorities. And if today processors with integrated graphics can have a significant impact only on the mobile market, then in the near future integrated graphics cores may well take the place of discrete desktop video accelerators. Time will tell how it actually turns out.

Introduction

A few years ago, the phrase "integrated Intel graphics" referred to a graphics solution so poor in speed and quality that nobody wanted to use it voluntarily. The first Intel chipset with an integrated video core, the Intel 810, had extremely low performance, and not only in 3D modes, but even during everyday 2D work in the operating system. A lot of time has passed since then, but before the release of Sandy Bridge generation processors, Intel developers were, in fact, only improving the 2D part of their integrated graphics. Three-dimensional capabilities remained at a frankly rudimentary level for a long time.

Sandy Bridge became a revolutionary processor in many respects, including the fact that it was with it that Intel began actively developing the 3D part of its graphics cores. And since 2011, with each new generation of processors, the performance of the integrated 3D graphics has grown at a very noticeable pace. It is worth recalling that another significant event for integrated graphics cores also happened in 2011: the release of the Llano hybrid processors, with which AMD staked out its place as the leader in integrated graphics. However, even though AMD does not sit idly by and continues to actively develop its video cores, increasing their power and introducing ever newer graphics architectures, Intel has been able to close the gap on its competitor. Moreover, by now AMD can no longer be considered the outright leader in the performance of graphics cores built into processors, although in the segment of inexpensive mass-market solutions its position remains very strong.

However, not so long ago Intel representatives allowed themselves a rather bold statement: the modern graphics cores used in Broadwell and Skylake processors and belonging to the Iris and Iris Pro classes offer performance quite sufficient for mainstream gaming systems. Of course, what is meant here is, first of all, the ability of Intel integrated graphics to run casual and graphically uncomplicated online games properly. Nevertheless, the path that Intel processor video cores have travelled is truly impressive. Over the past five years, their performance has increased at least 30-fold. This allows Intel to claim that its processors with flagship variants of integrated graphics accelerators deliver better performance than roughly 80 percent of the discrete graphics cards found in current consumer computers.

However, such words from Intel representatives most likely embellish reality somewhat. For example, if you look at the statistics of video cards used by gamers on the Steam service, it turns out that the share of mid-range and high-end video cards from AMD and NVIDIA, which are almost certainly faster than the most modern version of Intel Iris Pro, is at least 31 percent. Still, Intel is probably not far from the truth, because the Steam service does not take into account the huge army of players who prefer Farm Frenzy to AAA shooters. Be that as it may, modern Intel graphics cores are indeed capable of offering very impressive theoretical performance. In the table below we show the theoretical power of common graphics solutions in comparison with the graphics of Skylake processors in the older GT4 and GT3 versions. From these data it follows that the older version of the most modern graphics core can compete with the Radeon R7 250X and GeForce GTX 750 in raw power, which looks truly impressive.



However, there is a good reason why such an assessment of the power of Intel integrated graphics can be questioned. The fact is that Intel does not use its best graphics cores in processors oriented for use in desktop computers. The only exception in this regard was made in Broadwell, and desktop Skylake, at best, is equipped only with GT2-level graphics, which is far from Iris and Iris Pro and belongs to the HD Graphics class. Older versions of integrated graphics only fit into mobile processors with a thermal package of 15-28 W. And this leads to the fact that often older built-in video accelerators in reality are forced to operate at lower clock speeds, not reaching the peak performance that they are capable of in theory.

But one thing is certain. Regardless of what share of current graphics cards is capable of outpacing Intel video cores (be it 50, 70 or 80 percent), the company has covered a very long distance in recent years. And this has had a significant impact on the market as a whole. Users have, in effect, had to say goodbye to entry-level discrete video cards: the need for them has disappeared almost completely. In addition, in the very near future Intel will obviously be ready to strike at the positions of AMD's hybrid processors. Those Intel processors that are equipped with eDRAM memory are already faster in 3D modes than the older Kaveri and Carrizo models. And in the future, with the release of Kaby Lake generation processors, Intel plans to significantly expand the range of such offerings.



However, let's not look beyond the horizon, but try to analyze what today's Intel integrated graphics can offer for desktop systems. Has its power really become enough to make it possible to do without a discrete video accelerator? In this review, we tested a pair of inexpensive LGA 1151 Core i3 processors of the Skylake generation and compared the speed of the HD Graphics 530 video core they contain with the performance of alternative solutions.

Skylake graphics architecture: the details

The role of graphics cores built into processors is increasing every year. And this is due not so much to the growth of their 3D performance as to the fact that built-in GPUs are taking on more and more new functions, such as parallel computing or encoding and decoding multimedia content. The Skylake graphics core is no exception. Intel classifies it as the ninth generation (counting from the Intel 740 discrete accelerator and the Intel 810/815 chipsets), which means that it contains a lot of surprises. However, it is worth starting with the fact that the GPU implemented in Skylake, like its predecessors, retains the traditional modular design. Thus, we are again dealing with a whole family of solutions of different classes: from the existing building blocks of the new generation, Intel can assemble GPUs with radically different performance levels. This kind of scalability is not new in itself, but in Skylake not only the maximum performance has increased, but also the number of available graphics core options.

Thus, the Skylake graphics core can be built from one or several modules, each of which usually includes three sections. Each section combines eight execution units, which handle the bulk of graphics processing, and also contains basic blocks for working with memory and texture samplers. In addition to the execution units grouped into modules, the graphics core also contains a non-modular part responsible for fixed-function geometric transformations and individual multimedia functions.


At the very top level of the hierarchy, the Skylake graphics core is very similar to the one implemented in Haswell. However, with the introduction of the new microarchitecture, Intel somewhat revised the internal structure of the graphics core (strictly speaking, this happened back in Broadwell): now each GPU section has 8 execution units rather than 10, while a graphics module combines three sections rather than two. As a result, the execution units are better provisioned with cache and texture units, of which there are now one and a half times more per module, and the number of execution units in the various versions of the new graphics core has become a multiple of 24. If you delve into the details, it is not difficult to find other noticeable changes.

For example, the non-modular part is now placed in a separate power domain, which allows its frequency to be set, and the unit to be sent to sleep, separately from the execution units. This means that, for example, when working with Quick Sync technology, which is implemented precisely by the non-modular blocks, the main part of the GPU can be disconnected from the power rails in order to reduce power consumption. In addition, independent control of the frequency of the non-modular part allows its performance to be better matched to the specific needs of the graphics core modules.

In addition, while the Haswell graphics core could be based on only one or two modules, having at its disposal 20 or 40 execution units (for energy-efficient and budget processors, a single module with some sections disabled could be used, giving fewer than 20 execution units), Skylake can use from one to three modules with anywhere from 24 to 72 execution units.

Yes, yes: in addition to the usual GT1/GT2/GT3 configurations, the Skylake processor family has an even more powerful GT4 core, which can boast 72 execution units.
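The numbers follow from the modular structure described above; the sketch below simply restates that arithmetic (the GT1 figure is a simplification, since the entry-level variant is a cut-down single module rather than a full one).

```python
# Execution-unit (EU) counts implied by the modular Skylake GPU layout:
# each module holds 3 sections of 8 EUs, i.e. 24 EUs per module.
EUS_PER_SECTION = 8
SECTIONS_PER_MODULE = 3
EUS_PER_MODULE = EUS_PER_SECTION * SECTIONS_PER_MODULE   # 24

configs = {
    "GT1": 12,                     # cut-down single module (half the EUs disabled)
    "GT2": 1 * EUS_PER_MODULE,     # 24 EUs
    "GT3": 2 * EUS_PER_MODULE,     # 48 EUs
    "GT4": 3 * EUS_PER_MODULE,     # 72 EUs
}
for name, eus in configs.items():
    print(f"{name}: {eus} execution units")
```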



It is also necessary to mention that the GT3 and GT4 core variants can be further enhanced with an eDRAM buffer of 64 or 128 MB, respectively, which gives the GT3e and GT4e modifications. Broadwell processors were offered with only one eDRAM option, 128 MB. In Skylake, this additional buffer not only changed its operating algorithm, becoming a "memory-side cache", but also acquired some configuration flexibility. Its physical design, however, remains the same: a separate 22 nm die mounted on the processor package next to the main chip.



The appearance of a stripped-down eDRAM chip with a capacity of 64 MB in Skylake should expand the scope of application of GT3e graphics. Broadwell and Haswell processors, equipped with an additional buffer, had a high cost and were intended exclusively for high-performance laptops and desktop systems. The smaller eDRAM die allows for more affordable Skylake variants with powerful GPUs, such as those intended for ultrabooks.

But the peak performance of the execution units themselves has not changed in Skylake: each such unit can perform up to 16 32-bit operations per clock. Moreover, it can execute 7 computational threads simultaneously and has 128 32-byte general-purpose registers at its disposal.
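Taken together, those figures give a feel for how much register storage each execution unit carries; the sketch below assumes that the 128 registers are provided per hardware thread, which is how the Gen architecture is usually described.

```python
# Register file size per execution unit, assuming 128 x 32-byte registers per thread.
threads_per_eu = 7
registers_per_thread = 128
bytes_per_register = 32

per_thread_kb = registers_per_thread * bytes_per_register / 1024      # 4 KB
per_eu_kb = threads_per_eu * per_thread_kb                            # 28 KB

print(f"Register file per thread: {per_thread_kb:.0f} KB")
print(f"Register file per EU (7 threads): {per_eu_kb:.0f} KB")
```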



According to currently available data, the Skylake graphics core will exist in seven different modifications, which carry numerical indices from the five-hundred series (a quick check of these figures follows after the list):

HD Graphics 510 – GT1: 12 execution units, performance up to 182.4 GFlops at 950 MHz;
HD Graphics 515 – GT2: 24 execution units, performance up to 384 GFlops at 1 GHz;
HD Graphics 520 – GT2: 24 execution units, performance up to 403.2 GFlops at 1.05 GHz;
HD Graphics 530 – GT2: 24 execution units, performance up to 441.6 GFLOPS at 1.15 GHz;
Iris Graphics 540 – GT3e: 48 execution units, 64 MB eDRAM, performance up to 806.4 GFlops at 1.05 GHz;
Iris Graphics 550 – GT3e: 48 execution units, 64 MB eDRAM, performance up to 844.8 GFLOPS at 1.1 GHz;
Iris Pro Graphics 580 – GT4e: 72 execution units, 128 MB eDRAM, performance up to 1152 GFLOPS at 1 GHz.
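As a sanity check on the list above, peak throughput is simply the EU count multiplied by 16 single-precision operations per clock and by the maximum frequency; the recomputation below reproduces the quoted values.

```python
# Recompute the peak GFLOPS figures: EUs x 16 FP32 ops/clock x frequency (GHz).
variants = [
    ("HD Graphics 510 (GT1)", 12, 0.95),
    ("HD Graphics 515 (GT2)", 24, 1.00),
    ("HD Graphics 520 (GT2)", 24, 1.05),
    ("HD Graphics 530 (GT2)", 24, 1.15),
    ("Iris Graphics 540 (GT3e)", 48, 1.05),
    ("Iris Graphics 550 (GT3e)", 48, 1.10),
    ("Iris Pro Graphics 580 (GT4e)", 72, 1.00),
]
for name, eus, ghz in variants:
    gflops = eus * 16 * ghz
    print(f"{name}: {gflops:.1f} GFLOPS")
# Matches the listed values: 182.4, 384, 403.2, 441.6, 806.4, 844.8 and 1152 GFLOPS.
```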

While increasing the power of the graphics core, Intel took great care to ensure that there is enough memory bandwidth for its needs, even in configurations without additional eDRAM memory. On the one hand, Skylake received an updated memory controller, which can now work with DDR4 SDRAM, whose frequency and bandwidth are noticeably higher than those of DDR3 SDRAM. On the other hand, the GPU gained a new technology called Lossless Render Target Compression. Its essence is that all data exchanged between the GPU and system memory, which also serves as video memory, is compressed beforehand, thereby relieving the bandwidth. The algorithm used is lossless, and the compression ratio can reach 2:1. Even though any compression requires additional computing resources, Intel engineers claim that Lossless Render Target Compression increases the performance of the integrated GPU in real games by 3 to 11 percent.
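To see why such compression matters, it helps to compare the write traffic generated by a render target against the bandwidth of typical system memory. The figures below are illustrative assumptions (a 32-bit 1920x1080 colour buffer redrawn 60 times per second, dual-channel DDR4-2133), not measurements from the article, and they ignore overdraw, blending and texture reads.

```python
# Rough comparison of render-target traffic vs. dual-channel DDR4-2133 bandwidth.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60
frame_mb = width * height * bytes_per_pixel / 1e6            # ~8.3 MB per frame
traffic_gb_s = frame_mb * fps / 1000                          # ~0.5 GB/s of writes alone

# Dual-channel DDR4-2133: 2 channels x 8 bytes x 2133 MT/s.
ddr4_gb_s = 2 * 8 * 2133e6 / 1e9                              # ~34.1 GB/s

print(f"Colour buffer per frame : {frame_mb:.1f} MB")
print(f"Write traffic at 60 fps : {traffic_gb_s:.2f} GB/s")
print(f"DDR4-2133 dual channel  : {ddr4_gb_s:.1f} GB/s (shared with the CPU cores)")
# A 2:1 lossless compression ratio roughly halves that render-target traffic.
```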



Some other improvements in the graphics core also deserve mention. For example, the size of the local cache memory in each GPU module has been increased to 768 KB. Thanks to this, as well as to optimization of the module architecture, the developers achieved an almost twofold improvement in fill rate, which made it possible not only to increase GPU performance with full-screen anti-aliasing enabled, but also to add 16x MSAA to the list of supported modes.

Full support for 4K resolutions has long been one of the main goals for graphics built into Intel processors, and it is with this in mind that Intel keeps increasing GPU performance. But the other part also needs improvement: the display outputs. It is no surprise that, like Broadwell processors, the Skylake graphics core supports 4K output at 60 Hz via DisplayPort 1.2 or Embedded DisplayPort 1.3, at 24 Hz via HDMI 1.4, and at 30 Hz via HDMI 1.4, Intel Wireless Display or the Miracast wireless protocol. But in Skylake, partial support for HDMI 2.0 has been added to this list, through which 4K resolutions with a 60 Hz refresh rate are available. However, to implement this feature an additional DisplayPort to HDMI 2.0 adapter is needed. HDMI 2.0 signal output is also possible via the Thunderbolt 3 interface in systems equipped with the corresponding controller.



Just like before, the GPU of Skylake processors is capable of outputting images to three screens simultaneously.

It is not surprising that, with the growing popularity of new video formats, the Skylake graphics core has expanded its hardware encoding and decoding capabilities. Using the Quick Sync engine, it is now possible to encode and decode content in the H.265/HEVC format with 8-bit colour depth, and with the involvement of the GPU execution units it is also possible to decode H.265/HEVC video with 10-bit colour representation. Added to this is full hardware support for encoding in the JPEG and MJPEG formats.
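As an illustration of how this encode path can be exercised from software, here is a minimal sketch that drives ffmpeg's Quick Sync HEVC encoder from Python. It assumes an ffmpeg build with QSV support is on the PATH, the file names and bitrate are hypothetical, and this is not the tool used in the article.

```python
import subprocess

# Minimal sketch: re-encode a clip to 8-bit HEVC using the Quick Sync encoder
# exposed by ffmpeg as "hevc_qsv". File names and bitrate are illustrative.
cmd = [
    "ffmpeg",
    "-i", "input_1080p_avc.mp4",   # source clip (hypothetical)
    "-c:v", "hevc_qsv",            # hardware HEVC encode via Quick Sync
    "-b:v", "6M",                  # target video bitrate
    "-c:a", "copy",                # leave the audio track untouched
    "output_hevc.mp4",
]
subprocess.run(cmd, check=True)
```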



However, Skylake graphics belong to the new, ninth generation not only because of the changes listed above. The main reason is the significant changes in the supported graphics APIs. At the moment, the GPU of the new processors is compatible with DirectX 12, OpenGL 4.4 and OpenCL 2.0, and later, as the graphics driver improves, future versions of OpenCL 2.x and OpenGL 5.x will be added to this list, as well as support for the low-level Vulkan API. It is also appropriate to mention here that the new GPU implements full memory coherence with the processor, which makes Skylake a real APU: its graphics and computing cores can work simultaneously on the same task using shared data.

Integrated graphics in desktop Skylake

Although the very fact of having an integrated graphics core in processors aimed at an enthusiast audience continues to cause heated debate, Intel is not going to abandon the practice of equipping its CPUs with an integrated GPU. Moreover, the in-house graphics core continues to develop, acquiring new functions and gaining power. However, Intel still continues to artificially limit the performance of the graphics cores that end up in desktop processors. Although the company has developed four modifications of the built-in GPU for Skylake generation processors, only the GT1 and GT2 graphics options made it into desktop products intended for the LGA 1151 platform, that is, the junior modifications with no more than 24 execution units.



This is because the Skylake-S modification of the processor design, which is aimed at desktop applications, exists in only two versions of the semiconductor die, with two or four computing cores and GT2-level graphics. More productive GPU options are reserved exclusively for the Skylake-U and Skylake-H design modifications, intended for ultrabooks and other mobile systems. However, there is a positive side to this. GT2 graphics are gradually taking an increasingly prominent place in desktop processors. Whereas in Haswell generation processors such GPUs were installed exclusively in Core i7/i5/i3, now the HD Graphics 530 graphics core can also be found in Pentium-class processors.



In the following table, we have collected detailed information about those graphics core options that can be found in LGA 1151-version desktop processors available on the market.



An interesting point: in some inexpensive processors, the number of execution units in HD Graphics 530 is reduced to 23. This does not affect performance too much, but it adds some additional differentiation to the dual-core line.

There is not a single model in the Skylake desktop family with a more powerful graphics core than GT2. This means that the fastest desktop integrated graphics can currently be found in last-generation Broadwell processors, where Intel did not skimp on the GT3e core version with additional eDRAM cache.


Skylake has nothing like this in its arsenal, and the graphics core works directly with DDR3L/DDR4 memory. However, the progress compared to the Intel HD Graphics 4600 core used in the older models of the Haswell generation is very noticeable: the number of execution units has increased by 20 percent, the volume of the internal buffers has grown, and in addition the graphics received the memory data compression technology described above. All this should naturally have a positive effect on performance.

How we tested

The purpose of this testing was somewhat different from the tasks we usually set for ourselves. In this material the main character was the Intel HD Graphics 530 core built into the vast majority of processors for the LGA 1151 platform. In our practical tests we tried to answer two questions. First, is the performance of such graphics sufficient to drive at least an entry-level gaming system? Second, we compared the performance of HD Graphics 530 with the integrated graphics cores found in other processors: first of all with Intel HD Graphics 4600 and Intel HD Graphics 4400, which are present in Haswell, and secondly with the integrated graphics cores from AMD found in processors of the A10 and A8 families.

In order for the comparison to take place between options of the same price category, we selected only representatives of the Core i3 series from Intel processors to participate in this testing. It is these processors that can be directly opposed to AMD's APUs without resorting to additional reservations.

Two more somewhat atypical participants were also involved in the testing. First, the Broadwell-generation Core i5-5675C processor. Among all desktop chips, this Intel processor currently has the most powerful graphics core, of the GT3e class. Formally its graphics are called Iris Pro Graphics 6200, and it includes 48 execution units operating at a frequency of 1.1 GHz, backed by an additional 128 MB of eDRAM memory.

Secondly, in the diagrams you will also find the results of the NVIDIA GeForce GT 740 discrete video accelerator with 1 GB of GDDR5 memory. Participation in the tests of this video card is due to the need to obtain some kind of “reference point” for comparing integrated GPUs with more familiar benchmarks. The GeForce GT 740 was tested on a platform built on a Core i3-4370 processor.

As a result, all configurations participating in this study were composed of the following set of hardware components:

Processors:

Intel Core i3-6320 (Skylake, 2 cores + HT, 3.9 GHz, 4 MB L3, HD Graphics 530);
Intel Core i3-6100 (Skylake, 2 cores + HT, 3.7 GHz, 3 MB L3, HD Graphics 530);
Intel Core i5-5675C (Broadwell, 4 cores, 3.1-3.6 GHz, 4 MB L3, 128 MB eDRAM, Iris Pro Graphics 6200);
Intel Core i3-4370 (Haswell, 2 cores + HT, 3.8 GHz, 4 MB L3, HD Graphics 4600);
Intel Core i3-4170 (Haswell, 2 cores + HT, 3.7 GHz, 3 MB L3, HD Graphics 4400);
AMD A10-7870K (Kaveri, 4 cores, 3.9-4.1 GHz, 2 × 2 MB L2, Radeon R7 Series);
AMD A8-7670K (Kaveri, 4 cores, 3.6-3.9 GHz, 2 × 2 MB L2, Radeon R7 Series).

CPU cooler: Noctua NH-U14S.
Motherboards:

ASUS Maximus VIII Ranger (LGA1151, Intel Z170);
ASUS Z97-Pro (LGA1150, Intel Z97);
ASUS A88X-Pro (Socket FM2+, AMD A88X);

Memory:

2 × 8 GB DDR3-1866 SDRAM, 9-11-11-31 (G.Skill F3-1866C9D-16GTX);
2 × 8 GB DDR4-2133 SDRAM, 15-15-15-35 (Corsair Vengeance LPX CMK16GX4M2A2133C15R).

Video card: Palit GT740 OC 1024MB GDDR5 (NVIDIA GeForce GT 740, 1 GB/128-bit GDDR5, 1058/5000 MHz).
Disk subsystem: Kingston HyperX Savage 480 GB (SHSS37A/480G).
Power supply: Corsair RM850i (80 Plus Gold, 850 W).

Testing was performed on the Microsoft Windows 10 Enterprise Build 10586 operating system using the following set of drivers:

AMD Chipset Drivers Crimson Edition 15.12;
AMD Radeon Software Crimson Edition 15.12;
Intel Chipset Driver 10.1.1.8;
Intel Graphics Driver 15.40.14.4352;
Intel Management Engine Interface Driver 11.0.0.1157;
NVIDIA GeForce 361.75 Driver.

3D part performance

To get a preliminary picture of performance, we used the popular synthetic benchmark Futuremark 3DMark.






The picture turns out to be quite clear. The new Intel HD Graphics 530 graphics core offers significantly higher performance than the GPUs built into the Intel Haswell processors aimed at desktop applications. However, the increase is not of a qualitative nature: the result of desktop Skylake is still lower than that of A10- and A8-class AMD APUs. The real star of these tests is the Core i5-5675C with its fundamentally better GT3e-level Iris Pro Graphics 6200. Unfortunately, such solutions simply do not exist among current processors for the LGA 1151 platform.

Let's now turn to the results obtained in popular and modern games that impose quite serious demands on the performance of the graphics subsystem. In testing, we tried to determine whether the Intel HD Graphics 530 is powerful enough to play in FullHD resolution with at least the minimum image quality settings.












The results show that, despite the progress that has occurred, Intel HD Graphics 530 is only suitable for modern games at lower resolutions. Yes, compared to Intel HD Graphics 4600 the new version of the built-in graphics accelerator has become about 30 percent faster, but getting 25-30 frames per second out of desktop Skylake graphics at Full HD is still not possible. In other words, for entry-level gaming systems the more suitable processor is still the AMD A10: its Radeon R7-class graphics core is about 40 percent faster than HD Graphics 530. And do not forget about the existence of Broadwell. Among desktop chips, this particular CPU offers the highest graphics core performance, and it is quite enough even for the latest AAA games.

A separate point in our testing is measuring performance in popular online games, which usually have less stringent requirements for GPU performance.












For most online games, modern integrated graphics offer a sufficient level of performance. Almost everywhere the frame rate at Full HD is such that you can set the picture quality to medium or even high, and in places you can play comfortably on the built-in GPU even with settings close to maximum. The relative picture is no different from what we saw above. The best performance is offered by Broadwell with its integrated Iris Pro Graphics 6200 core. However, processors of this type are relatively expensive: the junior Broadwell model in the LGA 1150 version costs $277, so it is hardly suitable for a budget gaming computer. If you are choosing between an Intel Core i3 and an AMD A10, it is better to pick the "red" offering, which is more capable from the graphics point of view. At the same time, the significant progress happening in Intel GPUs cannot be denied: they are gaining speed at a very noticeable pace, and between the new HD Graphics 530 core and its predecessor HD Graphics 4600 lies a whole gap of 40-50 percent.

Playing video

Let's now check how well modern graphics cores cope with playing video content in common formats. In fact, this is a very important part of the study. Thus, video playback in 4K resolution with high bitrates can often be carried out on general-purpose processor cores only in sufficiently powerful configurations. Therefore, in modern GPUs, developers are trying to add special hardware engines that relieve the load on the computing cores. It must be said that Intel graphics cores are at the forefront of this process - they usually do better with hardware video acceleration than competing GPUs. And even Haswell processors with an Intel HD Graphics 4600 or HD Graphics 4400 graphics core handled playback of video in 4K resolutions, including those encoded in the HEVC format, tolerably well. However, in Intel HD Graphics 530 the video engine has been improved again.

To evaluate the changes that have occurred and compare the performance of different processors when playing video, we traditionally use the DXVA Checker test, which plays video at the highest possible speed and records the resulting decoding speed. Decoding of the video stream was performed using the LAV Filters 0.67.0 and madVR 0.90.3 libraries.



Playing FullHD video in the traditional AVC format does not cause any problems. However, as you can see, the performance of the Intel HD Graphics 530 compared to the Intel HD Graphics 4600 has dropped here. However, in any case, Intel GPUs are noticeably superior in performance when playing video to both the discrete GeForce GT 740 and the latest modifications of the AMD A10.



The advantages of Intel's video engine are even more obvious when it comes to video in 4K resolution. AMD processors give up here - they do not have hardware support for accelerating playback at this resolution. Nevertheless, Intel GPUs from Haswell and Skylake processors produce approximately the same result, which indicates not only that they cope well with regular 4K video, but also that such solutions can display 4K video encoded with 60 frames per second.



If we move on to testing HEVC video playback, it turns out that only Intel graphics cores can decode it in hardware. Neither the GeForce GT 740 nor AMD Kaveri processors support the H.265 format, so in their case decoding is carried out in software, which requires a fairly powerful processor, especially when it comes to 4K resolution.



When it comes to the need to decode 4K HEVC video, the advantages of the Skylake graphics engine are obvious. It is this that has the most complete capabilities when playing this format. This makes it possible to play even videos shot at 60 frames per second without loading the processor’s computing resources.

In other words, it is Skylake graphics that today claim to be an ideal option for use in home theaters and media centers. It is the most omnivorous, and the GT2 core with a good level of performance can be found today even in Pentium-class processors with prices starting from $75.

Energy consumption

One of the advantages of integrated systems, which became the topic of this article, is their lower power consumption and heat dissipation compared to systems equipped with discrete video accelerators. Such platforms are often purchased for reasons of minimizing maintenance costs and find their place in compact cases. Therefore, the issue of power consumption of processors with an integrated graphics core is by no means idle; this parameter can significantly influence the choice of a particular solution.

Considering that in this case processors with fundamentally different thermal packages are forced to take part in testing, we will only touch on the issue of energy consumption when loading exclusively on the graphics core, the frequency of which is practically independent of the maximum TDP restrictions. You can always find more detailed information about the consumption of certain processors under different types of loads in other reviews published on our website.

The following graphs, unless otherwise noted, show the total consumption of systems built on integrated graphics accelerators (without a monitor), measured at the wall outlet into which the power supply of the test system is plugged, that is, the sum of the power consumption of all components involved. The figure automatically includes the efficiency of the power supply itself; however, since the power supply model we use, a Seasonic Platinum SS-760XP2, has an 80 Plus Platinum certificate, its influence should be minimal. The graphics cores were loaded using the FurMark 1.17.0 utility. To correctly assess energy consumption in the various modes, we activated turbo mode and all available energy-saving technologies: C1E, C6, Enhanced Intel SpeedStep and Cool'n'Quiet.
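Because the measurements are taken at the wall, the component-level draw can be estimated by factoring out the power supply efficiency; the sketch below assumes a flat 92 percent efficiency purely for illustration, since the actual efficiency of an 80 Plus Platinum unit varies with load, and the wall readings used are hypothetical.

```python
# Estimate DC-side (component) power from wall-socket readings, assuming a
# fixed PSU efficiency. 0.92 is an illustrative value for an 80 Plus Platinum unit.
def component_power(wall_watts: float, psu_efficiency: float = 0.92) -> float:
    """Approximate power actually delivered to the components."""
    return wall_watts * psu_efficiency

for wall in (30.0, 55.0, 90.0):   # hypothetical wall readings, not measured data
    print(f"{wall:5.1f} W at the outlet -> ~{component_power(wall):5.1f} W at the components")
```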



It is very interesting that the best idle efficiency is achieved by integrated systems built specifically on Skylake generation processors. In this parameter, they are noticeably better not only in comparison with AMD’s offerings, but also than their predecessors – Haswell.



We got approximately the same result with graphics load. The consumption of the Skylake graphics core is noticeably lower than that of the previous generation Intel graphics, not to mention the AMD graphics, which consume twice as much. In other words, processors equipped with an integrated Intel HD Graphics 530 video core are perfect for cost-effective systems.

Conclusions

If the question arises about what the built-in cores of modern mass-produced processors should be like, then you have to face two diametrically opposed opinions. Some users believe that GPUs built into the processor are overkill, and manufacturers are thus forcing the purchase of a completely unnecessary part of their own semiconductor crystal. The other part of the audience, on the contrary, would like to see mass-produced processors with more powerful graphics, which could allow the creation of at least entry-level gaming systems without the use of an external discrete video accelerator. Testing of the new version of Intel processor graphics HD Graphics 530 showed that the manufacturer cannot yet offer either one or the other in desktop CPUs. However, there is movement in both directions, and we are talking about quite active actions.

So, for users who do not want to overpay for integrated graphics in the processor, Intel recently launched a separate P-series of Skylake processors. These processors are not yet completely devoid of an integrated GPU, but contain a simplified GT1 class accelerator, which makes them slightly cheaper than chips with GT2 graphics. At the moment, the range of such processors includes only a couple of models, but, apparently, the matter will not stop there.

As for the supporters of productive on-chip graphics, they too cannot be fully satisfied yet. Although Intel talks about the amazing progress that has happened in the area of integrated GPUs, and that integrated graphics can compete with many discrete graphics cards, all this applies primarily to the mobile market. Desktop processors of the Skylake generation do not yet have any Iris or Iris Pro accelerators, and they have to be content with only the mid-level HD Graphics 530 video core. Yes, such a core has become much faster than the HD Graphics 4600 used in Haswell processors for desktop computers, but still its performance is not sufficient to provide acceptable frame rates in modern games in FullHD resolution.

In other words, for budget gaming systems the more suitable choice continues to be AMD's hybrid A10 processors. Their graphics performance is clearly higher than that of HD Graphics 530. Intel desktop CPUs with the HD Graphics 530 video core are only suitable for not too demanding online games.

However, if your area of interest is not gaming but building an HTPC or media center, then Intel HD Graphics 530 shows itself from a very advantageous side. The GPU of modern Skylake processors implements full hardware decoding of video content in all modern formats and copes well with 4K resolutions. AMD processors cannot offer anything like this, so in this case Skylake processors are the best option. Fortunately, the HD Graphics 530 graphics core can today be found not only in Core-class processors, but also in cheap Pentiums.

10.07.2013

In recent years, Intel has been paying a lot of attention to improving the characteristics and performance of its own integrated graphics, and it has noticeably succeeded. It is this fact that has significantly reduced sales of inexpensive discrete video cards. We decided to check what the new Intel HD Graphics 4600, which replaced the HD 4000 in Haswell processors, is capable of.


If just three to five years ago practically no one was interested in the issue of integrated graphics performance, since it was clear to everyone that it was needed exclusively for working in 2D and very outdated 3D applications, then in recent years the situation has changed a lot. For several years now, Intel has been paying no less, and perhaps even more attention, to improving the performance of its HD Graphics than to improving the performance of processor cores.

And it is producing results. What was once a feeble budget option for those who do not play games has gradually turned into a serious competitor to inexpensive discrete video cards. This has significantly reduced the market share of AMD and NVIDIA solutions, and the former has even revised the organization of its video card lineup, declining to release budget-class solutions in the Radeon HD 7000 family. Officially, AMD says this was done because the company's APUs provide performance similar to budget discrete graphics cards, but it will not openly admit that Intel graphics have also become very competitive among low-end video cards.


As part of our test of the Intel Core i7-4770K processor, we decided to conduct a separate test of the graphics part integrated into the Haswell die, called Intel HD Graphics 4600, and to see what it is capable of. Moreover, in order to adequately assess the efforts of Intel's engineers, we decided to pit it head-to-head against the three latest generations of integrated graphics, in their most powerful versions at that. Separately, it was decided to check how the Intel HD 4600 performs in comparison with the discrete GeForce GT 630 video card. Intrigued? So are we. But before moving on to the tests, let us find out what kind of graphics core is hidden in the Haswell die.

Intel HD Graphics 4600

Intel HD Graphics 4600 is not a completely new design, but an evolutionary development of the architecture that first appeared in the first-generation Core processors based on the Clarkdale and Arrandale cores in January 2010. It was then that Intel abandoned the classic architecture with separate pixel and vertex pipelines in favor of a unified shader architecture. On this basis, regularly improving it, the company's engineers developed all subsequent versions of Intel HD Graphics, which was greatly helped by its modularity, that is, the ability to simply add execution units. Thanks largely to this feature, as well as to process technology improvements and minor architectural refinements, the company releases processors with more powerful graphics every year.


Intel HD Graphics 4600 has received 20 execution units, which in function correspond to stream processors in AMD GPUs and CUDA cores in nVIDIA GPUs. For comparison, the HD 4000 from Ivy Bridge had 16, and the HD 3000, the top graphics for Sandy Bridge, had only 12. The total number of ALUs in the new product is 80, versus 64 in the previous model.

Whatever one may say, at the same frequency the computing power of the HD 4600 is 25 percent higher than that of the HD 4000, which is very good considering that only a little over a year passed between the release of these solutions. The number of rasterization and texturing units, however, remained the same: 2 and 4, respectively. The fact is that ROPs and TMUs are very power-hungry, and for integrated graphics this is a critical point, unlike for desktop cards.
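
For reference, that 25 percent follows directly from the ALU counts quoted above (assuming, as those totals imply, four ALUs per execution unit):

HD 4600: 20 EU x 4 ALU = 80 ALUs
HD 4000: 16 EU x 4 ALU = 64 ALUs
80 / 64 = 1.25, i.e. 25 percent more arithmetic throughput per clock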


The performance of the HD 4600 relative to the HD 4000 was also improved by raising the frequency, though not by much (power consumption got in the way again): up to 1250 megahertz versus 1150. The idle GPU frequency, on the other hand, became noticeably lower, 350 megahertz versus 650, which made Haswell processors more economical under partial loads.

It is difficult, however, to do anything about memory subsystem bandwidth. Like any other integrated graphics, Intel HD Graphics 4600 uses not local memory but system RAM, whose bandwidth must be shared with the processor cores. This seriously hurts the graphics, which often has to handle far larger volumes of data than the CPU cores. The third-level cache, which the HD 4600 shares with the processor cores on equal terms, does not help much here, since its capacity is too small. Therefore, the faster the RAM, the better the integrated graphics will feel. Still, until the tests are done, we will refrain from concluding that memory is the bottleneck holding the HD 4600 back.
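
A rough back-of-the-envelope estimate makes the scale of the problem clear (the figures assume the dual-channel DDR3-1866 configuration used later in our test):

one DDR3-1866 channel: 8 bytes x 1866 MT/s ≈ 14.9 GB/s
two channels: ≈ 29.9 GB/s, shared between the CPU cores and the GPU

A discrete card with GDDR5, by contrast, keeps its entire memory bandwidth to itself.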


Incidentally, Intel does have a solution to the memory bandwidth problem, used in some mobile Haswell processors. A version of the graphics core called Intel Iris Pro Graphics 5200 can use fast eDRAM: a 128-megabyte chip located directly on the processor package. Using it as an L4 cache, Iris Pro can keep critical data there, which helps offset the impact of low RAM bandwidth. It also has noticeably more execution units than the HD 4600 - 40 at once! However, we will not talk about Iris Pro today; that solution deserves a separate article.


Let's return to the HD 4600. There are no significant changes in terms of supported APIs. Like today's best discrete graphics cards, the new Intel graphics supports DirectX 11.1 (Shader Model 5.0), OpenGL 4.0 and OpenCL 1.2. Naturally, there is support for tessellation, HDR, full-screen anti-aliasing and other modern image-quality technologies. Nor should we forget the ability to drive three monitors simultaneously - although the HD 4000 offered that as well.

Thanks to the once again improved hardware video processing unit, QuickSync, Intel HD Graphics 4600 has become even more versatile and productive when working with video content. This applies both to transcoding speed in applications that support QuickSync (at the moment that is only MediaEspresso from Cyberlink) and to watching Ultra HD movies, which the HD 4600 handles easily even at high bitrates. We also note that it gained support for the Motion JPEG and SVC formats, which are gradually gaining popularity.
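
For readers who want to try QuickSync themselves: the transcoding test below uses Cyberlink MediaEspresso, but later builds of the free ffmpeg also gained QSV decoders and encoders. A minimal sketch of a batch transcode, assuming an ffmpeg build with QSV support on the system path and hypothetical file names:

import subprocess

# Decode and re-encode H.264 on the integrated GPU via QuickSync (QSV).
# Requires an ffmpeg build compiled with QSV (libmfx) support.
cmd = [
    "ffmpeg",
    "-hwaccel", "qsv",      # hardware-accelerated decode
    "-c:v", "h264_qsv",     # QSV H.264 decoder
    "-i", "input.mp4",      # hypothetical source file
    "-c:v", "h264_qsv",     # QSV H.264 encoder
    "-b:v", "6M",           # target bitrate
    "output.mp4",
]
subprocess.run(cmd, check=True)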

Theoretical performance calculations

Before moving on to the tests, let's calculate the theoretical performance of three generations of Intel graphics cores - HD 3000, HD 4000 and HD 4600 - and of the discrete GeForce GT 630, which will represent budget discrete video cards. The new HD 4600, like the HD 4000, can perform 16 floating-point operations per execution unit per clock cycle. The older HD 3000 performs only 12, while each GeForce CUDA core handles 2 operations per clock. Simple calculations give the following peak figures:

HD 4600 – 400 GFLOPS
GeForce GT 630 – 311 GFLOPS
HD 4000 – 294 GFLOPS
HD 3000 – 194 GFLOPS

The situation is clearly not in favor of the GeForce. With texturing, however, the picture is completely different. Here the GeForce pulls ahead thanks to its 16 TMUs, but compared to its predecessors the HD 4600 still looks solid. The texture fill rates break down as follows:

GeForce GT 630 – 13 Gtex/s
HD 4600 – 5 Gtex/s
HD 4000 – 4.6 Gtex/s
HD 3000 – 1.35 Gtex/s

In terms of rasterization speed the GeForce again takes the lead, but not by as much:

GeForce GT 630 – 3.2 Gpix/s
HD 4600 – 2.5 Gpix/s
HD 4000 – 2.3 Gpix/s
HD 3000 – 1.35 Gpix/s

We will not dwell on memory bandwidth, since Intel graphics dynamically shares it with the processor cores, while the GeForce GT 630 uses a dedicated array of fast GDDR5. As you can see, judging by the theoretical data, this GeForce will be a difficult opponent for the HD 4600.
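
As a sanity check, all of the figures above can be reproduced from the unit counts and clock speeds. Here is a minimal sketch in Python; the GeForce GT 630 core and shader clocks (0.81 and 1.62 GHz) are assumed typical retail values, and the HD 3000 TMU/ROP counts are inferred from the fill-rate totals quoted above:

# Peak throughput = units x operations per unit per clock x clock (GHz)
gpus = {
    #  name:           (shader units, FLOPs/unit/clk, TMUs, ROPs, shader GHz, core GHz)
    "HD 4600":         (20, 16, 4,  2, 1.25, 1.25),
    "GeForce GT 630":  (96,  2, 16, 4, 1.62, 0.81),  # clocks assumed (typical GT 630)
    "HD 4000":         (16, 16, 4,  2, 1.15, 1.15),
    "HD 3000":         (12, 12, 1,  1, 1.35, 1.35),  # TMU/ROP counts inferred
}

for name, (units, flops, tmus, rops, shader, core) in gpus.items():
    gflops = units * flops * shader   # peak arithmetic rate
    gtex = tmus * core                # peak texture fill rate
    gpix = rops * core                # peak pixel fill rate
    print(f"{name:15} {gflops:6.0f} GFLOPS  {gtex:5.2f} Gtex/s  {gpix:4.2f} Gpix/s")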

Tests

It's time to move on to the most interesting part - the tests. We'll start by comparing the performance of three generations of Intel graphics cores - HD 3000, HD 4000 and HD 4600. In our test, they worked as part of three top Intel processors: Core i7-2700K, Core i7-3770K and Core i7-4770K, respectively. GPU frequencies at maximum load were 1350, 1150 and 1250 megahertz.

The RAM frequency for all three processors was the same, 1866 megahertz, as was the operating mode, dual-channel. All tests in this group used minimum settings and a resolution of 1920x1080 without AA or AF, except for 3DMark, which ran at its standard settings. The DirectX version used is indicated for each test. No tests were run in DirectX 11, as it is not supported by the HD 3000.


Let's start, traditionally, with synthetic tests. The Cloud Gate graphics test from the new 3DMark shows a dramatic difference in performance: the Intel HD 4600 is ahead of the HD 4000 by 42 percent and of the HD 3000 by 156 percent! A great start.


The Unigine Heaven test shows a slightly smaller difference in the performance of Intel graphics of the last two generations: the HD 4600 is 36 percent faster than its predecessor. The difference with HD 3000 is again more than twofold.


The game Crysis 2 showed an even greater increase: HD 4600 is almost one and a half times faster than HD 4000! The gap from HD 3000 is huge – 130 percent!


F1 2011 turned out to be a little less harsh on the older graphics. The difference between the HD 4600 and HD 4000 is "only" 28 percent, and the gap from the HD 3000 is slightly less than twofold. Incidentally, this game is playable even on the oldest graphics in this list, and the newer cores let you raise the settings and get better visuals at the same level of performance.


Metro 2033 rated the HD 4600's superiority over the HD 4000 at a respectable 39 percent, while the HD 3000 showed some "strength", lagging behind by only 66 percent, which by comparison looks almost like a victory for the old-timer. Note, by the way, that in DX10 mode the Intel HD Graphics 4600 nearly reached a playable frame rate; lower the resolution and you can enjoy Metro 2033 without any lag.


Tomb Raider at minimum settings can also be played on the integrated HD 4600. The same cannot be said about its predecessors - HD 4000 and HD 3000, which lagged behind by 42 and a record 131 percent, respectively.

The conclusion from this comparison of the last three generations of Intel HD Graphics is not difficult to draw. The new HD 4600 in Haswell processors really is a big step forward in performance, noticeably ahead of its predecessors, even though the RAM bandwidth remained the same. Most pleasing of all is that it has reached playable frame rates in recent games.

How will the Intel HD 4600 fare against the budget discrete GeForce GT 630, which trails only in peak compute performance but has a noticeable advantage in texturing speed, rasterization and memory bandwidth? The last point is especially critical, considering that we took a full-fledged GeForce GT 630 from ASUS with a 128-bit bus and fast GDDR5 memory. Let's check, starting with synthetics. Note that all tests in this section were run at maximum graphics settings in DirectX 11, at Full HD resolution, with 16x anisotropic filtering forced but without anti-aliasing.


Few people in the editorial office were ready to bet that the HD 4600 could compete on an equal footing with the GT 630, so the results of the synthetic tests came as a surprise: both 3DMark and Unigine Heaven showed a small but real advantage for the latest-generation Intel HD Graphics! An excellent result.


In the gaming tests the situation changed, and the HD 4600 no longer dominates. Nevertheless, it seriously lagged behind the GeForce GT 630 only in Metro 2033, by more than one and a half times. In Battlefield 3, Crysis 2 and F1 2011 the performance difference is not so critical: 17, 8 and 9 percent, respectively. And in two games the Intel HD 4600 proved better than the discrete GT 630, outperforming it in DiRT Showdown by 12 percent and in the new Tomb Raider by 22 percent! Serious figures that go a long way toward offsetting the losses elsewhere.


Well, to top it off, a video transcoding speed test in Cyberlink MediaEspresso 6.7, which lets us evaluate encoding speed with Intel QuickSync technology. As you can see, no comment is really needed: the progress is obvious. The HD 4600 is a quarter faster than the HD 4000 and more than twice as fast as the video processing unit of the old HD 3000.

Conclusions

Against the background of the Core i7-4770K's processor part, which did not perform particularly well in our tests, the new Intel HD Graphics 4600 core became exactly the part that lets us say with confidence that Haswell really is a noticeable step forward compared to the previous generation of Core processors. The HD 4600 easily outperformed the previous-generation HD 4000 in all tests, with an average gap of a solid 39 percent! The HD 3000 from the year before last looks completely dull next to the new product, losing by more than a factor of two on average. These results are an excellent demonstration that Intel's engineers earn their bread for good reason.

And most importantly, Intel's integrated graphics has in this generation grown to a level where it is quite possible to play not only older titles but also the latest games. Moreover, the arrival of the HD 4600 has made buying an inexpensive discrete video card such as the GeForce GT 630 completely pointless. As our tests have shown, their performance is extremely close, and installing such a card brings only increased power consumption and noise, with no real benefit. Do not forget, either, that the entire top-end Core i7-4770K processor, including the HD 4600 graphics core and all other units, consumes 84 watts, while a GT 630 paired with even a modest dual-core CPU will require at least 130 watts.


So forget about cheap video cards, even the latest-generation ones, and feel free to retire outdated solutions as well, even if it is a solid-looking GeForce 8800 GTS 320 or Radeon HD 3850: they will not be able to significantly outperform the Intel HD Graphics 4600, while consuming incomparably more power. And one more important point: this powerful integrated graphics can be had not only in the top-end Core i7-4770K that we tested, but also in much more affordable Haswell processors, including Core i5 and, in the future, Core i3.