Card test 750. 3DMark Vantage: Feature tests. Temperature and noise level

The author has admitted more than once that he finds studying and reviewing budget-segment components far more interesting than working with top-level hardware. The reasons have never been a secret either: budget video cards, processors and other devices always offer a reviewer more intrigue and more excitement than flagships and similar models.

Why? It is simple: when you buy, say, a video card from the top price segment, you get performance sufficient for any current game at any common resolution regardless of your choice. Yes, the price/performance ratio may differ between competitors, there may be some variation in temperatures, noise levels, overclocking potential and so on... but a top card is still a top card, and a flagship is a flagship.

In the budget segment everything is completely different. Here a step of 500-1000 rubles can radically change the gaming experience - and (what is especially interesting!) it is not at all guaranteed that the cheaper video card will be worse, or that the more expensive one will clearly beat the card you originally planned to buy!

This has always been true of budget hardware, at any time. But once you remember that it is 2017 (even if only its last few weeks), budget-class video cards take on a very special significance - and not only for the author, but for most readers as well.

The outgoing year was memorable in many ways: the entire writing staff of the Club of Experts would not have enough fingers to count its main trends. But one of the most noticeable events, which equally affected the authors, the site administration, employees of the DNS store chain and ordinary buyers, was the shortage of "above average" video cards at retail.

The reason for the shortage was the avalanche-like growth of cryptocurrency prices: according to the EXMO exchange, for example, the main currency, Bitcoin, cost about $1,050 per coin at the beginning of April, grew to $4,500 by September and did not slow down after that (today a single coin goes for almost $19,000!).

As a result, a huge number of people who had never before considered this activity a source of income rushed into mining. Video card sales began to resemble the notorious panic buying of 2014: everything that could somehow be used for mining cryptocurrencies was swept clean off the shelves. Even the discounted-goods sections were emptied out - sections that, just six months earlier, particularly image-conscious gamers considered beneath their dignity to visit.

Quite apart from the fact that the very sight of empty shelves in the 21st century was a genuine shock to younger buyers and stirred mixed feelings in the older generation, buying a gaming video card sometimes became simply impossible. Prices, which had already jumped because of the collapse of the national currency, broke through a second ceiling, yet even having the money did not guarantee a purchase! Cards simply did not reach retail stores: wholesale quantities were bought up literally "on the fly" by the owners of large mining farms, often without ever reaching a warehouse - passed along straight from one truck to another.

Of course, video card manufacturers, wholesale suppliers and even retail stores took certain "barrier measures" so that at least some cards would reach gamers. It is worth recalling the limit DNS introduced on the number of video cards sold to a single customer, or the way cards were prioritized for pre-built system units by restricting retail sales.

The situation improved slightly with the appearance of special card series designed purely for mining. In particular, manufacturers, without advertising the fact, moved the Samsung memory chips so beloved by miners to those series, leaving the Hynix, Elpida and Micron chips, which are less effective for crypto mining, in the "gamer" lines. Coupled with increased production volumes, this slightly eased the shortage. Prices have not returned to their "pre-frenzy" level, but at least you can now actually buy card models in a store that previously could not even be ordered.

As a result, by the end of the year a shaky, but still real, equilibrium had settled on the market. Cards are on sale, and to get one you simply pay at the till. But cryptocurrencies are not slowing down: not long ago Bitcoin set yet another all-time record, and a single coin now goes for a million rubles.
Where the pendulum will swing next - towards a cryptocurrency collapse, the end of the card shortage and a return of prices to normal, or towards a second wave of mining fever - the author is careful not to predict.

The topic of today's article is far less provocative. So, what do you do if you are building a PC and need to install at least some temporary video card into it? What do you do if you want to play new games, but post-fever video card prices make your wallet hide deep in your pocket and put up active resistance when found?

Naturally, you look at video cards that, for one reason or another, are of no interest to miners and remained freely available even at the very peak of the cryptocurrency rush.

Meeting the participants

Actually, there can be only two such reasons. Either mining on these video cards is impossible in principle - just as, for example, mining Ethereum is impossible on cards with less than 3 gigabytes of onboard memory - or it is simply not economically viable.

Thus, when mining ZCash, the most powerful video card in this test produces, depending on conditions, from 80 to 100 hashes per second. That is, mining is possible in principle; there are no hardware or software restrictions. But to make a real profit, and to make it in time rather than after the exchange rate has changed ten times over, you need not one or two such cards, and not even a dozen. And in that case you also need plenty of motherboards, processors, power supplies, homemade or purchased racks... in short, it is much simpler and more profitable to order several cards of a higher model.
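
For a feel of the scale involved, here is a rough back-of-the-envelope sketch of single-card mining economics; every figure in it (card price, hashrate, payout per hash, power draw, electricity tariff) is an assumed placeholder chosen purely for illustration, not data from this review.

```python
# Rough, purely illustrative payback estimate for a single entry-level card.
# Every figure below is an assumed placeholder, not a measured value.

card_price_rub = 7000          # assumed card price
hashrate_sol_s = 100           # assumed Equihash rate
revenue_per_sol_rub_day = 1.0  # assumed daily revenue per unit of hashrate, rubles
power_draw_w = 75              # assumed draw under mining load, watts
electricity_rub_kwh = 4.0      # assumed electricity tariff, rubles per kWh

daily_revenue = hashrate_sol_s * revenue_per_sol_rub_day
daily_power_cost = power_draw_w / 1000 * 24 * electricity_rub_kwh
daily_profit = daily_revenue - daily_power_cost

print(f"Daily profit: {daily_profit:.1f} rub")
print(f"Payback period: {card_price_rub / daily_profit:.0f} days")
```

With numbers of that order, one card pays for itself only after months, which is exactly why meaningful income requires dozens of cards plus the supporting platforms the paragraph above describes.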

In today's article we are interested in the following video cards:

There are, of course, also the GTX 1050 and RX 560, but with them everything is clear: both are very worthy options and deserve to be bought if you have the opportunity and the desire. However, in today's realities their prices are rapidly approaching the 10,000-ruble mark, which no longer qualifies as budget.

So the following options remain: the market "old-timers", the Radeon RX 460 2GB and GeForce GTX 750 Ti, and the "newcomers", the Radeon RX 550 and GeForce GT 1030. The former have already been examined in dozens of tests and reviews, but with the second pair of contenders for the title of entry-level gaming card, things are not so obvious.

Radeon RX 550. Who are you, Mr. RX?

It is no longer a secret that the Radeon RX 500 series is, although a deep and effective one, still only a restyling of the RX 400 series. The new products overclock better and more efficiently, base frequencies have increased, and the design flaws vendors made in the RX 400 series have been corrected... but if you look only at the paper specifications, the difference between the RX 580 and RX 480, or the RX 570 and RX 470, is hard to spot.

The only genuinely new card in the RX 500 series is the youngest of the family - the RX 550.

In the previous generation a comparable card simply did not exist: in the budget segment, leftovers of earlier generations soldiered on - the Radeon R7 360, based on the Tobago chip (a restyling of Bonaire, first introduced in the Radeon HD 7790), and the Radeon R7 350, which was in effect a renamed Cape Verde chip.

Accordingly, the RX 550 is the first major update to AMD's budget video cards in a very long time. According to the company, this is the card meant to bring all the advantages of the Polaris architecture to the entry-level segment.

In addition to the modern architecture and the final transition of lower-end video cards to GDDR5 memory, the RX 550 offers a 128-bit memory bus, which on paper is a tangible advantage over the competing GeForce GT 1030. However, the number of shader processors and texture units is cut exactly in half compared to the RX 560: 512 and 32, respectively. It is quite obvious that whatever the RX 550's performance relative to its direct competitors, it cannot be compared with its older brother, the RX 560. The gap between these cards promises to be such that even overclocking the junior chip will not let it "compete" with the senior one.

GeForce GT 1030. Minicar, but supercharged?

If Nvidia and AMD have agreed on anything lately, it is the principles on which they build their product lines. When it comes to budget cards, Nvidia likewise went without a modern solution based on its current architecture for an extremely long time.

The Pascal family effectively ended at the GTX 1050, and the entry-level segment was dominated by cards based on the Maxwell architecture - and not even its latest incarnation. If you could not afford the aforementioned GTX 1050, you were offered either a GTX 750 Ti, which at a comparable price is more than twice as slow as the junior Pascal chip, or assorted variations on the GT 740 / GT 730 theme, which were never particularly fast even in their prime.

Naturally, in this context the appearance of a Pascal-based GT 1030 was only a matter of time.

Like the RX 550's chip, GP108 is a "half" of an older solution (in this case, of the full GP107 chip behind the GTX 1050 Ti). It has 384 processors and 24 texture units (versus 768 and 48 in the GTX 1050 Ti) because that is the configuration of a single GPC cluster - the basic structural unit of Pascal-based chips.

In fact, the entire GP108 chip consists of one such cluster. In addition to 384 processors and 24 TMUs, it offers two 32-bit memory controllers, which yields a 64-bit bus. This, again, will be one of the main counter-arguments of Internet commentators, although it has long been known that the Pascal architecture, thanks to its effective compression algorithms, does not need an excessively wide memory bus.

On the other hand, the configuration and even the very structure of GP108 is painfully reminiscent of the GM208, on which the GeForce GT 730 was based. And as we all know, the GT 730 could not come close to the GTX 750 Ti, let alone the GTX 950. Accordingly, we can assume that the gap between the GT 1030 and GTX 1050 will be about the same.

The GT 1030's only other asset is higher clock speeds. The graphics chip runs at a base frequency of 1228 MHz with a rated boost of up to 1468 MHz - and that is before GPU Boost, which, as we know from other Pascal cards, can easily add about 300 MHz to the chip frequency if temperatures allow.

In addition, like the RX 550 in AMD's camp, the GT 1030 marks the final transition of Nvidia's budget cards to GDDR5 memory, which should also help compensate for the "insufficient" bus width. The memory runs at 3000 (6000 effective) MHz, and it is quite possible that faster memory will appear on non-reference versions.
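
For reference, the peak bandwidth these configurations imply follows from standard GDDR5 arithmetic; the sketch below uses only figures quoted in this review (6000 MT/s on a 64-bit bus for the GT 1030, and the RX 550's 1750 MHz, i.e. 7000 MT/s effective, on a 128-bit bus).

```python
# Peak GDDR5 bandwidth = effective transfer rate (MT/s) * bus width in bytes.
def peak_bandwidth_gb_s(effective_mt_s: float, bus_width_bits: int) -> float:
    return effective_mt_s * (bus_width_bits / 8) / 1000

print(peak_bandwidth_gb_s(6000, 64))    # GT 1030: 48.0 GB/s
print(peak_bandwidth_gb_s(7000, 128))   # RX 550:  112.0 GB/s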

Palit GeForce GT 1030 LP

Unlike the aforementioned GT 730, choosing a GeForce GT 1030 for testing or purchase is much easier. It uses one type of memory and one bus width, so the question of "128-bit DDR3 or 64-bit GDDR5" is a thing of the past. When choosing a test sample, the author was guided by just two criteria: active cooling and a reasonable price.

The low-profile option from Palit meets both criteria. The cooler, although tiny and inherited from the GT 730 without any changes, is still active and should handle the 30-watt GP108 chip without problems. As for the price, 5,199 rubles can be called a reasonable compromise. There are cheaper options with passive cooling, but the author would not count on overclocking them effectively. There are also more expensive full-size versions, but spending 5,999 rubles on an entry-level graphics card is an activity strictly for true connoisseurs - at that point the competition is no longer just the RX 550: add a little more and you are looking at an RX 460 2GB!

Packaging and equipment

The video card comes in a compact box designed in the vendor's corporate style. The packaging is instantly recognizable but carries little specific information - which can be put down to the marketing habits of Palit, never a particularly "flashy" brand.

There is also little information on the reverse side - only the minimum system requirements and a list of characteristics.

The bundle is expectedly ascetic: in addition to the card itself, wrapped in a bubble-wrap bag for extra protection (a plus for the vendor, since the low-profile card wobbles noticeably in a box that is too wide for it), you will find only a driver disc and installation instructions inside.

There is no adapter for additional power, since a card with a 30-watt power budget by definition does not need one. There is no adapter for connecting a VGA monitor either, but the GT 1030, like all modern cards, simply does not support that interface.

The only serious complaint is the absence of a low-profile bracket in the kit. Yes, GT 1030s are made low-profile purely to save money, but that saving could have been turned into an advantage by bundling a bracket for installation in Slim Desktop cases. Tellingly, some other vendors do include such a bracket.

Appearance and Design

The Palit card is essentially a variation on the reference GT 1030 design. More precisely, the board fully replicates the reference, while Palit's engineers are responsible only for the cooler, which the new product inherited from the previous model, the GT 730. That is arguably a plus, since the reference GT 1030 featured a much smaller fan, which would certainly have hurt acoustic comfort.

The card's power delivery is two-phase, with one phase each for the graphics chip and the memory. That should be more than enough for the GT 1030, so don't count it as a drawback. The board has pads for one more phase, but it is extremely unlikely they will ever be used by any vendor.

The back of the card carries no significant components; even the memory chips (two of them) sit on the front side under the heatsink. One downside is the warranty seal on one of the mounting screws. While cleaning dust from the heatsink does not require removing it, replacing the thermal paste without voiding the warranty is possible only at a service center.

However, how relevant this circumstance is for the GT 1030 is up to the potential owner to decide.

The set of connectors is identical to the reference GT 1030, and to most GT 1030s in general: one HDMI port and one DVI-D (digital signal only!), which is quite enough to connect any modern monitor. It is even enough to connect a monitor and a TV simultaneously if the PC is used as a media center.

In fact, the GT 1030 hardly needs more: it is extremely difficult to imagine using this card for multi-monitor gaming. A native DisplayPort connector would not hurt, but monitors with that interface are far less common than displays that connect to the video card via HDMI and DVI.

As for VGA - yes, it would be relevant on a budget video card. But no video card based on Pascal or Polaris chips supports the analog interface; that is not the vendor's fault.

In the test system unit, the video card looks like this:

As you can see, the GT 1030 takes up minimal space: it is one expansion slot thick and barely longer than the PCI-e slot itself. Compatibility problems in terms of dimensions are simply impossible. All that remains is to repeat the complaint about the missing low-profile bracket - owners of compact enclosures would find it very useful.

Gigabyte Radeon RX 550 D5

Choosing an RX 550 actually turns out to be even easier than in the case of the GT 1030. All of these cards come with active cooling (even though the 45-watt power consumption at stock would allow a passive heatsink), and that cooling is more than sufficient for this chip.

When choosing an RX 550, you should be guided by a single parameter: the price. The lower, the better. Of course, there are RX 550 versions with 4 gigabytes of onboard memory going for 7-7.5 thousand rubles, but excuse me: buying an RX 550 at the price of an RX 560 - a card that is not 10-15% faster but literally twice as fast - is beyond the pale. Even the author does not have enough of a sense of humor to find a fitting analogy for that situation.

The cheapest version of the RX 550 at the time of writing was the D5 from Gigabyte. 6,199 rubles is still relatively tolerable, and one can imagine a potential owner who simply cannot stretch to an RX 460, let alone an RX 560.

The only question is what the RX 550 offers for the money.

Packaging and equipment

The dimensions and packaging design are virtually identical to other budget (and even not-so-budget) Gigabyte models. The Radeon RX 560 previously tested by the author, for example, came in a similar box.

The informativeness, again traditional for Gigabyte, is excellent: product features, proprietary technologies, interface connectors and other important details are described as fully and clearly as possible.

The bundle does not differ from that of the GT 1030 discussed above: installation instructions, a driver disc and the card itself in a sealed antistatic bag. Nothing superfluous - but a budget video card should above all be affordable, and extra bundle items can push its price up to the level of faster solutions, which is exactly what we see with the RX 550 from a number of other vendors.

The absence of adapters for additional power and for a VGA monitor is explained the same way: the RX 550 needs no extra power even under extreme overclocking, and the card, like all modern graphics accelerators, does not support the analog interface. To connect a VGA monitor you will need an active signal converter.

Appearance and Design

Externally, the RX 550 is no different from other cards in Gigabyte's D5 series - the GTX 1050, for example. The same plastic shroud, the same 90 mm fan with the proprietary impeller, the same dimensions. Considering how well the D5 cooler copes with the GTX 1050 and GTX 1050 Ti, we can expect equally efficient and quiet operation here.

The back of the PCB reveals Gigabyte's own design - though, since reference versions practically never appear in the wild, it would be strange to expect anything else. The board, incidentally, is identical to the one used in the more expensive Gaming versions, so one can immediately hope for good overclocking.

The junior version also inherited the Gaming's VRM, built on a "3+1 phase" scheme. Incidentally, the VRM in the more expensive RX 550 versions from Asus follows the same scheme. Given the RX 550's modest power consumption, three power phases for the GPU are more than enough, so there are no obstacles to overclocking here either.

Also worth noting is the traditional Gigabyte absence of warranty seals on the cooler mounting screws - replacing the thermal paste and cleaning the heatsink of dust is possible without a trip to a service center.

The only obvious obstacle to overclocking is, alas, the stock heatsink. Although its shape, length and width match the heatsinks of other D5-series cards, it turns out to be about one and a half times shorter in height than those of the GTX 1050 and GTX 1050 Ti. Obviously, this will not do cooling efficiency any favors. The question is whether such a "truncated" heatsink will be enough for the RX 550.

The set of connectors is typical for the RX 550 and other Polaris-based cards: DVI-D (digital signal only!), HDMI 2.0b and DisplayPort 1.4. In other words, everything needed to connect any modern display and transmit high-definition content.

Again, there is no option for a direct VGA connection, but that is the fault of neither the vendor nor the card - no card in the GeForce 10 or Radeon 400/500 families supports an analog signal.

Like any card in Gigabyte's D5 series, the RX 550 is very compact and barely exceeds the length of a PCI-e x16 slot. In other words, it is hard to imagine a standard ATX case this card would not fit into. It is also worth noting that the RX 550 takes up less than two slots in height, so installing an expansion card even in the adjacent slot will not starve it of air or cause overheating.

Manli GeForce GTX 750 Ti

Although cards based on Maxwell chips (with the exception of the GT 730 and GT 740) were discontinued long ago, you can still find the GTX 750 and GTX 750 Ti on sale today - and these are not used cards or store returns, but unsold leftover stock being sold on general terms alongside newer products.

Moreover, those terms are so "general" that the cheapest GTX 750 Ti is priced on par with the RX 460 2GB, and fancier versions stand every chance of selling at GTX 1050 prices. The author has personally seen a GTX 750 Ti and a GTX 1050 on the same shelf: the first cost 8 thousand rubles, the second 8.5.

Of course, given the huge performance gap, buying a GTX 750 Ti can hardly be called a rational decision, but it fits the premise of this article: miners lost interest in this card long ago, and it could be bought freely even at the height of the cryptocurrency fever.

Packaging and equipment

Unlike the two video cards discussed above, the Manli product is aimed at OEM system builders, so there is no retail packaging as such. The card comes in a bubble-wrap bag:

The bundle is similar to that of the two previous test participants:

A driver disc and installation instructions - and for the OEM segment nothing more is needed. The absence of a power adapter is explained by the fact that the card does not require additional power; the absence of a VGA adapter, again, by the OEM focus. Retail versions aimed at end customers include both a DVI-to-VGA adapter and a mini-HDMI-to-full-size-HDMI adapter.

Appearance and Design

The card is extremely compact for a GTX 750 Ti - barely longer than a PCI-e x16 slot. Yet despite its size, it boasts the largest heatsink and the largest fan of all the test participants. The blade span is the same 90 mm as on the Gigabyte RX 550, but the profile height is a nearly standard 20 mm, which should have a positive effect on cooling efficiency and, potentially, acoustic comfort.

The PCB design is typical for the GTX 750 Ti - essentially the reference board without any significant changes. The power system retains the "2+1" scheme, which at first glance seems insufficient for a chip that is not especially energy-efficient by today's standards. However, like the reference version, the card has no additional power connector, which means that even overclocked it cannot draw more than 75 watts. Under that constraint, such a power subsystem can be considered quite sufficient.

The set of connectors is also typical for the GTX 750 Ti: DVI-I (digital and analog signal), DVI-D (digital only) and mini-HDMI. Given the reference design with a single-slot mounting bracket, it would be hard to fit a different set of connectors.

Thanks to its age, the GTX 750 Ti is the only card in today's test that supports analog output. The author, however, is in no hurry to call this an advantage. If you really need VGA and are prepared to buy a previous-generation card for its sake, it would be wiser to look for a GTX 950 or 960, which are much faster and, although not entirely successfully, still compete with the GTX 1050 and GTX 1050 Ti.

Like the other test participants, the Manli GTX 750 Ti will cause no problems in any ATX case. Its only drawback is that it occupies slightly more than two slots in height, which can become an issue when installing other expansion cards.

Powercolor RX 460 Red Dragon

The RX 460, like its younger relative the RX 550, today finds itself hostage to its own price and positioning. Yes, the difference between it and the RX 560 is much smaller, and an RX 460 can often be turned into an RX 560 with a BIOS flash - but what is the point if old- and new-generation cards cost the same?

Therefore, the RX 460 2GB should also be chosen by price and by its relationship to its closest competitors, the GTX 1050 and RX 560 2GB. For this review the Powercolor version was chosen, costing 7,199 rubles. Buying, say, the Sapphire version for 7,999 rubles or the Asus one for 8,499 rubles makes no sense when an RX 560 with an additional power connector is available from 7,999 rubles.

Packaging and equipment

The card comes in a rather large box by the standards of the other review participants, designed in the corporate style of the Red Dragon line. The design is catchy and recognizable, but the informativeness lets it down:

There is no useful information about the specific product; all the text concerns AMD's proprietary technologies and system requirements. The box is probably shared with other Powercolor RX 460 versions, which would explain the absence of any card-model details.

The bundle is by now familiar: a driver disc and installation instructions. This version of the RX 460 takes no additional power (which in this case is actually a minus), and an analog signal is not supported - so the corresponding adapters are simply not needed.

Appearance and Design

Compared to other Powercolor models, the card looks very modest. No dragons or mystical symbols - just a simple plastic shroud covering the entire front of the card. Cooling is handled by a 90 mm fan with a 20 mm profile height. The heatsink design is identical to those of the GTX 750 Ti and RX 550: a simple aluminum block with a central thermal column and fins radiating from it.

The printed circuit board is Powercolor's own design, shared by all RX 460 and RX 560 models in the Red Dragon line. The VRM is built on a "2+1 phase" scheme, which is sufficient for cards without an additional power connector: the main limit on overclocking an RX 460 is not VRM overheating but the amount of power available through the PCI-e x16 slot.

It should be noted that there are no warranty seals on the cooler mounting. Maintenance or replacement of that unit is possible without a trip to a service center and without losing the warranty.

As mentioned earlier, there is no connector for additional power, and the edge of the card is generally devoid of any notable elements. Incidentally, the plastic shroud, which looks like an extra stiffening element, is in fact not attached to the PCB or the mounting bracket at all - only to the heatsink.

The set of connectors is standard for Polaris-based cards: DVI-D (digital signal only!), HDMI and DisplayPort - quite enough to connect any modern peripherals, use the PC as a media center or drive several monitors at once.

The Powercolor RX 460 turns out to be the largest of the test participants. Still, it is a full three centimeters shorter than a "cut-down" ATX motherboard, and will therefore fit into any case the motherboard itself fits into. In height the card occupies exactly two expansion slots.

Temperature and noise level

Traditionally, the review begins with a look at the thermal and acoustic characteristics of the cards in question - aspects that sometimes provoke no less debate than gaming performance.

Let's start with temperatures. Measurements were taken in a room with an open window at an air temperature of 24 degrees Celsius; the case fan speed was fixed at maximum. The following values were obtained:

The coolest of the tested cards is, naturally, the GT 1030: its 30-watt power budget keeps temperatures around 60 degrees even with a very compact cooler.

At stock settings the RX 550 can compete with the GT 1030 - in fact it runs even cooler, which is expected: the stock Gigabyte D5 cooler is far more capable than Palit's solution. With overclocking, however, the RX 550's power consumption and heat output rise sharply, and its temperatures under load approach those of the GTX 750 Ti and RX 460.

The GTX 750 Ti, thanks to its fairly large and efficient cooler, takes third place on temperatures. Depending on the load, it stays within 68-73 degrees, which is more than enough for GPU Boost to operate stably.

The RX 460, alas, takes last place, exceeding the GTX 750 Ti's temperatures by 4-5 degrees.

But what about the noise level?

Measurements taken at night in a room with closed windows and no other noise sources showed the following values:

The GTX 750 Ti and RX 550 are the quietest - even overclocked and under FurMark load, the sound of their operation is hard to pick out from the background noise of the system unit.

The GT 1030 nominally takes third place, although in reality it is debatable which card deserves last place. Yes, according to the sound level meter the GT 1030 is quieter than the Powercolor RX 460, but the whine of its low-profile, high-speed fan is in practice clearly audible over the system unit and extremely irritating. Imagine the whine of a mosquito hovering right above your ear - that is roughly it.

The Powercolor RX 460 is a different case entirely. Like the previously reviewed MSI version, it is extremely loud: even at stock settings the fan spins above 2000 rpm, which is a lot for a 90 mm full-height fan. However, the fan noise is free of mechanical or electrical artifacts, and its high level is far less annoying than that of the Palit GT 1030.

Of course, everything said above applies only to the tested samples. Versions of the same cards with different designs and different cooling systems will behave differently.

Frequency model and overclocking

The GT 1030 LP from Palit, being essentially a reference-design card, also shares the reference frequency model: there is no factory overclock, the chip's base frequency is 1228 MHz and the rated boost is 1468 MHz. Since the card's temperatures are nowhere near overheating, GPU Boost raises the chip frequency to 1734 MHz.

The memory, built on Micron chips, also runs at the reference 3000 (6000 effective) MHz.

Memory overclocking was not the most successful - which, to be fair, is fairly typical for Micron. +1000 MHz brings artifacts right on the desktop, up to a driver crash. At +900 the Unigine Heaven test crashes within the first minute. At +820, occasional artifacts appear. The final overclock was +800 MHz on the real clock, for a final effective memory frequency of 7600 MHz.

Interestingly, the GT 1030 benefits more from memory overclocking than from overclocking the graphics chip. For example, the card with overclocked memory scored 770 points in Unigine Heaven, while overclocking the graphics chip raised the result only to 785 points. This is understandable: the 64-bit memory bus genuinely constrains the card, and memory overclocking helps compensate for it.
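
A quick estimate of how much headroom that memory overclock actually buys, using the same GDDR5 arithmetic as before and only the frequencies quoted above:

```python
# GT 1030 memory bandwidth before and after the overclock (64-bit bus).
bus_bytes = 64 / 8                      # 8 bytes per transfer

stock_gb_s = 6000 * bus_bytes / 1000    # 48.0 GB/s at the reference 6000 MT/s
oc_gb_s    = 7600 * bus_bytes / 1000    # 60.8 GB/s at the overclocked 7600 MT/s

print(f"{stock_gb_s} -> {oc_gb_s} GB/s "
      f"(+{100 * (oc_gb_s / stock_gb_s - 1):.0f}% bandwidth)")
```

Roughly a quarter more bandwidth, which matches the observation that the memory overclock pays off more than a chip overclock on this card.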

In addition, the card is clearly constrained by its power budget, which leads to noticeable see-sawing during overclocking: leave the memory at stock and the chip will take +230 MHz; overclock memory and chip together and that figure drops to +170 MHz. Raising the power limit, however, is simply not available on the GT 1030.

The Gigabyte RX 550 D5, although a full non-reference design with an element base that is overkill for the Polaris 12 chip, also lacks a factory overclock - apparently the vendor saved that for the Gaming version. Out of the box the card runs at 1183 MHz on the GPU and 1750 MHz on the onboard memory.

Overclocking the Gigabyte card, though, is much simpler and much more fun. The Elpida memory chips were a slight letdown: at 2000 MHz Unigine Heaven only reaches scene 14 before freezing and the system stops responding; 1900 MHz is stable and brings a noticeable performance gain; at 1950 MHz the test completes, but the final score is lower than at 1900 MHz, which suggests instability. The final value was 1920 MHz.

Overclocking the core to 1500 MHz, alas, failed: the test crashed in the first scene, up to a driver failure and a reset of the frequencies to factory values. The same happened at 1450 MHz. The card ran stably only at 1410 MHz - which is still a huge gain over the stock 1183 MHz.

The Manli GTX 750 Ti carries a small factory overclock: 1033 MHz base (versus the standard 1020 MHz) and a 1111 MHz boost against 1085 MHz for the reference, while GPU Boost raises it further to 1163 MHz.

Overclocking the GTX 750 Ti is also easy and effective. The memory topped out at +500 MHz (a final 1600 MHz), the chip at a stable +200 MHz and 1363 MHz in boost - which, in fact, is the best result among all the GTX 750 Ti cards the author has ever reviewed. Pushing the chip further is no longer stable: 1374 MHz is reachable, but under load the frequency starts dropping to 1350 or even 1337 MHz - the card clearly lacks additional power.

Overclocking an RX 460 without an additional power connector is always a difficult and sometimes thankless task. Even at stock, this card lives right at the limit of what a PCI-e x16 slot can supply. Overclocking only aggravates the situation, and nothing can be done about it: extra power cannot appear out of nowhere.

As a result, the graphics chip could not be overclocked even to 1300 MHz. The stability limit turned out to be 1280 MHz; any further increase meant Unigine Heaven crashing with an error and the system freezing, and sometimes artifacts right on the desktop.

The only bright spot was the Hynix memory: the 2000 MHz cap imposed for the RX 460 at the driver level was reached with ease. In fact there was even some frequency headroom left, but for obvious reasons it could not be put to use.

Test system configuration and testing methodology

  • CPU: ;
  • CPU cooling system: ID-Cooling SE 214X;
  • Thermal interface: Arctic MX-2;
  • Motherboard: Gigabyte AB350-Gaming 3;
  • RAM: GeIL EvoX GEX416GB3200C16DC, 2x8 GB;
  • Disk subsystem: SSD Western Digital WDS240G1G0A + HDD Western Digital WD10EZRX-00A8LB0;
  • Case: ;
  • Power supply: Corsair CX 750M.

All tests were carried out under Windows 10 64-bit, version 1709, with the latest updates as of November 15, 2017. The set of test applications included both synthetic benchmarks run at standard settings and in-game tests.

Synthetic tests were carried out with standard graphics settings. For single-player AAA games, medium graphics settings were selected, corresponding to standard presets; for online games, the maximum available settings were selected.

GeForce cards used driver version 388.59; Radeon cards used 17.11.1. Synthetic tests of the RX 550 were run with driver 17.12.1, since the card refused to complete 3DMark on older versions.

Synthetic tests

Traditionally, the 3DMark (2013) suite opens the synthetic line-up. With this version Futuremark followed modern trends, and its most famous product has gradually turned from a hardcore benchmark for top-end PCs into a universal test system for platforms of varying degrees of mobility. Of its three benchmarks, therefore, only one interests us: Fire Strike, still capable of bringing even premium hardware to its knees.

The 3DMark results are quite expected: the RX 460 is clearly the fastest card and the GT 1030 the slowest, while the RX 550 and GTX 750 Ti share second place.

Next in line is the Unigine Heaven benchmark, which has not been updated in a long time but remains quite demanding of video card performance.

Like all Unigine benchmarks, Heaven clearly favors Nvidia cards. As a result, the GT 1030 fails to catch the RX 550, but the GTX 750 Ti still performs better than the RX 460. Admittedly, this can also be attributed to the excellent overclocking of the GTX 750 Ti and, conversely, the unsuccessful overclocking of the RX 460.

Unigine's latest development, the Superposition benchmark, takes us from fantastical skies to the modest laboratory of a scientist obsessed with an idea capable of overturning the laws of physics. But the smaller scale of the scene does not mean gentler system requirements! Quite the opposite: the second-generation Unigine engine introduces modern post-processing effects and a physics simulation involved directly in the test scene.

In the new, heavier benchmark, the GTX 750 Ti clearly falls behind the RX 460, while the GT 1030, on the contrary, overtakes the RX 550, although its advantage can hardly be called crushing.

Tests in games

Assassin's Creed: Origins - the latest installment of the annual series, in which Ubisoft set out to recreate not just a single city of a particular era but an entire ancient world. For the indescribable atmosphere of mysterious pyramids, majestic palaces, gloomy tombs and cruelty unfolding under the wise gaze of monumental statues, the game can be forgiven almost anything. If you have dreamed of adventures among deserts and oases since school and love Indiana Jones films, you cannot miss this game - especially since the graphics are incredibly good and only enhance the journey through the centuries.

The downside of those graphics is the system requirements, which take a huge leap up from Syndicate, the previous game in the series. Only the RX 460 allows relatively comfortable play at medium settings. With the RX 550 you will have to lower either the settings or the resolution; with the GT 1030 and GTX 750 Ti, both.

Batman: Arkham Knight - the final part of the Rocksteady trilogy, conceived as its most dramatic and tragic installment. And, characteristically, so it became - just not in the intended sense. For all the merits of its plot and graphics, the game shipped so raw and broken that even reviewers had to wait for a pile of patches before results could be shown to the public.

The game is far from new, so all the cards here deliver enough FPS for comfortable play - even the GT 1030, albeit with reservations. The RX 550 and GTX 750 Ti share second place, and the RX 460 leads by a wide margin - just as in the synthetic tests.

The latest version of the most popular team shooter, Counter-Strike, has drifted away from esports towards handing out skins and medals, but the graphics of the updated Source engine and support from Valve keep the project interesting.

Contrary to popular belief, CS:GO is not the most demanding game. If you are not a professional and do not need a stable 200+ FPS under all conditions, you can play even on the GT 1030, and the difference from the faster cards will not be dramatic.

DOOM - a game that, if it did not raise a whole generation of gamers, certainly left its mark on their minds and hearts. One of the pillars of the shooter genre - indeed, of PC gaming itself - unexpectedly returned with thoroughly modern graphics and thoroughly old-school gameplay, sending players and critics into raptures.

DOOM in OpenGL mode is considered Nvidia territory for good reason. Cards with the green logo feel more than confident here, and, interestingly, Pascal-based cards benefit the most. A typical example: the GT 1030 delivers exactly the same performance as the GTX 750 Ti, although judging by the synthetic results and other games it should be slower.

Dota 2 - a game that needs no introduction. The same famous breeding ground of "crabs", the very place where they will teach you to truly love your mother. At the same time it is, SUDDENLY, a full-fledged esports discipline, in no way inferior to Counter-Strike and StarCraft and surpassing both in prize pool size. It runs on the Source engine, like CS:GO, which leaves its mark on the system requirements.

At maximum settings, though, the GT 1030 copes with Dota 2 much worse than the GTX 750 Ti and its direct competitor, the RX 550. Calm farming of creeps and one-on-one duels are still playable, but a clash of several heroes, a push on the base or simply the activation of graphically lavish abilities practically turns the MOBA into a turn-based strategy. The GTX 750 Ti looks much better and confidently beats the RX 550; the RX 460 looks better still, although its lead over the GTX 750 Ti is far from gigantic.

Dragon Age: Inquisition - the third part of the role-playing series from BioWare, which largely rehabilitated the studio after the crushing failure of the Mass Effect trilogy's ending - and that says a lot. Like Battlefield 4, the game was built on the new Frostbite engine, which replaced Unreal Engine, meaning the graphics are as good as the game is large-scale and epic.

The game is far from new, and all tested cards can handle the Frostbite engine version 3.0. It's quite possible to play on the GT 1030, although the RX 550 will cope with the task much better. The younger AMD card cannot catch up with the GTX 750 Ti, but the GTX 750 Ti does not have enough strength to compete with the RX 460.

Fallout 4 - the continuation of one of the most popular role-playing series in this country... alas, once again from the pen of Bethesda Softworks. As always with this studio, the interactive sandbox turned out great, while the spirit of the post-nuclear wasteland, the plot and the substance fared less well. Graphically, though, the game is more than decent, and its hardware requirements are accordingly high.

Alongside DOOM, Fallout 4 is a prime example of an Nvidia-optimized game. Nevertheless, the GT 1030 shows the worst result here; comfortable play will require lowering the settings or the resolution. The RX 550 delivers quite playable FPS and shares second place with the GTX 750 Ti. The RX 460 is doing its best to break into the big league and even has some headroom for raising individual graphics settings.

Far Cry Primal - Ubisoft's boldest experiment... before the release of Assassin's Creed: Origins. A series known for its shooter mechanics no less than for its open world was sent back to a time when a couple of millennia separate the protagonist from the nearest firearm. Surviving in the ancient world among giant predators and equally dangerous bipeds genuinely breathed new life into the series and shook up gameplay that had grown stale. System requirements also grew, for both the graphics and the processor side of the PC.

Once again the GT 1030 demonstrates a dismal result; the gap between it and the RX 550 is enormous. The junior AMD card and the GTX 750 Ti are on par and share second place, while the RX 460 leads both by a similarly large margin.

Hitman (2016) - not so much a reboot of the series as a careful restoration of the original game mechanics that once allowed the series to create a genre of its own. Spacious levels with complex architecture and multiple ways to complete objectives are back, preparation and planning are back, and the work of the artists and designers has reached a whole new level. Along with it, unfortunately, so have the system requirements.

The new Hitman turns out to be far more democratic than Assassin's Creed: Origins or Far Cry Primal. You can play at medium settings even on the GT 1030, and you will not notice much difference compared to the GTX 750 Ti. The RX 550, meanwhile, copes slightly better than both Nvidia cards, and the RX 460 soars somewhere overhead, paying no attention to the competition in the segment below.

Mass Effect: Andromeda - the "supposed continuation" of one of the most iconic role-playing series of recent years, which made millions of people around the world dream once again of conquering space and exploring distant worlds. To save money, the publisher entrusted development to a BioWare division that had never before worked on AAA projects, which inevitably affected the result. Graphically, though, the game is quite good (as long as you don't look at the characters' faces), and it runs on the most current version of the Frostbite engine.

Playing on Frostbite 3.5 is a much harder task. Medium settings are out of reach for the GT 1030, stock or overclocked. The RX 550 and GTX 750 Ti are on par and provide minimally comfortable FPS; lowering the settings slightly will get you a stable 40+ frames. The RX 460 requires no compromises at all - its performance is sufficient for medium settings.

Metro: Last Light - the sequel to one of the most successful shooters created in the post-Soviet space. In addition to very high-tech graphics, the game offers an interesting plot, post-apocalyptic landscapes full of details familiar to every resident of the CIS, a dissection of the closed society of the subway that embodies all the modern "-isms" in their most grotesque and frightening forms, and much more. Unfortunately, the game is very hungry for PC resources and has not shed the technical problems characteristic of the first part's engine.

All the tested cards, including the GT 1030, deliver comfortable FPS at medium graphics settings. The overall balance of power does not change: the RX 550 trails the GTX 750 Ti slightly, which in turn is behind the RX 460. Given the RX 460's less successful overclock and the GTX 750 Ti's luckier one, the gap at stock would be even larger.

Rise of the Tomb Raider - an attempt to return the series to its roots after the conventionally realistic first installment of 2013. Fantastic artifacts, liberties with history, geography and even the heroine's physical capabilities, brisk gameplay and adventures in vivid scenery are all included. The only thing missing is an endless supply of ammunition.

Playing at medium settings on the GT 1030 is still possible, but its obvious lag behind the RX 550 and GTX 750 Ti cannot be ignored - nor, indeed, can the RX 460's lead.

Warface - a multiplayer shooter with medals and paid items from the creators of the Crysis series and the original Far Cry, running on CryEngine 3 but far less demanding on the graphics subsystem than the studio's flagship titles.

As with Metro: Last Light, all the tested cards cope with the game, and even the GT 1030 delivers comfortable FPS. The RX 550, though, handles it noticeably better, and the GTX 750 Ti and RX 460 better still. What is surprising is that those two cards show the same FPS despite fundamentally different "firepower".

War Thunder- a project that has long been awarded the status of the spiritual successor to World of Tanks. Initially, the game was perceived as “WoT with airplanes,” but in the end it turned into an original and distinctive product that deserves attention without any references to the previously released project.

In War Thunder the trend of near-identical results only intensifies. Because of its less successful overclock, the RX 460 does not show the advantage over the GTX 750 Ti that it enjoys at stock, while the GT 1030 and RX 550 finish dead even.

Watch Dogs 2 - the continuation of a relatively new Ubisoft franchise, fixing the shortcomings of the first part and captivating its target audience with the theme of the struggle of "those who are not like everyone else" against "everyone who is". In graphics, as in gameplay, the game is noticeably superior to its predecessor, and the system requirements are simply not comparable.

Watch Dogs 2, like other AAA games, sets a difficult task for all test participants, except for the RX 460. It is curious that the three other cards here are in a fairly tight group: the GTX 750 Ti and RX 550 are vying for second place, the GT 1030 settled in third, providing minimally comfortable FPS, albeit only after overclocking.

World of Tanks - a game that has done more for patriotism and interest in national and world history than all the efforts of the current education system. It was perhaps one of the first MMO projects able to satisfy users tired of the adventures of the long-eared and the green-skinned. At the same time it is hugely popular among history buffs, re-enactors, modelers and the like, which only benefits the player community by diluting the share of schoolchildren and other colorful characters. It is notable for historical accuracy, a realistic damage model and a rich roster of vehicles, while the gameplay keeps a fairly low barrier to entry. Early versions of the game had modest system requirements, but recent innovations have multiplied the load on PC hardware.

The Metro: Last Light situation repeats itself, except that the gap between the RX 460 and GTX 750 Ti is much larger. You can play on all the tested cards, including the GT 1030; the RX 550 copes noticeably better, and the GTX 750 Ti a little better still.

World of Warcraft - the great and terrible MMORPG that has been around longer, perhaps, than some game studios have existed. Its graphics engine has always been notable for excellent optimization: back around patch 1.3, for example, the author of this article managed to play it on a GeForce2 MX 400 in his work computer. Even then the card was an antique, yet it ran the game at 800x600. The situation is similar today: with the right settings you can play tolerably even on Intel HD Graphics 4000, but to max out the settings and get comfortable FPS in every possible scene you need nearly top-end hardware.

The WoW engine once again confirms its reputation: you can play on all the tested video cards. Only the GT 1030 falls out of the peloton, but even it delivers comfortable FPS, sufficient for PvE gameplay and questing. The RX 550, on the other hand, turns out noticeably faster and has a greater safety margin for scenes more demanding than a stroll around the capital city.

Conclusions

As the graphs show, there is still playable FPS to be found outside the sphere of miners' interests. You can game on budget video cards, and not only in online games with modest graphics but also in proper AAA titles. Yes, in the latter only medium settings will be available and the FPS will not impress with sky-high numbers, but playing is still possible.

The only question is which video card to prefer for this.

GeForce GT 1030 - clearly the weakest card in the test. The author will not call this video card bad: it makes sense as a budget solution for a multimedia PC, especially in passively cooled versions at a minimal price. It will handle playback of any HD content and will even let you occasionally run online games or good old titles whose graphics are undemanding by today's standards.

But calling the GT 1030 an entry-level gaming video card - sorry, no. It cannot deliver playable FPS even in every multiplayer game (hello, Dota 2), let alone in heavy new titles!

Radeon RX 550 - this is the mark where gaming cards begin. Yes, it will require plenty of compromises in the settings, but with those made it will handle even heavy AAA titles, not just assorted online RPGs, shooters and MOBAs.

The RX 550 has just one problem: its price. If we are talking about budget versions like the Gigabyte D5, Powercolor Red Dragon or Sapphire Pulse, everything is fair - these cards are worth their money. But buying the more expensive versions, priced on par with the RX 560 2GB, is a decision worthy of a display case in the Chamber of Weights and Measures. Look at how much this card loses to the RX 460, which itself barely overclocks, and imagine what the difference will be with an RX 560, especially one overclocked to 1400 MHz on the chip.

GTX 750 Ti - an anachronism that is still on sale. The card is capable of acceptable performance but, as ever, is let down by price and positioning. Buying it at the price of an RX 460 2GB is obviously pointless, just like buying any product with worse characteristics for the same money. And the RX 550 in most cases offers the same performance at a lower price and with lower power consumption.

The only justifiable reason to buy a previous-generation card is a VGA monitor. But even then you should look not at the GTX 750 Ti but at the GTX 950, which also turns up on sale from time to time, costs a little more and is much faster.

RX 460 2GB - the clear leader of today's test, showing a simply gigantic lead over the RX 550 and GTX 750 Ti in most games and benchmarks. And yet the author is in no hurry to recommend buying it.

The RX 460, too, is let down by its price, and far more obviously than the RX 550. Even the Powercolor version discussed in this article costs 7,199 rubles: that seems modest, barely more than the most expensive RX 550... except that only 800 rubles separate it from an RX 560 with the same two gigabytes, 1024 stream processors and, most importantly, an additional power connector. Something to think about, isn't it?

Nvidia competes not only with other manufacturers but also creates rivals among its own models: it often releases video cards that share the same design, processor and clock speeds but differ in minor parameters. The GTX 750 Ti and GTX 750 are one such pair. The two cards did indeed turn out similar, and both found their competitors among AMD's offerings.

Many users still cannot understand why the company introduced both models when it could have offered a single version of the card. One way or another, both entered the market, earned users' trust and found their buyers.

New chip

Above all, the release of this line marked a change of architecture: this is how the world met the first Maxwell GPU. The GM107 is used in both models, albeit in different configurations. The chip is compact and accordingly cheaper than others, yet it packs a large number of transistors - about 1.87 billion.

The number of computing units has also grown, to 640 cores and 40 texture units; the GK107, for comparison, had 384 and 32, respectively. The number of ROPs remained unchanged at 16, and the memory bus kept its previous width of 128 bits. At first glance these numbers do not say much, but consider that the power consumption of the two chips is almost the same: 60 versus 64 W. That is the purpose of the new chip: more performance within practically the same power budget.

Difference

In a GTX 750 vs GTX 750 Ti comparison, we can start straight away with the technical characteristics. The fact is that the reference versions of these cards are difficult to buy at the moment; the market mostly offers modified versions, so there is no point discussing packaging. But a few words can be said about the stock parameters of both video cards.

As mentioned earlier, both cards are based on the GM107 chip, made on a 28 nm process. The transistor count has grown compared to the previous chip, to 1.87 billion. The core clock frequency of both models is identical at 1.02 GHz, with a maximum boost of 1.08 GHz. The memory runs at 1.35 GHz.

The video memory type is GDDR5. The GTX 750 carries 1 GB, while the older model gets 2 GB; this is the first difference between them. The memory bus is 128 bits wide, so the memory bandwidth of both cards reaches roughly 86 GB/s.
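As a rough check of that figure, here is a minimal sketch of the standard GDDR5 bandwidth estimate; the inputs are the values quoted in the text, and the quad-pumped factor of four is the usual GDDR5 signalling assumption:

```python
# Minimal sketch: theoretical GDDR5 bandwidth from the figures quoted above.
bus_width_bits = 128
memory_clock_hz = 1.35e9                 # 1.35 GHz, as stated in the text
effective_rate = memory_clock_hz * 4     # GDDR5 transfers four bits per clock per pin

bandwidth_gb_s = (bus_width_bits / 8) * effective_rate / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")      # ~86.4 GB/s for both cards
```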

Both accelerators support the latest version of DirectX, which at that time had only just appeared in new models. There are 16 raster operations units. And then the figures diverge again: the lower card has 512 stream processors, the older one 640. The GTX 750 also has fewer texture units, 32 versus 40 in the GTX 750 Ti. The thermal envelope differs as well: the younger version consumes 55 W, the older one 60 W.

Appearance

The external appearance of the GTX 750 Ti and GTX 750 is identical. The reference sample of both cards looks minimalistic and compact. Before us is a single-slot board that needs no additional power, since the chip consumes little energy.

The length of the board was almost 15 centimeters, which is very short compared to previous models. Nvidia chose a small cooler. It does not occupy the entire surface of the board, but is located above the chip itself. On the left side of the cooler there is a power subsystem, which consists of two phases. The fan also turned out to be small. The diameter of the blades is only 60 mm. The radiator, as usual, is made of aluminum. The cooling system is simple: a more powerful cooler is not needed for a heat dissipation of 55-60 W.

Versions from Asus

Since the reference version of the 750 model is difficult to find, comparisons of GeForce GTX 750 vs GTX 750 Ti were often carried out on models from Asus. The manufacturer presents two externally completely different modifications.

The GTX750-PHOC is practically no different from the reference version. At first glance, all that stands out is a redesigned cooling system. The plastic shroud covers more than half of the board's area; it houses a fan with a radiator underneath. The radiator is sun-shaped: a dense core with curved fins radiating outward.

The interface panel of this version is not rich: it has one D-Sub, DVI and HDMI connector each. The printed circuit board itself has not changed. It has two power phases.

The GeForce GTX 750 Ti OC Edition, on the other hand, looks expensive. Although the board itself also resembles the reference design, it is covered by a large cooling system. The cooler consists of a shroud holding two fans; beneath them lies a radiator with neither a copper base nor heat pipes. Overall the system resembles DirectCU technology.

A pleasant surprise was, in addition to the standard connectors, the presence of a separate VGA video output.

Overclocking potential

We have already reviewed the technical specs of the GTX 750 Ti vs GTX 750, but the Asus models differ somewhat from the reference versions. The older modification carries a factory overclock that raises the core frequency to 1072 MHz, with maximum values reaching 1150 MHz. The video memory remains stock and operates at 5400 MHz.

The new Asus GTX 750 is also factory overclocked: the core frequency is raised to 1059 MHz, with a maximum of 1137 MHz, and the memory runs at 5012 MHz. Manual overclocking did not disappoint either. The younger model's core reached 1194 MHz and its memory sped up to 6010 MHz, while the older modification went further: the core climbed to 1207 MHz and the memory to 6300 MHz.

A win-win

The company had spent a long time optimizing chips based on the Kepler architecture, and with the arrival of the new Maxwell version not everything went smoothly at once either. Nevertheless, chips of this architecture are genuinely compact and efficient, and it is on them that the budget models were built first.

This time Nvidia did not reveal all the secrets of how it achieved high performance on this process node. It is known that the manufacturer worked on the balance of components, computing units and so on; the increased performance can now be explained by the larger number of stream processors.

In general, the first generation of Maxwell chips remained somewhat shrouded in secrecy: the company did not say much about them. It is known that the goal was distribution in the professional segment, although desktop cards would also appear. Most likely, this secrecy is connected with the analysis of technologies and process nodes. This has happened at Nvidia before: first a card with a new architecture appears, and only then does the study of the new process node follow.

The battle of GTX 750 Ti vs GTX 750 did not end with the victory of either model. Both fall within the 4-6 thousand ruble price segment, which is exactly where users most often look for an accelerator. Both cards also found worthy opponents: the Radeon R7 260X for the stronger modification and the Radeon R7 260 for the weaker one.

Still, if we consider the head-to-head contest between the GTX 750 Ti and the GTX 750, the first adapter is clearly the winner. But that was the original intent: it is not for nothing that this card received more memory and additional execution units. Overclocking potential is another matter: here, too, it is the older version that shows itself better, although the younger one is not far behind.

Other opponents

Often the older version is pitted against other rivals. This is how the comparison of GTX 660 vs GTX 750 Ti appears. Despite the fact that the latter option is a newer version, users still recommend purchasing the 660th model. The thing is that the main disadvantage of the new product is considered to be an incompletely studied architecture, the absence of SLI and a 128-bit bus. Thus, the GTX 660 shows better performance. Although if you search among the modifications, you may find a more powerful version of the GTX 750 Ti there. Also, gaming tests show a clear superiority of the old card over the new one.

The situation is similar in the GTX 460 vs GTX 750 Ti comparison. The first model is significantly superior to the newcomer, since its 256-bit bus gives it roughly one and a half times the memory bandwidth. The GTX 460 also shows higher performance in all the tests, which, incidentally, is reflected in the price difference: the GTX 750 Ti is priced 3-4 thousand rubles lower.

But in the battle of GTX 650 Ti vs GTX 750 Ti, the new product has a clear advantage: it is 20-25% more powerful. If we talk about the GTX 650 Ti Boost variant, however, the picture reverses and the new product falls behind by 15-20%. Once again the problem is memory bandwidth and the number of execution units.

Rival from AMD

Now the main competitor of the older new product, as mentioned earlier, is the Radeon R7 260X. But over time, a model appeared from AMD that was able to outdo this new product. Thus, in the battle between RX 460 vs GTX 750 Ti, the Radeon version clearly won. This can be seen in a comparison of some of the main indicators of both video cards.

The core frequency of the RX 460 in the reference version is 1090 MHz, while the Nvidia product runs at only 1020 MHz. Memory speed is 7000 versus 5400 MHz. The GTX 750 Ti does consume less power: 60 W versus 75 W. An important indicator affecting performance is memory bandwidth, and here the difference between the RX 460 and the GTX 750 Ti is impressive: 112 GB/s versus 87 GB/s, respectively. The GPU in the AMD card is also more powerful, as the transistor count suggests: 3.0 billion versus 1.87 billion. Incidentally, the Radeon accelerator supports a newer version of DirectX, but this is most likely simply because it was released later than the GTX 750 Ti. Overall, the Radeon RX 460 is significantly superior to the Nvidia newcomer in terms of performance.

Conclusions

Returning to the GTX 750 and GTX 750 Ti, it must be said that both cards turned out quite good for their price segment. They are representatives of budget gaming hardware: they can handle many modern games, perhaps not as smoothly as newer models, but their main advantage is still their cost.

Preface

Almost two years ago, in March 2012, NVIDIA introduced the gaming world to its revolutionary "Kepler" graphics architecture and the first video card based on it, the GeForce GTX 680. At that time NVIDIA was playing catch-up, chasing its eternal competitor's flagship, the AMD Radeon HD 7970. And it must be said that the release of the GeForce GTX 680 was a complete success, restoring the status quo in the upper price segment of video cards. Later, more modest models in performance and cost based on NVIDIA GPUs followed.

A special feature of today’s announcement of the GeForce GTX 750 Ti and GTX 750 is the fact that for the first time NVIDIA is launching the new “Maxwell” graphics architecture with lower-end models, and not with Hi-End solutions, as was the case before. And as it turned out, there are reasons for this.

Let's try to understand them, get acquainted with the first GeForce GTX 750 Ti video card and compare its performance with its competitors.

1. The new Maxwell 1.0 architecture: a course for energy efficiency

So, today NVIDIA officially unveils the first GPU with the new “Maxwell” architecture. The manufacturer especially emphasizes that this is just the first generation of chips based on this architecture. It is produced using the already well-developed 28-nm technological process, and according to preliminary data it will consist of two chips – GM107 and GM108. Codenames ending in 7 and 8 are traditionally assigned to low-end GPUs in the line, which typically give birth to several inexpensive video cards at once. The new products presented today - GeForce GTX 750 and GeForce GTX 750 Ti - are based on the GM107 chip, which is the successor to the GK107.



The second generation of graphics processors based on the Maxwell architecture will be presented in the second half of this year and, presumably, should consist of three chips: GM200, GM204 and GM206. They will be produced on a new 20 nm process node. It is also expected that the second-generation Maxwell architecture will undergo additional changes, but, as in the first generation, the main emphasis will be on improving energy efficiency.

You often hear the opinion, "Who needs this energy efficiency of yours? Better make a high-performance chip!" The reality, however, is that energy efficiency is the key to high performance. The saved watts can be exchanged for higher frequencies or a larger number of execution units. For example, if at equal performance chip A consumes 300 watts and chip N consumes 150 watts, then the manufacturer of chip N can release a version with more execution units, raise frequencies and significantly outperform chip A, which has already reached the limits of a reasonable thermal and power envelope and has no reserves left for growth. In addition, lower power consumption at equal performance allows a video card with better consumer characteristics: a less powerful power supply and cooling system can be used, which has a positive effect on reliability and noise levels.
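To make the trade-off concrete, here is a tiny sketch using the text's hypothetical chips A and N; the linear power-to-performance scaling is an optimistic simplifying assumption, not a claim about real silicon:

```python
# Hypothetical chips A and N from the paragraph above: equal performance,
# different power draw. The saved watts become headroom for more units or MHz.
PERF = 100.0                       # arbitrary performance units
POWER_A, POWER_N = 300.0, 150.0    # watts

perf_per_watt_a = PERF / POWER_A   # ~0.33
perf_per_watt_n = PERF / POWER_N   # ~0.67
print(perf_per_watt_a, perf_per_watt_n)

# If chip N scaled linearly up to chip A's 300 W budget (a simplification),
# it could in principle roughly double its performance.
print(perf_per_watt_n * POWER_A)   # 200.0
```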

The fight for energy efficiency has been going on invisibly for many years, but in recent years, with the slowdown in the pace of transition to new standards in semiconductor production, it has begun to come to the fore. When there is no hope for a quick transition to a more “fine” technological process, we have to look for reserves for increasing the performance of our top solutions by increasing frequencies, which is impossible without architecture optimizations aimed at increasing energy efficiency. And the transition to a new technological process is no longer a panacea, for the simple reason that the packaging density of transistors is growing faster than the specific consumption per transistor is decreasing. Consequently, if the architecture is not improved in energy efficiency, the heat dissipation per square millimeter of the chip will increase, and will very quickly become a factor limiting the growth of clock speeds, and in turn, performance.

NVIDIA says Maxwell was designed to achieve maximum performance per watt. Considering that Maxwell was designed primarily for the future 20 nm process node, there is nothing surprising in such statements. Two years ago Intel introduced its Ivy Bridge processors on a 22 nm process; the key difference of that node was "3D tri-gate" technology, that is, placing transistors not in a plane but in volume, making them conditionally three-dimensional. TSMC plans a similar transition to 3D transistors for its 20 nm node. This will significantly increase transistor packing density, but heat release per unit area will grow along with it. Therefore, if on the current 28 nm node Maxwell-based solutions demonstrate outstanding efficiency and low heating, then with the transition to 20 nm and the release of more powerful chips using many times more transistors these indicators may become merely ordinary.

Now that it is clear why NVIDIA tried to significantly increase the energy efficiency of the new Maxwell 1.0 architecture, it is time to see how well it succeeded. First, let's look at the general block diagram of the GM107 chip:



At first glance there are no fundamental changes relative to the previous Kepler architecture. The diagram shows the already familiar GigaThread Engine, the second-level cache, the raster operations units and a GPC containing five SMMs (Maxwell SM). But the devil, as usual, is in the details. Since the main emphasis in Maxwell's development was a significant increase in energy efficiency, all changes are in one way or another subordinated to this goal. The second-level cache has grown eightfold compared to the GK107, from 256 KB to 2 MB. This significantly increases the amount of data cached for both reading and writing, which means the memory controller is used less often and the needed data is more likely to be found in cache. In this way a decision aimed at energy efficiency, so disliked by many overclocking enthusiasts, improves not only that same energy efficiency but also the overall performance of the chip.

The main differences between Maxwell and Kepler lie deeper, in the new SM, now renamed from SMX to SMM. The GM107 has a single GPC, as is usual for low-end solutions, but the number of SMMs in it reaches five; in Kepler, recall, there were no more than three SMX per GPC. This arrangement saves on control logic, of which less is now required per SM. But the main changes, as mentioned above, affected the SMMs themselves.


"Divide and conquer": this, apparently, was the motto under which the SMM concept was designed. Let's list the main differences from Kepler:

The cache system has been redesigned. In Kepler, the 64 KB block was divided between the first level cache and shared memory, and the texture cache was a separate array. In Maxwell, shared memory completely and solely occupies 64 KB of memory, but the texture cache and the first level cache compete with each other for the resources of one memory array.
Unlike Kepler, where virtually all resources within an SMX were shared, the SMM is divided into several groups of units, with control logic tightly tied to its array of execution units. This made it possible to save significantly on internal interconnects, simplify the control logic and reduce energy consumption.
Like Kepler, one SMM contains 4 Warp Schedulers, but instead of managing a single array of 192 SPs, they now manage four separate arrays of 32 SPs per Warp Scheduler. Consequently, the number of SPs in one SMM was reduced to 128. But thanks to optimizations of the control logic and of the SPs themselves, the efficiency of each SP grew by approximately 35%. Thus the performance of one SMM is only slightly inferior to that of one SMX, while it consumes almost half as much energy and is built from fewer transistors. The register file has the same aggregate size as in Kepler, 65536 32-bit entries, but is likewise divided into 4 blocks of 16384 entries.




Each Warp Scheduler now has an instruction buffer, an intermediate link between the instruction cache and the Warp Scheduler itself. This also improves performance while reducing overall energy consumption.
The array of texture units has been halved compared to Kepler, from 16 per SMX to 8 per SMM. These 8 texture units are split into two quads, and each quad of 4 units has its own texture cache and first-level cache, which, recall, now share one memory array. One quad of texture units is shared between two processing blocks of 32 SPs each.

It is clear that despite the external similarity to Kepler, Maxwell differs from the previous architecture in many ways. A large number of blocks have been reworked, the number of on-chip interconnects has been reduced, large blocks have been split into smaller ones, a strict hierarchy has been built and the execution units have been bound to their resources. According to NVIDIA, this doubled the energy efficiency of the new architecture and increased the utilization of the execution units. Indeed, the GM107 in its older incarnation, the GeForce GTX 750 Ti, has fewer execution units and lower peak theoretical performance than the GeForce GTX 650 Ti, yet surpasses it in real performance. At the same time, the new product consumes almost half as much as its predecessor: 60 watts versus 110 watts. And if so, that is a really great result! Kepler in its day demonstrated a very high level of energy efficiency, and doubling that figure within one process node was a very difficult task.
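A quick back-of-the-envelope check of that claim, using only the TDP figures quoted above and the text's assumption of roughly equal real-world performance:

```python
# Rough estimate of the efficiency gain, assuming approximately equal gaming
# performance for the GTX 750 Ti and GTX 650 Ti (as the text asserts).
tdp_650ti = 110.0   # watts
tdp_750ti = 60.0    # watts

print(f"~{tdp_650ti / tdp_750ti:.1f}x performance per watt")   # ~1.8x
```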

Now is the time to move from theory to practice, and test NVIDIA's claims in practice.

2. Review of the NVIDIA GeForce GTX 750 Ti video card

technical specifications and recommended cost

Technical characteristics and recommended prices of NVIDIA GeForce GTX 750 Ti and GTX 750 video cards are shown in the table in comparison with the reference versions NVIDIA GeForce GTX 650 Ti BOOST, NVIDIA GeForce GTX 650 Ti and AMD Radeon R7 260X:




PCB design and features

The reference version of the NVIDIA GeForce GTX 750 Ti video card is only 145 mm long, and in this respect it is no different from its predecessor GeForce GTX 650 Ti:


But the thickness of the video card is slightly smaller and is 34 mm:


There are also no differences in terms of outputs: DVI-I and DVI-D (both Dual-Link) and mini HDMI versions 1.4a:


The six-pin additional power connector disappeared from the printed circuit board, although the contact pad for it remained:


This is not surprising, because the declared power of the NVIDIA GeForce GTX 750 Ti is only 60 watts versus the previous 110 watts for the GTX 650 Ti, which means that such a video card will have enough power supplied via the PCI-Express connector of the motherboard (75 watts). The recommended power supply for a system with one such video card is only 300 watts. Operation in SLI modes is not supported for the NVIDIA GeForce GTX 750 Ti.

NVIDIA engineers did not overcomplicate things and simply reused the printed circuit board of the GeForce GTX 650 Ti for the GTX 750 Ti:


Let us recall that a three-phase power system is used here, two of which go to the graphics processor, and one to the memory and power circuits:


The GM107 "Maxwell" GPU die has an area of only 148 sq. mm, and its substrate does not have a protective frame. However, production video cards will probably be equipped with this extremely useful detail, as was (and still is) the case with the GTX 650 Ti:


The microcircuit belongs to revision A2 and, judging by the markings, was released in the 49th week of 2013 (early December). The base frequency of the GPU in 3D mode is 1020 MHz, and in boost mode it can reach 1085 MHz. However, according to monitoring data, the GPU frequency reached 1163 MHz. The voltage turned out to be 1.168 V, but it is likely that serial products can operate at other voltages. We add that when switching to 2D mode, the GPU frequency drops to 135 MHz instead of the previous 324 MHz on the GeForce GTX 650 Ti, and the voltage drops to 0.95 V.

The ASIC quality of our GeForce GTX 750 Ti processor turned out to be 74.0%:


The NVIDIA GeForce GTX 750 Ti is equipped with two gigabytes of GDDR5 video memory in FCBGA packaging, manufactured by SK Hynix (marking H5GC4H24MFR-T2C):


The theoretical effective frequency of such chips in 3D mode is 5000 MHz at a voltage of 1.35 V or 6000 MHz at a voltage of 1.5 V. In the case of the GeForce GTX 750 Ti, most likely, the second option is used, since the frequency is set at around 5400 MHz, and the theoretical bandwidth with a 128-bit memory bus width is 86.4 GB/sec. That is, in terms of video memory, the GeForce GTX 750 Ti has no differences from the GeForce GTX 650 Ti.

The latest version of the GPU-Z utility available at the time of writing this article is already familiar with the characteristics of the GeForce GTX 750 Ti:


And it is even capable of reading the BIOS of this video card, which we traditionally attached to the review.

cooling system - efficiency and noise level

The reference version of the NVIDIA GeForce GTX 750 Ti is equipped with an extremely simple cooling system, consisting of a small aluminum radiator and a plastic fan installed above it:


The fan is secured with four screws directly to the radiator and thanks to this it can be removed quite easily:


It turned out to be a 60 mm model (actual diameter 55 mm) on a ball bearing FA06010H12BNA from Cooler Master:


No PWM control, of course. The speed is changed only by voltage. There is no speed monitoring.

To check the temperature conditions of the NVIDIA GeForce GTX 750 Ti video card as a load, we used five test cycles of the very resource-intensive game Aliens vs. Predator (2010) with maximum graphics quality in a resolution of 2560x1440 pixels with anisotropic filtering at 16x level, but without activating MSAA anti-aliasing:



To monitor temperatures and all other parameters, MSI Afterburner version 3.0.0 beta 18 and GPU-Z utility version 0.7.7 were used. All tests were carried out in a closed system unit case, the configuration of which you can see in the next section of the article, at an average room temperature of about 25 degrees Celsius.

Despite the simplicity of the cooler of the reference NVIDIA GeForce GTX 750 Ti, the heat dissipation level of this video card is so modest that even an aluminum blank with a small fan was enough to not only keep the GPU temperature within 70 degrees Celsius, but also work quite comfortably in terms of noise level:



Auto mode


With the maximum fan speed manually set, the GPU temperature drops by 8 degrees Celsius to a final 60 degrees Celsius:



Maximum speed


As for the noise level, it is really low. In automatic adjustment mode, the NVIDIA GeForce GTX 750 Ti cooler barely stands out against the background of a quiet system unit.

overclocking potential

Frankly speaking, due to serious time constraints in preparing the material about the NVIDIA GeForce GTX 750 Ti, we did not have time to fully study its overclocking potential. Nevertheless, without raising the core voltage or setting the cooler fan to maximum speed, the GPU frequency was increased by 135 MHz (+13.2%) and the video memory frequency by 1260 effective megahertz (+23.3%):


The final frequencies of the video card after overclocking were 1155-1220/6660 MHz:


At the same time, according to monitoring data, the GPU frequency in boost mode reached 1300 MHz:



The GPU temperature of the overclocked video card increased by 1 degree Celsius at peak load, and the maximum fan power increased from 49 to 50%. In our opinion, for the first press sample, the NVIDIA GeForce GTX 750 Ti demonstrated very good overclocking.
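The percentages quoted in this section follow directly from the stock and overclocked frequencies; a quick verification:

```python
# Checking the overclocking gains quoted above.
base_gpu, oc_gpu = 1020, 1155      # MHz, base GPU clock before/after
base_mem, oc_mem = 5400, 6660      # MHz, effective memory clock before/after

gpu_gain = (oc_gpu - base_gpu) / base_gpu * 100    # +13.2%
mem_gain = (oc_mem - base_mem) / base_mem * 100    # +23.3%
print(f"GPU +{gpu_gain:.1f}%, memory +{mem_gain:.1f}%")
```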

gallery of serial models of GeForce GTX 750 Ti video cards

Along with the announcement of the GeForce GTX 750 Ti, almost all manufacturers presented their original models. We hope to gradually introduce you to many of them, and today we will present photos of some of them:









GeForce Experience

The new GeForce GTX 750 Ti, of course, is supported by the NVIDIA GeForce Experience utility suite, which was recently updated to version 1.8.2:



GeForce Experience is capable of automatically selecting optimal graphics settings in games in accordance with the system configuration. The utility will also be sure to remind you when a new driver version is released:



In addition, you can also find there brief information about the system...



...and configure the necessary parameters:



However, the most interesting feature, in our opinion, is ShadowPlay gameplay recording:



Using the hardware NVENC H.264 encoder built into GeForce GTX 600 and GTX 700 GPUs, ShadowPlay can save up to 20 minutes of gameplay at 1920x1080 pixels and 60 FPS into a hard drive buffer in MP4 format, which can then be edited or published online.

3. Test configuration, tools and testing methodology

Video card performance testing was carried out on the following system configuration:

Motherboard: Intel Siler DX79SR (Intel X79 Express, LGA 2011, BIOS 0590 dated 07/17/2013);
CPU: Intel Core i7-3970X Extreme Edition 3.5/4.0 GHz (Sandy Bridge-E, C2, 1.1 V, 6x256 KB L2, 15 MB L3);
CPU cooling system: Phanteks PH-TC14PE (2xCorsair AF140, 900 rpm);
Thermal interface: ARCTIC MX-4;
Video cards:

HIS Radeon R9 270 iPower IceQ X² Boost Clock 2GB 952/5600 MHz;
MSI GeForce GTX 650 Ti BOOST Twin Frozr III 2 GB 1033-1098/6008 MHz;
NVIDIA GeForce GTX 750 Ti 2 GB 1020-1085/5400 MHz;
AMD Radeon R7 260X 2 GB 1100/6500 MHz;
ASUS Radeon HD 7790 DirectCU II 1 GB 1075/6400 MHz;

RAM: DDR3 4x8 GB G.SKILL TridentX F3-2133C9Q-32GTX (XMP 2133 MHz, 9-11-11-31, 1.6 V);
System disk: SSD 256 GB Crucial m4 (SATA-III, CT256M4SSD2, BIOS v0009);
Disk for programs and games: Western Digital VelociRaptor (SATA-II, 300 GB, 10000 rpm, 16 MB, NCQ) in a Scythe Quiet Drive 3.5" box;
Archive disk: Samsung Ecogreen F4 HD204UI (SATA-II, 2 TB, 5400 rpm, 32 MB, NCQ);
Sound card: Auzen X-Fi HomeTheater HD;
Case: Antec Twelve Hundred (front wall - three Noiseblocker NB-Multiframe S-Series MF12-S2 at 1020 rpm; back – two Noiseblocker NB-BlackSilentPRO PL-1 at 1020 rpm; top – standard 200 mm fan at 400 rpm);
Control and monitoring panel: Zalman ZM-MFC3;
Power supply: Corsair AX1200i (1200 W), 120 mm fan;
Monitor: 27" Samsung S27A850D (DVI-I, 2560x1440, 60 Hz).

Even at its recommended price of 5,490 rubles, for which it most likely will not be sold at first, the new GeForce GTX 750 Ti finds itself in very serious company. Its competitor is not the GeForce GTX 650 Ti, which it replaces, but the more powerful GeForce GTX 650 Ti BOOST, costing about 5,000 rubles. We included it in the testing, along with the even more powerful and expensive Radeon R9 270, which we will consider as the next step up in performance. These video cards are represented by products from HIS and MSI at their stock frequencies:




In addition, the testing includes a slightly cheaper Radeon R7 260X 2 GB in the reference version, and an ASUS Radeon HD 7790 DirectCU II 1 GB at slightly higher factory frequencies:




Thus, five video cards take part in today's testing, and the subject of the review was tested not only at stock frequencies but also at the overclock we achieved.

To reduce the dependence of video card performance on platform speed, the 32 nm six-core processor was overclocked to 4.8 GHz using a multiplier of 48, a 100 MHz reference frequency and the Load-Line Calibration function, with the voltage in the motherboard BIOS raised to 1.38 V:



Hyper-Threading technology is activated. At the same time, 32 GB of RAM operated at a frequency of 2.133 GHz with timings 9-11-11-20_CR1 at a voltage of 1.6125 V.

Testing, which began on February 14, 2014, was conducted under the Microsoft Windows 7 Ultimate x64 SP1 operating system with all critical updates as of that date and the following drivers installed:

motherboard chipset Intel Chipset Drivers – 9.4.4.1006 WHQL dated 09/21/2013;
DirectX End-User Runtimes libraries, release date: November 30, 2010;
Video card drivers for AMD GPUs – AMD Catalyst 14.1 Beta 1.6 (13.350.1005.0) dated 12/18/2013;
video card drivers for NVIDIA GPUs – GeForce 334.69 Beta dated 01/19/2014.

Given the modest performance of the video cards tested today, they were only tested at a resolution of 1920x1080 pixels. For the tests, two graphics quality modes were used: “Quality + AF16x” – the default texture quality in the drivers with 16x level anisotropic filtering enabled, and “Quality + AF16x + MSAA 4x” with 16x level anisotropic filtering enabled and 4x level full-screen anti-aliasing. In some games, due to the specifics of game engines, other anti-aliasing algorithms were used, which will be indicated further in the methodology and in the diagrams. Anisotropic filtering and full-screen anti-aliasing were enabled directly in the game settings. If these settings were not available in games, then the parameters were changed in the control panel of the Catalyst or GeForce drivers. Vertical synchronization was also forcibly disabled there. Apart from the above, no additional changes were made to the driver settings.

The video cards were tested in two graphics tests and twelve games, updated to the latest versions as of the start date of preparation of the material:

3DMark (2013)(DirectX 9/11) – version 1.2.250.0, testing in the “Cloud Gate”, “Fire Strike” and “Fire Strike Extreme” scenes;
Unigine Valley Bench(DirectX 11) – version 1.0, maximum quality settings, AF16x and/or MSAA 4x, resolution 1920x1080;
Total War: SHOGUN 2 – Fall of the Samurai (DirectX 11) – version 1.1.0, built-in test (Battle of Sekigahara) at maximum graphics quality settings, with MSAA 8x anti-aliasing used in one of the modes;
Sniper Elite V2 Benchmark(DirectX 11) – version 1.05, used Adrenaline Sniper Elite V2 Benchmark Tool v1.0.0.2 BETA maximum graphics quality settings (“Ultra”), Advanced Shadows: HIGH, Ambient Occlusion: ON, Stereo 3D: OFF, Supersampling: OFF, double sequential test run;
Sleeping Dogs(DirectX 11) – version 1.5, used Adrenaline Action Benchmark Tool v1.0.2.1, maximum graphics quality settings for all points, Hi-Res Textures pack installed, FPS Limiter and V-Sync disabled, double sequential test run with total anti-aliasing at the “Normal” level and at the “High” level;
Hitman: Absolution(DirectX 11) – version 1.0.447.0, built-in test with graphics quality settings at “Ultra”, tessellation, FXAA and global illumination enabled.
Crysis 3(DirectX 11) – version 1.2.0.1000, all graphics quality settings to maximum, blur level – medium, glare on, modes with FXAA and MSAA4x anti-aliasing, double sequential pass of a scripted scene from the beginning of the “Swamp” mission lasting 110 seconds;
Tomb Raider (2013)(DirectX 11) – version 1.1.748.0, used Adrenaline Action Benchmark Tool, quality settings at “Ultra” level, V-Sync disabled, modes with FXAA and 2xSSAA anti-aliasing, TressFX technology activated, double sequential pass of the test built into the game;
BioShock Infinite(DirectX 11) – version 1.1.24.21018, used Adrenaline Action Benchmark Tool with “High” and “Ultra” quality settings, double run of the test built into the game;
Metro: Last Light(DirectX 11) – version 1.0.0.15, the test built into the game was used, graphics quality and tessellation settings were set to “High”, Advanced PhysX technology was turned off, tests with and without SSAA anti-aliasing, double sequential pass of the “D6” scene.
GRID 2(DirectX 11) – version 1.0.85.8679, the test built into the game was used, graphics quality settings were set to the maximum level in all positions, tests with and without MSAA4x anti-aliasing, eight cars on the Chicago track;
Company of Heroes 2(DirectX 11) – version 3.0.0.12358, double sequential run of the test built into the game with maximum graphics quality and physical effects settings;
Batman: Arkham Origins(DirectX 11) – version 1.0 update 8, quality settings at “Ultra” level, V-Sync disabled, all effects activated, all “DX11 Enhanced” functions enabled, Hardware Accelerated PhysX = Normal, double sequential pass of the test built into the game;
Battlefield 4(DirectX 11) – version 1.4, all graphics quality settings are set to “Ultra”, double sequential playthrough of a scripted scene from the beginning of the mission “TASHGAR” lasting 110 seconds;

As you can see, in some games the maximum graphics quality settings were not used in order to maintain FPS at a level at least acceptable for the game.

If games allowed recording the minimum number of frames per second, this was also reflected in the diagrams. Each test was run twice; the better of the two values was taken as the final result, but only if the difference between them did not exceed 1%. If the deviation between runs exceeded 1%, the test was repeated at least once more to obtain a reliable result.
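For clarity, the run-selection rule can be sketched roughly as follows; run_benchmark is a hypothetical stand-in for launching the actual test, and the exact retry policy beyond "at least once more" is our assumption:

```python
# Sketch of the methodology above: best of two runs if they agree within 1%,
# otherwise keep re-running until two consecutive results converge.
def measure(run_benchmark, tolerance=0.01, max_extra_runs=5):
    results = [run_benchmark(), run_benchmark()]
    for _ in range(max_extra_runs):
        best, worst = max(results[-2:]), min(results[-2:])
        if (best - worst) / best <= tolerance:
            return best
        results.append(run_benchmark())
    return max(results)   # fall back to the best value observed
```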

4. Performance test results and their analysis

In the diagrams, the results of all NVIDIA GeForce video cards are shown in light green, and the results of AMD Radeon are shown in the usual red color. Video cards are arranged from top to bottom in descending order of their recommended cost.

3DMark (2013)


In the first semi-synthetic test, the new NVIDIA GeForce GTX 750 Ti performs quite confidently. It turns out to be slightly faster than the AMD Radeon R7 260X, slightly slower than the MSI GeForce GTX 650 Ti BOOST, and easily outperforms the latter when overclocked. The only thing left ahead is the more expensive Radeon R9 270 in the HIS version.

Unigine Valley Bench

The situation is different in the Unigine Valley test:



While ahead of both video cards based on AMD GPUs, the new NVIDIA GeForce GTX 750 Ti is noticeably behind the GeForce GTX 650 Ti BOOST, and, moreover, cannot catch up with it when overclocked. In turn, the MSI video card in this test demonstrates the same performance as the Radeon R9 270 from HIS.

Total War: SHOGUN 2 – Fall of the Samurai

Total War: SHOGUN 2 – Fall of the Samurai demonstrated a picture that would prove typical for almost the entire test session:



The NVIDIA GeForce GTX 750 Ti at nominal frequencies is slightly ahead of the AMD Radeon R7 260X, and is 8-11% behind the GeForce GTX 650 Ti BOOST in the MSI version. However, by overclocking the new video card to frequencies of 1155/6660 MHz, performance can be raised to the level of the original GeForce GTX 650 Ti BOOST with factory overclocking. Not bad.

Sniper Elite V2 Benchmark



The advantage of the NVIDIA GeForce GTX 750 Ti over the Radeon R7 260X in Sniper Elite V2 is higher than in the previous game and in the mode without anti-aliasing activation reaches 14%. At the same time, with the GeForce GTX 650 Ti BOOST the difference is large, but not in favor of the NVIDIA GeForce GTX 750 Ti. Moreover, in this game, overclocking the new product does not help it get ahead of the BOOST version of the 650 Ti.

Sleeping Dogs

But in Sleeping Dogs this can be done without much difficulty and noise:



We add that the gap between the AMD Radeon R7 260X and the new NVIDIA GeForce GTX 750 Ti here is 4-6%, in the latter's favor.

Hitman: Absolution



In Hitman: Absolution, despite the game engine being more suitable for AMD video cards, the NVIDIA GeForce GTX 750 Ti is not inferior to the Radeon R7 260X. In comparison with the MSI GeForce GTX 650 Ti BOOST, the new product also looks great, since in the nominal operating mode their performance is almost the same, and when overclocked, the GTX 750 Ti even manages to lead.

Crysis 3

Crysis 3 shows us a balance of power between video cards quite typical for today's testing:



The advantage of the NVIDIA GeForce GTX 750 Ti over the Radeon R7 260X at the nominal frequencies of these video cards is negligible, as is the gap between the overclocked GTX 750 Ti and the MSI GeForce GTX 650 Ti BOOST. In general, the performance of the tested video cards in Crysis 3 is disappointingly low.

Tomb Raider (2013)

Tomb Raider is more suitable for video cards based on AMD GPUs, which in this test are on par with equally priced GeForce, and the older R9 270 is in the lead:



When overclocked, the NVIDIA GeForce GTX 750 Ti is ahead of the BOOST version of the GeForce GTX 650 Ti.

BioShock Infinite



Of all the games tested, BioShock Infinite was noted for the fact that the performance increase of the NVIDIA GeForce GTX 750 Ti when overclocked is maximum here and reaches 21.6-23.8% compared to the nominal operating mode of the video card. Due to this, the new product is ahead of the MSI GeForce GTX 650 Ti BOOST in mode without anti-aliasing and is equal to it when MSAA4x is activated.

Metro: Last Light

Let us remind you that we tested Metro: Last Light without the “Advanced PhysX” technology and on simplified graphics quality settings. However, the performance of video cards in this game still leaves much to be desired:



The NVIDIA GeForce GTX 750 Ti is again ahead of the Radeon R7 260X, by 13-22%, trails the MSI GeForce GTX 650 Ti BOOST by 8%, and becomes 4-5% faster than it once overclocked. The excellent scalability of the new product under overclocking is confirmed yet again, despite the 128-bit memory bus and the modest number of shader processors in the GPU.

GRID 2



It is difficult to fight AMD in GRID 2, but the NVIDIA GeForce GTX 750 Ti's loss to the Radeon R7 260X is minimal, and overclocking lifts the new product to second place, behind only the HIS Radeon R9 270.

Company of Heroes 2

However, NVIDIA suffers the most crushing defeat in the game Company of Heroes 2:



Here we note that the NVIDIA GeForce GTX 750 Ti at nominal frequencies is on par with the MSI GeForce GTX 650 Ti BOOST, and it is logical that after overclocking to frequencies of 1155/6660 MHz it easily outstrips it.

Batman: Arkham Origins

The revenge was not long in coming - in Batman: Arkham Origins we can already see the picture familiar to today’s testing:




Battlefield 4

The results of testing video cards in the game Battlefield 4 do not fall out of the general range:



NVIDIA GeForce GTX 750 Ti is 5-7% faster than the Radeon R7 260X, about 6% slower than the MSI GeForce GTX 650 Ti BOOST and 6% faster when overclocked to 1155/6660 MHz.

At the end of the main testing section, there is a final table with test results:



Now let's move on to the summary charts.

5. Summary charts

First of all, let's compare the performance of NVIDIA GeForce GTX 750 Ti 2 GB and AMD Radeon R7 260X 2 GB at the nominal frequencies of these video cards. The results of the previously released R7 260X are taken as a basis, and the performance of the GTX 750 Ti is shown by deviations from it:


The NVIDIA GeForce GTX 750 Ti loses in the anti-aliased modes of BioShock Infinite, GRID 2 and Company of Heroes 2, where the new product's defeat is most significant. At the same time, it wins in Metro: Last Light, Total War: SHOGUN 2, Sniper Elite V2, Sleeping Dogs, the mode without anti-aliasing of Hitman: Absolution, Batman: Arkham Origins and Battlefield 4. In the remaining games the performance of these two video cards is practically identical. If we take the geometric mean across all tests, the NVIDIA GeForce GTX 750 Ti 2 GB is faster than the AMD Radeon R7 260X 2 GB by 5.6% in modes without anti-aliasing and by 2.5% when it is activated.
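The averaging itself is a geometric mean of the per-game FPS ratios; a minimal sketch with placeholder numbers (the ratios below are illustrative, not the article's actual data):

```python
# Geometric mean of per-game performance ratios (GTX 750 Ti / R7 260X).
from math import prod

ratios = [1.14, 1.05, 0.97, 1.08, 1.02]   # hypothetical per-game FPS ratios
geomean = prod(ratios) ** (1 / len(ratios))
print(f"average advantage: {(geomean - 1) * 100:+.1f}%")
```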

We have no doubt that the NVIDIA GeForce GTX 750 Ti is ahead of the GeForce GTX 650 Ti; in our opinion, it is much more interesting to compare the new product with an almost direct price competitor in the form of the GeForce GTX 650 Ti BOOST. There is a huge range of such video cards on the market now and almost all of them come with factory overclocking. We compared the GeForce GTX 750 Ti with the original MSI video card:


The picture here is an almost total loss, with the exception of one test mode in Hitman: Absolution and of Company of Heroes 2; on average across all tests the NVIDIA GeForce GTX 750 Ti is inferior to the original MSI GeForce GTX 650 Ti BOOST by 8.6% in modes without anti-aliasing and by 12.1% when it is activated.

As we remember, our very first sample of the NVIDIA GeForce GTX 750 Ti overclocked very well: the base core frequency was raised by 135 MHz, or 13.2%, and the video memory frequency by 23.3%. Let's see how the performance of this video card scales when overclocked:


It scales excellently, what more can be said? In the worst case performance grows by 8.3%, and in the best case by 23.8%. On average across all games, the performance of the NVIDIA GeForce GTX 750 Ti increases by 12.9% when overclocked in modes without anti-aliasing, and by 14.1% with anti-aliasing enabled. That said, anti-aliased modes are hardly suitable for video cards with a 128-bit memory bus, since performance at such settings is very low.

And since the NVIDIA GeForce GTX 750 Ti overclocks and scales so well when overclocked, it’s time to check how it compares to the original MSI GeForce GTX 650 Ti BOOST at increased GPU and video memory frequencies:


As you can see, the overclocked NVIDIA GeForce GTX 750 Ti managed to turn the lag into an advantage almost everywhere. The exceptions were Sniper Elite V2, the unconquered Crysis 3 and Batman: Arkham Origins. Of course, the GeForce GTX 650 Ti BOOST also overclocks well.

6. Power consumption

The energy consumption of the system with various video cards was measured using the Zalman ZM-MFC3 multifunctional panel, which shows the system consumption “from the outlet” as a whole (excluding the monitor). The measurement was carried out in 2D mode, during normal work in Microsoft Word or “Internet surfing”, as well as in 3D mode. In the latter case, the load was created using four consecutive cycles of the introductory scene of the “Swamp” level from the game Crysis 3 in a resolution of 2560x1440 pixels with maximum graphics quality settings, but without using MSAA anti-aliasing.

Let's compare the power consumption of systems with video cards tested today:



As you would expect, the system with NVIDIA GeForce GTX 750 Ti turned out to be the most economical among all other test participants. The difference with the configuration in which the MSI GeForce GTX 650 Ti BOOST was installed was 72 watts at peak load, although it is clear that not all of this power is accounted for by the video card alone. However, it is obvious that the system with the new video card on the Maxwell GPU is more economical than its predecessors and competitors. When overclocking a video card, power consumption increases by 15 watts at peak load. In 2D mode, all options consume approximately the same electrical power. Let's also add that if you suddenly plan to install the NVIDIA GeForce GTX 750 Ti in a configuration with the same power as our test bench, then the 300-watt power supply recommended by NVIDIA will clearly not be enough even without taking into account overclocking the video card. But more modest configurations should most likely fit within the power limit specified by NVIDIA.

Conclusion

We can say with confidence that the new NVIDIA GeForce GTX 750 Ti based on the Maxwell graphics architecture has fully fulfilled its task. The GM107 turned out to be an extremely efficient chip with a level of performance per watt unsurpassed by its classmates. It's no joke, but in comparison with the GeForce GTX 650 Ti, which has a larger number of shader processors and texture units, power consumption has almost halved, and performance has increased. Moreover, despite the obvious power limitations, our video card overclocked perfectly, reaching and even surpassing the BOOST version of the GeForce GTX 650 Ti in the original MSI version. It is important that low consumption and heat dissipation will allow manufacturers to create compact and quiet video cards, but I would still like to get acquainted with video cards based on the second version of “Maxwell”.

At the same time, it should be noted that the once-customary increase in performance "for the same money" has not yet materialized with the arrival of the new generation and the GeForce GTX 750 Ti in particular. With its recommended price of $149, the new product enters the mid-budget segment of the video card market, which is extremely saturated with original solutions: its classmates the GeForce GTX 650 Ti (including the BOOST versions), the Radeon R7 260X and R9 270, the latest Radeon R7 265, and the old but still-selling Radeon HD 7850. Let's hope that the retail price of the new product does not turn out to be inflated, since energy efficiency alone will not win users over today.

The author's subjective opinion: frankly speaking, the GeForce GTX 750 Ti, like the GeForce GTX 650 Ti, is not a gaming video card in the strict sense, and for its modest performance it turned out, to put it mildly, a little expensive. Ideally, I would like to see the GeForce GTX 750 Ti in the same compact design, with a fanless radiator only, at a cost within $100. Such a video card would definitely be a big success.

We thank the Russian representative office of NVIDIA, and personally Irina Shekhovtsova, for the video card provided for testing.


This review will examine in detail the relationship between the technical specifications and performance of two popular entry-level video cards from Nvidia: the GTX 750 Ti and the GTX 750. These are the manufacturer's first products built on the Maxwell architecture.


The main feature of these devices is their low power consumption; at the same time, their level of performance is excellent for their class.

GTX 750 Ti and GTX 750: which niche do they belong to?

If we compare the technical specifications of the GTX 750 Ti and GTX 750 with other budget solutions, we can conclude that they offer roughly the same level of performance; their main distinguishing feature is high energy efficiency. A power supply of only 300 W is sufficient for normal operation of such a graphics accelerator. The basic modifications of these cards are powered solely through the PCI Express x16 slot; there is no need to connect additional power connectors to the video card's printed circuit board. For this reason, the main niche of these graphics accelerators is compact computer systems with increased requirements for energy efficiency combined with a decent level of graphics performance. Nevertheless, it is not unusual to find these video cards in full-size entry-level desktop computers; these graphics accelerators are aimed squarely at various kinds of entry-level PCs.

GTX 750 Ti and GTX 750: technical specifications

These chips are manufactured on a 28 nm process and are designed to work with 1 or 2 GB of GDDR5 memory. The clock frequencies of the two video cards are identical: 1020-1085 MHz. Thanks to support for the proprietary GPU Boost technology, the chips can change their frequency depending on their temperature and the complexity of the task being performed at a given moment, which reduces the graphics accelerator's power consumption. Both products are based on the Maxwell architecture, whose key difference from previous-generation solutions is redesigned control logic and enlarged caches. Together this allows the manufacturer to achieve a significant increase in performance combined with low power consumption.
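NVIDIA does not disclose the exact GPU Boost algorithm, so the sketch below is only a toy illustration of the idea described above (the clock floats between the base and boost values depending on thermal and power headroom); the thresholds and scaling are invented for the example:

```python
# Toy model only: a clock that floats between base and boost frequencies
# depending on temperature and power headroom. Not NVIDIA's real algorithm.
BASE_MHZ, BOOST_MHZ = 1020, 1085
TEMP_TARGET_C, POWER_LIMIT_W = 80, 60     # assumed limits for illustration

def boost_clock(temp_c, power_w):
    temp_headroom = max(0.0, (TEMP_TARGET_C - temp_c) / TEMP_TARGET_C * 4)
    power_headroom = max(0.0, (POWER_LIMIT_W - power_w) / POWER_LIMIT_W * 4)
    headroom = min(1.0, temp_headroom, power_headroom)
    return BASE_MHZ + (BOOST_MHZ - BASE_MHZ) * headroom

print(boost_clock(60, 45))   # plenty of headroom -> near 1085 MHz
print(boost_clock(79, 59))   # close to the limits -> near 1020 MHz
```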

GTX 750 Ti and GTX 750: Performance Level

Various synthetic tests show decent scores for these products, but the real level of performance can only be assessed in computer games, where the main indicator is the number of frames per second. Below are some figures for popular titles; all of them are valid only for a resolution of 1920x1080 at maximum settings. The frame rate is sufficient for comfortable play in all games except Metro: Last Light, where you will have to lower the image settings until 30 frames per second or more is reached.

GTX 750 Ti and GTX 750: overclocking potential

Comparing the GTX 750 Ti and GTX 750 is most interesting from the point of view of overclocking potential. One important detail is worth noting: there are modifications of the Ti version that, unlike the basic one, carry an additional power connector. Even in its regular version the GTX 750 Ti can be overclocked to around 1150 MHz, giving a performance increase of up to 20%. With a model of the same adapter that has improved power delivery, in some cases a 30% performance increase can be achieved.

GTX 750 Ti and GTX 750: performance comparison with GeForce 560 Ti

In general, it makes little sense to compare models of the generation before last, below the GTX 560 Ti, with the new products: they are inferior both in performance and in energy consumption. If we set the GTX 560 Ti itself against the GTX 750 Ti and GTX 750, the advantage goes to the budget cards with the more modern architecture: the difference in performance is about 3-5%, while energy efficiency differs several times over.

GTX 750 Ti and GTX 750: comparison with GTX 650 Ti and GTX 660

A comparison of the GTX 750 and GTX 650 Ti shows approximately the same level of performance, but energy efficiency is on the side of the newer video card: the difference between the two can reach 30-40 W. A similar situation is observed when comparing the GTX 750 Ti and GTX 660. Their performance is quite comparable, yet the more recent video card's power consumption is considerably lower. Fresh solutions therefore look preferable in the eyes of a potential buyer.

GTX 750 Ti and GTX 750: comparison with more recent graphics accelerators

It is difficult to compare the GTX 750 Ti and GTX 950 graphics cards directly. The price difference between them is considerable, about $50. The more recent card is a slightly cut-down modification of the GTX 960, and the performance gap can reach 40-50%. The good overclocking potential of the newer solution can push these numbers even higher.

GTX 750 Ti and GTX 750: competitors from AMD

A review of these video cards would not be complete if we overlooked solutions from AMD. The direct competitors of the heroes of this article are the Radeon R7 260, R7 260X, R9 270 and R7 265. These graphics cards offer comparable performance at roughly the same cost. The most interesting comparison is between the GTX 750 Ti and the R9 270, since the AMD card is positioned as a mid-range solution. Its GPU frequency is noticeably lower, only 950 MHz, while the Nvidia accelerator, as noted earlier, runs at 1020 to 1085 MHz depending on the complexity of the task and how hot the GPU gets. The AMD solution has a significantly higher thermal envelope and requires an additional power connector. The most telling parameter here is the memory bus width: 256 bits for the R9 270 versus 128 bits for the GTX 750 Ti. As a result, the Nvidia solution is significantly inferior to its main competitor in performance. A similar situation arises with the R7 265: the same 256-bit memory bus leaves the GTX 750 Ti no chance. The presence of additional power and the lower clock speeds also mean that the AMD solutions can be overclocked. From a pure performance standpoint AMD's products look preferable, but for compact personal computers with strict energy-efficiency requirements it is wiser to use the Nvidia solutions.

GTX 750 Ti and GTX 750: cost

Nvidia's recommended price for the basic GTX 750 is $120, although in reality video cards of this model sell for around $135-140. The more advanced modification is priced by the manufacturer at $149 but actually sells for $160-170. If for the GTX 750 this price is quite justified and looks attractive against its competitors, with the GTX 750 Ti things are not so rosy. Given the reduced cost of the competing R9 270, buying the latter looks preferable: its energy efficiency is worse, of course, but in terms of performance it easily outpaces any card in this review. Taking into account its lower clock frequency and greater overclocking headroom, the choice becomes obvious.

GTX 750 Ti and GTX 750: user reviews

Comparing the GTX 750 Ti and GTX 750, users highlight the improved specifications of the first solution: it has better overclocking potential and a larger number of working units, so its performance is higher. Among solutions for compact entry-level computer systems these adapters are unrivalled; there is simply no alternative. For full-size personal computers, the AMD Radeon R9 270 would be preferable: although this chip is a little dated, its speed and price leave no chance even for the GTX 750 Ti. However, that product has long been discontinued, and today only remaining warehouse stock can be found on sale. A better entry-level solution than the GTX 750 Ti is not to be found.

Conclusion

A comparison of video cards such as the GTX 750 Ti and GTX 750 shows the direction of Nvidia's development: a further increase in performance together with a significant reduction in power consumption.

We're starting with Arma 3. This game is much more of a military simulator than a first-person shooter. But regardless of the game's realism (or perhaps because of it), it puts quite a strain on the graphics subsystem when using the most advanced features.



The GeForce GTX 750 Ti is significantly ahead of the Radeon R7 260X, but does not reach the level of the GeForce GTX 650 Ti Boost and the Radeon R7 265. Nvidia's first Maxwell-based card copes well with high detail settings at 1920x1080 with 4x MSAA anti-aliasing active: the frame rate stays above 36 FPS throughout the entire benchmark.


We've identified moderate frame rate spikes across different graphics cards in Arma 3, although they don't impact gameplay in any way. It's interesting that Radeon R7 265 exhibits higher fluctuations than competitors despite the high frame rate. The result was confirmed in several runs.

Assassin's Creed IV: Black Flag

Assassin's Creed: Black Flag is sponsored by Nvidia, so we are extremely interested in how GPUs from that same company will perform in this beautiful game compared with AMD chips.


At normal settings the GeForce GTX 750 Ti lets you play without problems, but it still sits closer to the bottom of the list. The new product is slightly behind the GeForce GTX 650 Ti Boost and noticeably ahead of the GeForce GTX 650 Ti, which it is intended to replace.


We're seeing low frame-time fluctuations in this game. Nevertheless, GeForce GTX 750 Ti showed several jumps. Naturally, it is worth considering that this is the first performance of the Maxwell architecture, and it is quite possible that raw drivers are to blame for this. Let's hope that over time Nvidia engineers will sort out the minor shortcomings.

Battlefield 4

In Battlefield 4 we were able to select the Ultra settings, although this required reducing MSAA to 2x and global illumination to SSAO at 1920x1080 pixels.


The GeForce GTX 750 Ti definitely gets the job done, although its frame rate sometimes drops below 30 FPS. In any case, it reached the finish line almost level with the GeForce GTX 650 Ti Boost.


At this level of detail, three of the cards in the sample suffer from frame-time spikes, and the GeForce GTX 750 Ti is one of them. The other two are the Radeon R7 260X and the Radeon HD 7850 equipped with 1 GB of video memory.

BioShock Infinite

GeForce GTX 750 Ti has a high enough level of performance to play BioShock Infinite on ultra settings with a resolution of 1920x1080 pixels.


In terms of average frame rate the GeForce GTX 750 Ti equals the GeForce GTX 650 Ti Boost, and in terms of the minimum it significantly outpaces it.


The Unreal engine at the heart of BioShock is well optimized, and frame-time fluctuations are very small compared to other games.


