This year AMD finally brought its Navi graphics accelerators to market, originally slated for release in 2018. The chipmaker followed a time-tested strategy, starting with the mid-range Radeon RX 5700 and RX 5700 XT. The newcomers proved worthy rivals to the lower-end GeForce RTX cards and an attractive alternative for those unimpressed by ray-traced effects.
Nvidia did not sit idle either: it answered the AMD Navi adapters with the GeForce Super cards, which offer a better price-to-performance ratio than their predecessors. Quite unexpectedly, the "greens" decided to let their video cards work with VESA Adaptive-Sync, and in the spring GeForce GTX adapters gained DirectX Raytracing support. These and other topics are covered in this second part of our look back at the outgoing year in hardware.
Entry ticket to the world of ray tracing
Nvidia's plans to release the GeForce RTX 2060 graphics adapter, along with a line of mobile GeForce RTX accelerators, became known last year. Both announcements took place at CES 2019 in January, where the irreplaceable captain of the "green" team, Jensen Huang, took the stage. With an MSRP of $350, the GeForce RTX 2060 is the most affordable graphics card to support the Nvidia RTX technology suite.
Rather than develop a separate GPU for the new adapter, the company reused the 12nm TU106 die already familiar from the GeForce RTX 2070. Of the 2304 CUDA cores physically present on the chip, only 1920 remain active in this configuration, and the numbers of tensor and RT cores were cut to 240 and 30, respectively.
The memory bus also went under the knife, cut from 256 to 192 bits by disabling a pair of 32-bit controllers. The video buffer consists of six gigabytes of GDDR6 at an effective 14 GHz, giving a memory bandwidth of 336 GB/s.
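For those who like to check such figures, the bandwidth follows directly from the bus width and the per-pin data rate. A minimal sketch of the arithmetic (the helper function below is our own illustration, not anything from Nvidia's tools):

```python
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width in bits times per-pin rate in Gbps, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# GeForce RTX 2060: 192-bit bus, 14 Gbps ("14 GHz effective") GDDR6
print(memory_bandwidth_gb_s(192, 14))  # 336.0 GB/s
```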
Independent tests showed the GeForce RTX 2060 performing on par with the GeForce GTX 1070 Ti. Its main drawbacks are the 6 GB video buffer and rather modest performance in games with DirectX Raytracing, which makes it hard to enjoy the "rays" at high fps beyond Full HD resolution.
As for the GeForce RTX line for laptops, it comprises three solutions: the GeForce RTX 2060, RTX 2070 and RTX 2080. They use the same GPU variants as their desktop counterparts; the differences boil down to more modest operating frequencies. As a result, the rated power consumption of the mobile GeForce RTX cards is significantly lower than that of their desktop relatives. For example, the GeForce RTX 2080 Max-Q (GPU clocks of 735 to 1095 MHz) consumes only 80 watts, while the desktop version's "appetite" (1515 to 1710 MHz) is 215 watts.
FreeSync to the masses!
Back in 2014, VESA made the open Adaptive-Sync technology (the basis of AMD FreeSync) part of the DisplayPort standard, which did not stop Nvidia from continuing to push its proprietary G-Sync. Both approaches pursue the same goal: to eliminate tearing and stutter by synchronizing the monitor's refresh rate with the frame rate. The main difference of Nvidia's technology is a dedicated hardware module, which drives up the cost of compatible monitors.
That is why the "greens'" decision to give their video cards VESA Adaptive-Sync support, albeit with some restrictions, came as a pleasant surprise. Starting with the GeForce 417.71 WHQL driver, the technology is available to owners of cards based on Pascal GPUs (GeForce 10 series) and newer.

In characteristic fashion, Nvidia avoids directly mentioning the names VESA Adaptive-Sync or AMD FreeSync, preferring the term G-Sync Compatible instead. It denotes both the corresponding mode of operation and displays that have passed testing in Nvidia's lab. According to the "greens", adaptive sync quality on monitors certified as G-Sync Compatible is practically on par with proprietary G-Sync.

Intel graphics cores also gained VESA Adaptive-Sync support this year. The "blue" giant announced plans to adopt the technology back in 2015, but only now brought them to life: Adaptive-Sync is supported by the Gen11 GPUs integrated into 10nm Ice Lake processors. Unfortunately, Intel Gen9.5 graphics, familiar from Coffee Lake-S desktop chips for example, cannot boast the same.
The first 7nm GPU “for games”
A genuinely unexpected event at CES 2019 was the unveiling of the Radeon VII graphics card, which AMD proudly calls "the world's first 7nm gaming GPU." The newcomer is based on the 7nm Vega 20 core with 3840 stream processors, and its video buffer comprises four HBM2 stacks with a combined bandwidth of 1 TB/s and a total capacity of 16 GB. At first glance the Radeon VII looked like a card with enormous headroom for the future, but in reality things turned out to be far less rosy.
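The 1 TB/s figure is easy to sanity-check: each HBM2 stack has a 1024-bit interface, so four stacks give a 4096-bit bus, and the quoted bandwidth implies a per-pin rate of roughly 2 Gbps. A rough check in the same spirit as the sketch above (treating "1 TB/s" as 1024 GB/s):

```python
# Radeon VII video buffer: 4 HBM2 stacks x 1024 bits each = 4096-bit bus
bus_width_bits = 4 * 1024
bandwidth_gb_s = 1024  # quoted as "1 TB/s"

# Implied per-pin data rate in Gbps
print(bandwidth_gb_s * 8 / bus_width_bits)  # 2.0
```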

The fate of the AMD Radeon VII was sealed even before it was announced. It was the last project of Mike Rayfield, the former head of the Radeon Technologies Group, who held the post for less than a year and was remembered by former colleagues for irrational business-development moves.
Roughly speaking, the Radeon VII is a Radeon Instinct MI50 professional accelerator adapted for gaming. AMD originally had no plans to use the 7nm Vega 20 GPU in consumer cards because of its high cost.
The HBM2 chips alone cost the "reds" about $320, almost half the Radeon VII's recommended price of $700. Tests showed it to be a rival to the Nvidia GeForce RTX 2080 and GeForce GTX 1080 Ti. The latter, we recall, came out two years earlier at the same recommended price.
The release of the RDNA-based Radeon RX 5700 adapters effectively put an end to the Radeon VII. The higher-end Radeon RX 5700 XT in particular delivers similar performance in many games, yet carries a recommended price of $399. As the saying goes, why pay more?

The 7nm AMD Vega 20 cores also found their way into the Radeon Pro Vega II accelerators of the 2019 Apple Mac Pro workstation. Of particular interest is the "two-headed" Radeon Pro Vega II Duo, which carries 2 × 4096 stream processors and 2 × 32 GB of HBM2 video memory, with the individual GPUs linked over an Infinity Fabric bus. Its raw single-precision throughput is 28.3 Tflops.
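The 28.3 Tflops figure follows the standard peak-FP32 formula: two floating-point operations (a fused multiply-add) per stream processor per clock. A back-of-the-envelope sketch, with the boost clock derived from the quoted numbers rather than taken from a spec sheet:

```python
stream_processors = 2 * 4096      # two Vega 20 GPUs on one Radeon Pro Vega II Duo card
peak_tflops = 28.3                # quoted single-precision throughput

# Peak FP32 = 2 ops (FMA) x stream processors x clock, so the implied boost clock is:
implied_clock_ghz = peak_tflops * 1e12 / (2 * stream_processors) / 1e9
print(round(implied_clock_ghz, 2))  # ~1.73 GHz
```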

The devices use Apple's proprietary MPX Module form factor, which is reflected not only in the unusual PCB layout but also in the cooler design: the cards carry a massive heatsink that is blown through by the fans at the front of the Mac Pro.
“Just Madness”
In the first half of the year, the Nvidia Turing architecture made its way into the GeForce GTX 16 series. For these cards the chipmaker prepared the 12nm TU116 and TU117 GPUs, stripped of the blocks responsible for DLSS anti-aliasing and hardware-accelerated ray tracing. This reduced the die area and made the final product more affordable for the average consumer.

Nvidia TU116 GPU Schematic
The firstborn of the GeForce GTX 16 series was the GeForce GTX 1660 Ti, based on a fully enabled TU116 chip with 1536 CUDA cores, a 192-bit memory interface and six gigabytes of GDDR6. It was followed by the GeForce GTX 1660, in which the number of stream processors was cut to 1408 and the GDDR6 was replaced with slower GDDR5 chips on the same 192-bit bus.
Finally, in April the GeForce GTX 1650, based on the Nvidia TU117 chip, saw the light of day. It makes do with 896 CUDA cores, a 128-bit bus and four gigabytes of GDDR5 memory. Its main selling point is low power consumption (about 68 W), which allows it to do without an auxiliary power connector.
Shortly after the release of the GeForce GTX 1660, the green team decided to give all more or less current GeForce GTX accelerators DirectX Raytracing (DXR) support. Thanks to this move, owners of Nvidia Pascal and Turing cards with six gigabytes of memory can experience ray-traced effects first-hand.

Of course, without dedicated hardware for ray-tracing acceleration, the performance of such adapters leaves much to be desired. With DXR enabled, for example, the GeForce GTX 1080 Ti cannot match the frame rate of the GeForce RTX 2060, the most affordable graphics card with RT cores.

Navi, where were you?
In mid-summer, Advanced Micro Devices launched the first graphics cards of the Navi family: the Radeon RX 5700 and Radeon RX 5700 XT. They employ the Radeon DNA (RDNA) graphics architecture which, although a successor to the good old Graphics Core Next, was designed with 3D rendering rather than general-purpose compute in mind. For the GPGPU segment the company relies on its Vega cores, which handle the "number crunching" role perfectly well. For reference, the Radeon Instinct MI60 accelerator built on the 7nm Vega 20 GPU delivers up to 14.7 Tflops in FP32 operations.
The Radeon DNA architecture is highly scalable, allowing it to be used not only in discrete GPUs but also in the video cores of future Sony and Microsoft game consoles, AMD Ryzen APUs and even Samsung mobile SoCs. The first RDNA chip was the 7nm Navi 10, which serves as the basis for the aforementioned Radeon RX 5700 series. Physically it contains 2560 stream processors, 160 texture units and 64 render back-ends, and talks to the video buffer over a 256-bit bus.

The Radeon RX 5700 release also ushered in the GDDR6 era for AMD. In addition, these are the first consumer video cards with a PCI Express 4.0 x16 interface, although the good old PCI-E 3.0 x16 is quite enough to unlock their potential.
The AMD Navi 10 accelerators that debuted in the summer proved strong rivals to the GeForce RTX 2070 and RTX 2060 and even pushed Nvidia to release Super versions of those cards. The Radeon RX 5700 and RX 5700 XT can be recommended to gamers who play at 1440p and are skeptical of Nvidia's RTX technology suite, while connoisseurs of silence and overclockers should steer clear of the reference models with their noisy blower-style "turbine".

Fun fact: the Radeon RX 5700 XT was originally meant to be called the Radeon RX 690. This came to light at the launch of the Radeon RX 5700 XT 50th Anniversary Edition: in the materials dedicated to it, the inscription "Radeon RX 690 Limited Edition" appeared on the fan, indicating a naming-scheme change at the very last moment. What exactly guided AMD in renaming the product remains unknown.

The Incredibles
The release of the AMD Navi accelerators did not go unnoticed in the "green" camp. Jensen Huang's team answered the Radeon RX 5700 (XT) with a trio of GeForce RTX Super adapters, while the GeForce GTX 1660 Super and GTX 1650 Super were created to compete with the Radeon RX 5500. The new cards differ from their predecessors in GPU configuration and memory, and offer a better performance-per-dollar ratio.

The GeForce RTX 2080 Super features a fully enabled Nvidia TU104 chip with 3072 CUDA cores, plus GDDR6 memory running at an effective 15.5 GHz. The GeForce RTX 2070 Super, in turn, uses a cut-down TU104 with 2560 Turing stream processors. The recommended prices remained at the level of their predecessors: $699 and $499, respectively.
Meanwhile, the GeForce RTX 2060 Super is more a stripped-down GeForce RTX 2070 than an updated GeForce RTX 2060: the number of CUDA cores grew to 2176, and it gained eight gigabytes of GDDR6 memory on a 256-bit bus. Thanks to this, in games the newcomer delivers almost the same frame rate as the retired GeForce RTX 2070. The GeForce RTX 2060 Super was priced at $399, $50 more than the earlier GeForce RTX 2060.
Aware of the upcoming release of the low-cost Radeon RX 5500 series, Nvidia decided on a preemptive strike, expanding the GeForce GTX lineup. In October the GeForce GTX 1660 Super saw the light of day, and a month later the GeForce GTX 1650 Super hit store shelves. Both devices received high-speed GDDR6 memory and are based on the 12nm Nvidia TU116 core.
In the case of the GeForce GTX 1660 Super, the GPU configuration is inherited entirely from the GeForce GTX 1660, and the performance gain comes from higher memory bandwidth. On that metric it even surpasses the GeForce GTX 1660 Ti: 336 GB/s versus 288 GB/s. As a result, the GeForce GTX 1660 Super trails the Ti by only about 5% while being $50 cheaper ($229 instead of $279).
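The gap is explained entirely by the memory chips: both cards use a 192-bit bus, and the per-pin rates used in the quick sketch below (14 Gbps for the GTX 1660 Super, 12 Gbps for the GTX 1660 Ti) are the commonly listed GDDR6 speeds for these models rather than figures from this article.

```python
def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(memory_bandwidth_gb_s(192, 14))  # GTX 1660 Super: 336.0 GB/s
print(memory_bandwidth_gb_s(192, 12))  # GTX 1660 Ti:    288.0 GB/s
```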
ASUS TUF Gaming GeForce GTX 1650 Super OC
Perhaps the most interesting of the "incredibles" is the GeForce GTX 1650 Super. With a recommended price only $10 higher, this card outperforms the GeForce GTX 1650 by roughly 40% and goes head-to-head with the Radeon RX 5500 XT. Here Nvidia used cut-down TU116 dies with 1280 CUDA cores and a 128-bit memory interface, compensating for the narrow bus with GDDR6 chips. In testing, the GeForce GTX 1650 Super proved to be the best graphics card for 1080p gaming and has a fair claim to being the truly "people's" Nvidia Turing.
Radeon RX 5500
Finally, in December AMD and its partners brought the Radeon RX 5500 XT accelerators, based on the 7nm Navi 14 video core, to market. The "reds" began preparing for the debut back in October, trying out the new GPU in the OEM segment first: builders of pre-assembled computers got access to the Radeon RX 5500 graphics card, while board partners gained extra time to prepare their own versions of the Radeon RX 5500 XT.
The Radeon RX 5500 XT cards use a cut-down version of the Navi 14 die with 1408 RDNA stream processors, and the video buffer consists of four or eight gigabytes of GDDR6 on a 128-bit bus. Incidentally, the fully enabled version of the new GPU is found only in the Radeon Pro 5500M accelerator for the 16-inch Apple MacBook Pro, but we will not dwell on it today.
In terms of gaming performance, the Radeon RX 5500 XT can be described as a power-efficient alternative to the Radeon RX 580 and RX 590. That is hardly surprising, since the latter are based on overclocked Polaris GPUs built on a coarser process node. Still, gamers who are not put off by those cards' "gluttony" can now pick up a Radeon RX 580/590 at a very attractive price.

In a head-to-head battle between the GeForce GTX 1650 Super and the Radeon RX 5500 XT there is no clear winner: the cards show similar performance and cost roughly the same. In other words, when choosing between them, pay attention to the features of specific models, such as the cooling system or the PCB design. And in the end, you can always support whichever GPU developer you find more appealing with your hryvnias.
In tomorrow’s article, we will try to recall the most interesting releases and events of the last 12 months from other areas of the computer hardware market.
Overview of the main events of 2019. Processors and platforms
Overview of the main events of 2019. Memory, overclocking and new trends