Gamescom has concluded with a flood of games, hardware announcements, and reveals from almost all the big companies. While there is plenty of software (games) to discuss, most of our interest falls on the hardware side of Gamescom. A day before the show, Nvidia announced its next-gen graphics hardware, promising not only a generational leap over the Pascal cards but also the added benefits of a new architecture and some new “features.”
Nvidia’s CEO Jensen Huang called this the most ambitious reveal in the graphics card market since the arrival of CUDA over a decade ago. The presentation focused mainly on the new features arriving with the RTX series, features that will be exclusive to these cards due to their architectural requirements. We’ll dig into the details of the new hardware and talk about the prominent features Nvidia boasted about throughout the presentation. Keep in mind these are only initial impressions: we don’t have the cards in our hands yet, so the assertions that follow are based on Nvidia’s own stats and numbers.
The RTX 2000 Series
Let’s start with the specifications of the new graphics cards. The Nvidia GeForce RTX 2000 series is based on the Turing architecture that Nvidia revealed at the SIGGRAPH graphics conference.
Nvidia GeForce RTX 2080 Ti
Unconventionally, Nvidia revealed the series flagship, the RTX 2080 Ti, at the series reveal itself, which gives us a total of three graphics cards to talk about. Starting with the RTX 2080 Ti, the incremental gains include 4,352 CUDA cores at a 1350MHz base clock and a 1545MHz boost clock, while the Founders Edition card comes factory overclocked to 1635MHz. The frame buffer now uses more efficient GDDR6 modules made by Samsung or Micron, with 11GB of capacity on a 352-bit bus, which puts the overall memory bandwidth at 616 GB/s.
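As a quick sanity check on that figure, peak memory bandwidth is simply the bus width multiplied by the per-pin data rate. Here is a minimal sketch in Python, assuming the 14 Gbps per-pin GDDR6 rate implied by the official 616 GB/s number:

```python
# Rough derivation of the quoted memory bandwidth. The 14 Gbps per-pin
# GDDR6 data rate is an assumption implied by the official figures.

def memory_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(memory_bandwidth_gb_s(352, 14.0))  # RTX 2080 Ti -> 616.0 GB/s
```

Plugging the 256-bit bus of the two cards below into the same formula yields their 448 GB/s figure.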
Additionally, it comes with a VirtualLink USB-C connector, DisplayPort 1.4, and HDMI 2.0b for I/O purposes. Lastly, the RT cores on the chip provide ray tracing performance of 10 Gigarays/sec. The card draws power through dual eight-pin connectors, with a 250 Watt TDP for the stock card and 265 Watts for the factory-overclocked FE card.
Nvidia GeForce RTX 2080
Then we have the cut-down version of the RTX 2080 Ti, the Nvidia RTX 2080, the conventional first reveal of a new series. Like the RTX 2080 Ti, it is based on the Turing architecture and built on TSMC’s 12nm FinFET process. The improved fabrication process let Nvidia pack in 2,944 CUDA cores, along with Tensor cores for AI workloads and RT cores for ray tracing. The base clock speed is 1515MHz and the boost clock is 1710MHz, while the factory-overclocked FE card runs at 1800MHz.
The frame buffer capacity remains the same at 8GB, but the module is now GDDR6 from either Samsung or Micron on a 256-bit memory bus, which puts the memory bandwidth at 448 GB/s. The RT cores on the chip provide ray tracing performance of 8 Gigarays/sec. The card requires 215 Watts from the socket, delivered through the power connectors on the right side (when the card’s fans are facing upwards).
Nvidia GeForce RTX 2070
Lastly, we have the price-to-performance underdog, the Nvidia RTX 2070. The xx70 card is aimed at upper-midrange consumers who do not want to spend a fortune to get the graphics quality they need. The RTX 2070 comes with 2,304 CUDA cores at a 1410MHz base clock, a 1620MHz boost clock, and an FE overclock of 1710MHz. Additionally, there are 64 render output units (ROPs) and 144 texture mapping units (TMUs) backing up the card’s rendering capability.
The VRAM is again GDDR6 from Samsung or Micron on a 256-bit bus, making the bandwidth the same 448 GB/s as the RTX 2080. There are fewer RT cores than on the RTX 2080 Ti, putting the card’s ray tracing throughput at 6 Gigarays/sec. Powering the card is a single eight-pin connector that delivers the required 185 Watts from the socket.
What is Ray Tracing?
If you watched the presentation delivered by Nvidia’s CEO Jensen Huang, you may have noticed that most of it revolved around the ray tracing ability of the new Turing architecture. So what is ray tracing, and why is Nvidia betting so heavily on it for its future graphics cards? In its crudest form, ray tracing simulates individual rays of light and traces the ones that make it back to the screen or your eyes. Since its earliest conception, ray tracing has been called the “holy grail” of graphics, one that had never been achieved in real time until now.
The original ray tracing that earned the “holy grail” label traced every ray in a scene, which demands processing power that cannot be supplied even now. In fact, the first picture created with a ray tracing algorithm was only a 4×4 image (4 pixels wide and 4 pixels tall), and even that required hours of processing.
Real-Time Ray Tracing
During the presentation, Nvidia said that the demo video showcased at the DXR reveal was rendered on a supercomputer with four Nvidia Tesla GV100 graphics cards ($10,000 apiece), that the whole machine cost $64,000, and that the maximum framerate it could reach was 24 fps. It was an achievement, as the system was handling almost all the light rays in a scene full of area lights (an area light effectively emits an infinite number of rays), but the framerate was far too low for real-time usage such as games. To tackle this, Nvidia altered the ray tracing algorithm so that only the rays that actually reach the eye or the screen are traced, which turns an infinite number of rays into a finite one that the RT cores can trace back to reflective, refractive, or absorptive surfaces.
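To make the eye-first idea concrete, here is a minimal sketch in Python of backward ray tracing: one ray is cast per pixel from the camera into the scene, so the work scales with the resolution rather than with every light path. The scene (a single sphere rendered as ASCII art) and all constants are our own illustration, not Nvidia’s algorithm:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray/sphere intersection distance, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

width, height = 16, 8
sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
for y in range(height):
    row = ""
    for x in range(width):
        # Map the pixel to a viewing direction through a simple pinhole camera.
        u = (x + 0.5) / width * 2 - 1
        v = 1 - (y + 0.5) / height * 2
        d = (u, v, -1.0)
        norm = math.sqrt(sum(k * k for k in d))
        d = tuple(k / norm for k in d)
        row += "#" if hit_sphere((0.0, 0.0, 0.0), d, sphere_center, sphere_radius) else "."
    print(row)
```

A real renderer would then bounce each hit ray toward the lights to shade it, but the key point stands: the number of rays traced is capped by the pixel count, not by the scene’s light sources.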
Visuals
So how does ray tracing look in games? Nvidia ran a whole demo showing how it improves image quality across different titles. The most prominent scene, the one that awed the community, was from EA’s upcoming first-person shooter Battlefield V. As you can see from the attached image, with the card’s ray tracing abilities turned off you only get dynamic lighting with no reflections at all, and even though the image is rendered at 4K it looks rather flat.
In comparison, with RTX turned on you can see the flames of the blast happening in the background reflected on the glossy surface of the car. To some it may not seem like that big of a difference, but the image you get is much closer to what you would see with your own eyes; without RTX the same scene looks comparatively dead and unreal. The trade-off is that performance is slightly compromised with RTX enabled.
Why AI?
The gaming industry has barely touched artificial intelligence for rendering so far; with its new Turing architecture, Nvidia is introducing AI into games. The headline feature is a new type of anti-aliasing that relies heavily on deep learning: DLSS (Deep Learning Super Sampling). It is a close relative of the widely used supersampling technique, which renders an image at a higher resolution and then converts it down to a slightly lower resolution with almost the same sharpness. DLSS does exactly the opposite: it takes a lower-resolution image and outputs a higher-resolution one.
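Here is a small numpy sketch of those two directions; a trivial pixel repeat stands in for the learned DLSS model, which Nvidia has not published:

```python
import numpy as np

def supersample(high_res: np.ndarray, factor: int) -> np.ndarray:
    """SSAA direction: average each factor x factor block down to one pixel."""
    h, w = high_res.shape
    return high_res.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def naive_upscale(low_res: np.ndarray, factor: int) -> np.ndarray:
    """DLSS direction: a real implementation replaces this repeat with a
    neural network trained to reconstruct plausible detail."""
    return low_res.repeat(factor, axis=0).repeat(factor, axis=1)

frame = np.random.rand(8, 8)          # stand-in for a rendered frame
print(supersample(frame, 2).shape)    # (4, 4): render high, output lower
print(naive_upscale(frame, 2).shape)  # (16, 16): render low, output higher
```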
DLSS
During the presentation, Mr. Huang explained how it works. Nvidia has supercomputers that process these images hundreds of thousands of times until the output is correct, and that is how the neural network is trained. A home computer cannot handle such a complicated task, so the trained model will be updated constantly throughout the lifecycle of the RTX graphics cards. Nvidia feeds all types of images to the network and lets it decide where to place a pixel of the right size, shape, and color; it learns gradually, taking millions of tries to guess the right pixel for the right place. Although the feature will not tax the CUDA cores as much as other anti-aliasing algorithms do, since it runs on the Tensor cores, it will still affect performance to some degree, as is the case with every anti-aliasing technique.
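For illustration, here is a toy training loop in PyTorch along the lines Mr. Huang described, where a small network learns to map low-resolution frames to their high-resolution ground truth. The architecture, data, and scale factor are placeholders; Nvidia’s actual DLSS model and training pipeline are not public:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Placeholder 2x upscaler: upsample first, then let conv layers sharpen."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, low_res):
        x = F.interpolate(low_res, scale_factor=self.scale,
                          mode="bilinear", align_corners=False)
        return self.body(x)

model = ToyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    high_res = torch.rand(4, 3, 64, 64)              # stand-in ground truth
    low_res = F.avg_pool2d(high_res, kernel_size=2)  # simulated low-res frame
    loss = F.mse_loss(model(low_res), high_res)      # penalize wrong pixels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The heavy lifting happens offline on Nvidia’s supercomputers; the shipped card only runs the trained model’s forward pass on its Tensor cores.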
Verdict
The new graphics cards offer not only an incremental boost over the last generation but also new features based on the Turing architecture. Nvidia is light on numbers this time; it is mostly relying on the new features and only sparingly talked about the general performance uplift these cards deliver. The chart shown during the presentation put the RTX 2070 well ahead of the GTX 1080 Ti, yet on paper the GTX 1080 Ti looks stronger than the RTX 2070.
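The on-paper comparison comes down to simple arithmetic: theoretical FP32 throughput is two floating-point operations (one fused multiply-add) per CUDA core per clock. A quick check using the boost clocks quoted above and the GTX 1080 Ti’s public 1582MHz reference boost:

```python
def fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    # 2 FLOPs (one fused multiply-add) per CUDA core per clock cycle.
    return cuda_cores * boost_clock_mhz * 1e6 * 2 / 1e12

print(f"RTX 2070:    {fp32_tflops(2304, 1620):.1f} TFLOPS")  # ~7.5
print(f"GTX 1080 Ti: {fp32_tflops(3584, 1582):.1f} TFLOPS")  # ~11.3
```

By this traditional yardstick the RTX 2070 trails the GTX 1080 Ti by a wide margin.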
We have the new performance metric “RTX-OPS,” but what about single- and double-precision TFLOPS? These are faster graphics cards, but they do not look two times faster than their respective Pascal counterparts, and we cannot be sure until we test the cards ourselves. So the question remains: should you wait for the RTX series or buy a card that is on the market now? The picture will become clear soon enough; for now, check out our list of the best graphics cards of 2018.