
In fact, ray tracing has only just begun

What is ray tracing and how is it different from gaming?


Nvidia’s Sol demo made extensive use of real-time ray tracing.

Nvidia has made an almost single-minded push for the adoption of ray tracing, and many game developers and engine creators are heeding the call. A driver update enabling ray tracing support on GTX cards added millions of consumers, including owners of some of the best graphics cards, to the theoretical pool of ray tracing adopters. At the same time, the number of games using ray tracing is starting to far exceed the handful available soon after the RTX series launched.

When trying to explain a relatively complex rendering technique to a layman, it's easy to oversimplify, especially if you're the one in charge of marketing and selling the technique. But a basic explanation of what ray tracing is and how it works often glosses over some of its most important applications, including some that position ray tracing as the next big revolution in graphics.

We wanted to dig into not just what ray tracing really is, but also specific ray tracing techniques: how they work and why they matter. The goal is not only a deeper understanding of why ray tracing is important and why Nvidia is working so hard to support it, but also to show you how to spot the difference between a scene with ray tracing and one without, and exactly why scenes with it enabled look better.

A brief introduction to computer graphics and rasterization

Creating a virtual simulation of the world around us that looks and behaves realistically is an extremely complex task. So complex, in fact, that we've never really attempted it in full. Forget things like gravity and physics for a moment and think about how we see the world. An effectively infinite number of photons zip around at the speed of light, reflecting off surfaces and passing through objects, all governed by the properties of each object's material.

Trying to simulate "infinity" with limited resources (like the computing power of a PC) is a recipe for disaster. We need clever approximations, and that's how modern graphics rendering in games currently works.

We call this process rasterization, and instead of dealing with infinite objects, surfaces, and photons, it starts with polygons, specifically triangles. Games have gone from hundreds of polygons to millions, and rasterization turns all this data into the 2D frames on our monitors.

It involves a lot of math, but the short version is that rasterization determines which part of the display each polygon covers. Up close, a single triangle may cover the whole screen, while at a distance and at an angle it may cover only a few pixels. Once the covered pixels are determined, things like textures and lighting are applied.
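To make that concrete, here is a minimal sketch of the coverage test at the heart of rasterization, assuming a triangle already projected to 2D screen coordinates. The names here (Vec2, edge, rasterizeTriangle) are illustrative, not from any real API.

```cpp
#include <algorithm>
#include <cstdio>

struct Vec2 { float x, y; };

// Signed area test: positive when p lies to the left of the edge a->b.
// A pixel center is inside the triangle when it is on the same side of
// all three edges (assuming counter-clockwise vertex order).
float edge(const Vec2& a, const Vec2& b, const Vec2& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

void rasterizeTriangle(const Vec2& v0, const Vec2& v1, const Vec2& v2,
                       int width, int height) {
    // Only scan the triangle's screen-space bounding box, not the whole frame.
    int minX = std::max(0, (int)std::min({v0.x, v1.x, v2.x}));
    int maxX = std::min(width - 1, (int)std::max({v0.x, v1.x, v2.x}));
    int minY = std::max(0, (int)std::min({v0.y, v1.y, v2.y}));
    int maxY = std::min(height - 1, (int)std::max({v0.y, v1.y, v2.y}));

    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            Vec2 p{x + 0.5f, y + 0.5f};  // sample at the pixel center
            if (edge(v0, v1, p) >= 0 && edge(v1, v2, p) >= 0 && edge(v2, v0, p) >= 0)
                std::printf("pixel (%d, %d) is covered\n", x, y);
        }
    }
}

int main() {
    // A small counter-clockwise triangle on an 8x8 "screen".
    rasterizeTriangle({1, 1}, {6, 2}, {2, 6}, 8, 8);
}
```

Real GPUs do this massively in parallel and with far more sophistication (sub-pixel precision, hierarchical tiles, early depth rejection), but the core question per pixel is the same: does this polygon cover this pixel?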

Doing this for every polygon in every frame would be wasteful, as many polygons end up hidden (i.e., behind other polygons). Over the years, algorithms and hardware have improved, rasterization has become faster, and modern games can take millions of potentially visible polygons and process them at breakneck speed.

We've moved from primitive polygons with "faked" lighting (e.g., the original Quake) to complex environments with shadow maps, soft shadows, ambient occlusion, tessellation, screen space reflections, and other graphics techniques, all attempting to make scenes look better and more realistic. This can require billions of calculations per frame, but with modern GPUs capable of teraflops of compute (trillions of floating-point operations per second), it's a tractable problem.

What is ray tracing?

Ray tracing is a different approach, first proposed by Turner Whitted in 1979 in "An Improved Illumination Model for Shaded Display" (the PDF is online). Whitted outlined how to recursively compute ray traced images with impressive shadows, reflections, and more. (Not coincidentally, Turner Whitted now works for Nvidia's research division.) The problem is that doing so requires far more complex calculations than rasterization.

Ray tracing involves tracing the path of rays (beams of light) through the 3D world. Cast a ray from each pixel into the world, find the first polygon the ray hits, and shade it appropriately. In practice, you need more rays per pixel to get good results, because once a ray intersects an object you have to work out which light sources can reach that point on the polygon (more rays), and factor in the polygon's properties: is it highly reflective or only partially reflective, what color is the material, is it flat or curved, and so on.
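Here is a minimal sketch of that per-pixel ray cast, assuming a pinhole camera at the origin and a flat list of triangles in world space. All names are illustrative, and a real renderer would never brute-force every triangle per ray (more on that below).

```cpp
#include <cfloat>
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Triangle { Vec3 v0, v1, v2; };

// Moller-Trumbore ray/triangle test: returns true (and the distance t) if
// the ray with origin o and direction d hits the triangle in front of o.
bool intersect(const Vec3& o, const Vec3& d, const Triangle& tri, float& t) {
    Vec3 e1 = tri.v1 - tri.v0, e2 = tri.v2 - tri.v0;
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;  // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = o - tri.v0;
    float u = dot(s, p) * inv;
    if (u < 0 || u > 1) return false;
    Vec3 q = cross(s, e1);
    float v = dot(d, q) * inv;
    if (v < 0 || u + v > 1) return false;
    t = dot(e2, q) * inv;
    return t > 1e-4f;  // hit must be in front of the ray origin
}

// For every pixel: build a ray through that pixel, then find the closest hit.
void castPrimaryRays(const std::vector<Triangle>& scene, int width, int height) {
    Vec3 origin{0, 0, 0};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Map the pixel to a point on an image plane one unit in front.
            float u = (x + 0.5f) / width * 2 - 1;
            float v = 1 - (y + 0.5f) / height * 2;
            Vec3 dir = normalize({u, v, -1});
            float nearest = FLT_MAX, t;
            const Triangle* hit = nullptr;
            for (const Triangle& tri : scene)  // brute force: test every triangle
                if (intersect(origin, dir, tri, t) && t < nearest) {
                    nearest = t;
                    hit = &tri;
                }
            // 'hit' (if any) is the polygon this pixel sees; shading comes next.
        }
    }
}
```

Even this simplest version makes the cost obvious: every ray potentially touches every triangle, which is exactly the search problem BVH traversal (covered below) exists to solve.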

To determine the amount of light falling on a single pixel from a light source, the ray tracing formula needs to know how far away the light is, how bright it is, and the angle of the reflecting surface relative to the light, before calculating how bright the reflected ray should be. The process is then repeated for every other light source, including indirect illumination from light bouncing off other objects in the scene. Material calculations apply as well, determined by a surface's diffuse or specular reflectance (or both). Transparent or translucent surfaces, such as glass or water, refract light, and to add to the headache, everything needs an artificial limit on the number of bounces, because without one rays could be traced indefinitely.
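As a rough illustration of that first step, here is a sketch of single-light diffuse shading, reusing the Vec3 helpers from the sketch above and assuming a point light with inverse-square falloff and a purely Lambertian surface; the Light struct and function name are hypothetical.

```cpp
#include <algorithm>  // std::max (Vec3, dot, normalize come from the sketch above)

struct Light { Vec3 position, color; float intensity; };

// Direct diffuse lighting for one surface point and one light: brightness
// depends on the distance to the light and the angle between the surface
// normal and the direction toward the light.
Vec3 shadeDiffuse(const Vec3& point, const Vec3& normal,
                  const Vec3& albedo, const Light& light) {
    Vec3 toLight = light.position - point;
    float dist2 = dot(toLight, toLight);        // squared distance to the light
    Vec3 l = normalize(toLight);
    // Lambert's cosine law: surfaces angled away receive less light, and
    // surfaces facing away receive none.
    float cosTheta = std::max(0.0f, dot(normal, l));
    float falloff = light.intensity / dist2;    // inverse-square attenuation
    // Per channel: surface color * light color * geometry terms.
    return {albedo.x * light.color.x * cosTheta * falloff,
            albedo.y * light.color.y * cosTheta * falloff,
            albedo.z * light.color.z * cosTheta * falloff};
}
```

Specular terms, refraction, and the bounce limit all layer on top of this, each adding more rays and more math per pixel.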


According to Nvidia, the most commonly used ray tracing algorithm is BVH traversal: bounding volume hierarchy traversal. That's what the DXR API uses, and what Nvidia's RT cores accelerate. The main idea is to optimize the ray/triangle intersection calculations. Take a scene with thousands of objects, each of which may contain thousands of polygons, and try to figure out which polygons a ray intersects. It's a search problem, and brute force takes far too long. BVH speeds up the process by building a tree of objects, where each object is enclosed in a bounding box.

Nvidia gives the example of a ray intersecting a rabbit model. At the top level, a BVH (box) contains the entire rabbit, and a calculation determines whether the ray intersects that box. If it doesn't, no more work is needed for that box/object/BVH. Because there is an intersection, the algorithm moves on to a smaller set of boxes within the intersected one; in this case it determines that the ray has hit the rabbit's head. Additional BVH traversal happens until eventually the algorithm has a short list of actual polygons, which it can then check to determine which rabbit polygon the ray hits.
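Here is a minimal sketch of that traversal, reusing the Vec3, Triangle, and intersect pieces from the earlier sketches. The node layout is deliberately simple and illustrative; it is not the actual structure DXR or the RT cores use.

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

struct AABB { Vec3 lo, hi; };  // axis-aligned bounding box

// Standard "slab" test: the ray hits the box only if the intervals where it
// is between the two planes of each axis all overlap.
bool hitBox(const Vec3& o, const Vec3& invDir, const AABB& box) {
    float tmin = 0.0f, tmax = FLT_MAX, t0, t1;
    t0 = (box.lo.x - o.x) * invDir.x; t1 = (box.hi.x - o.x) * invDir.x;
    tmin = std::max(tmin, std::min(t0, t1)); tmax = std::min(tmax, std::max(t0, t1));
    t0 = (box.lo.y - o.y) * invDir.y; t1 = (box.hi.y - o.y) * invDir.y;
    tmin = std::max(tmin, std::min(t0, t1)); tmax = std::min(tmax, std::max(t0, t1));
    t0 = (box.lo.z - o.z) * invDir.z; t1 = (box.hi.z - o.z) * invDir.z;
    tmin = std::max(tmin, std::min(t0, t1)); tmax = std::min(tmax, std::max(t0, t1));
    return tmin <= tmax;
}

struct BVHNode {
    AABB box;
    int left = -1, right = -1;        // child node indices; -1 marks a leaf
    std::vector<Triangle> triangles;  // only filled in at leaf nodes
};

// Walk the tree, skipping any subtree whose box the ray misses. That pruning
// is what turns a brute-force search into a roughly logarithmic one.
void traverse(const std::vector<BVHNode>& nodes, int idx,
              const Vec3& o, const Vec3& dir, const Vec3& invDir,
              float& nearest, const Triangle*& hit) {
    const BVHNode& node = nodes[idx];
    if (!hitBox(o, invDir, node.box)) return;  // prune this whole subtree
    if (node.left < 0) {                       // leaf: test the actual polygons
        float t;
        for (const Triangle& tri : node.triangles)
            if (intersect(o, dir, tri, t) && t < nearest) { nearest = t; hit = &tri; }
        return;
    }
    traverse(nodes, node.left, o, dir, invDir, nearest, hit);
    traverse(nodes, node.right, o, dir, invDir, nearest, hit);
}
```

A production traverser would visit the nearer child first and stop early once a closer hit is known, but the pruning idea is the same.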

These BVH calculations can be done in software running on a CPU or GPU, but dedicated hardware can speed them up by an order of magnitude. The RT core on Nvidia's RTX cards acts as a black box: it takes a BVH structure and a ray, loops through all the dirty work, and spits out the intersected polygon (if any).

This is a non-deterministic operation, meaning there is no way of saying exactly how many rays per second can be computed; it depends on the complexity of the scene and the BVH structure. What matters is that Nvidia's RT cores run the BVH algorithm roughly ten times faster than its CUDA cores, which in turn can be ten times (or more) faster than running on a CPU, mainly thanks to the sheer number of GPU cores.

How many rays per pixel are "enough"? It varies from scene to scene: flat, non-reflective surfaces are far easier to handle than curved, shiny ones. If light bounces between highly reflective surfaces (a hall-of-mirrors effect, for example), hundreds of rays may be required. A fully ray traced scene can demand dozens or more ray calculations per pixel, and using more rays gives better results.

Despite its complexity, nearly every major film today uses ray tracing (or path tracing) to produce highly detailed computer imagery. A full 90-minute film at 60fps requires 324,000 frames, and each frame can take hours of computing time. How could a game possibly do all this in real time on a single GPU? The answer is that it doesn't, at least not at the resolution and quality you see in Hollywood movies.

Enter hybrid rendering

For over 20 years, computer graphics hardware has focused on rasterizing faster, and game designers and artists have become very good at producing impressive results. But certain things remain problematic, such as accurate lighting, shadows, and reflections.

Hybrid rendering uses traditional rasterization techniques to render all the polygons in a frame, then combines the result with ray tracing where it makes sense. This keeps the ray tracing workload smaller, allowing for higher frame rates, although there is still a trade-off between quality and performance. Casting more rays into the scene improves the overall effect at the expense of frame rate, and vice versa.
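In outline, a hybrid frame looks something like the sketch below. The pass names are hypothetical (real engines drive this through D3D12/DXR or Vulkan passes), but the ordering captures the idea: rasterize everything first, then spend rays only where rasterization falls short.

```cpp
// Stand-in types and stub passes so the sketch is self-contained; every
// name below is illustrative, not a real engine API.
struct Scene {};
struct Framebuffer {};

void rasterizeGBuffer(const Scene&, Framebuffer&) { /* depth, normals, materials */ }
void traceReflections(const Scene&, Framebuffer&) { /* rays on reflective pixels */ }
void traceShadows(const Scene&, Framebuffer&)     { /* shadow rays toward lights */ }
void denoiseAndComposite(Framebuffer&)            { /* merge ray traced results */ }

void renderHybridFrame(const Scene& scene, Framebuffer& fb) {
    rasterizeGBuffer(scene, fb);  // fast: the bulk of the frame, as always
    traceReflections(scene, fb);  // ray traced only where it visibly pays off
    traceShadows(scene, fb);
    denoiseAndComposite(fb);      // fewer rays per pixel makes denoising essential
}
```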

To get a more complete picture of how ray tracing enhances a game's visuals, and the computational cost involved, let's dig into some specific RT techniques and explore their relative complexity.

Reflections

Previous reflection techniques, such as screen space reflections (SSR), have a number of drawbacks, such as being unable to show objects that are not currently in the frame or that are partially or fully occluded. Ray traced reflections apply to the full 3D world, not just what is visible on screen, allowing for proper reflections.

A simple version of ray traced reflections debuted in Battlefield 5. After a frame is rendered, one ray per pixel is used to determine which reflections (if any) should be visible. However, many surfaces are not reflective at all, so this results in a lot of wasted work. The optimized BF5 algorithm only computes rays for pixels on reflective surfaces, using one ray for every two pixels on partially reflective surfaces.
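That kind of ray budgeting might look like the sketch below. This illustrates the approach as described, not DICE's actual code; the enum and function are hypothetical.

```cpp
// Decide per pixel whether a reflection ray is worth casting: none for matte
// surfaces, every other pixel (checkerboard) for partially reflective ones,
// and one ray per pixel for mirror-like surfaces.
enum class Reflectivity { None, Partial, Mirror };

bool shouldTraceReflection(Reflectivity r, int x, int y) {
    switch (r) {
        case Reflectivity::None:    return false;               // skip entirely
        case Reflectivity::Partial: return ((x + y) & 1) == 0;  // half the pixels
        case Reflectivity::Mirror:  return true;                // full rate
    }
    return false;
}
```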

Because many surfaces don't reflect at all, the effect is often less dramatic, but it's also less expensive in terms of processing power. Computationally, ray traced reflections require at least one ray per reflective pixel, possibly more if simulating multiple bounces (which BF5 doesn't do). The Reflections demo, built on Unreal Engine, shows stormtroopers in polished white and mirrored silver armor under complex lighting in hallways and elevators. That requires far more rays and computing power, which explains why a single RTX 2080 Ti manages only 54fps at 1080p in the demo, compared to 85fps at 1080p in BF5.

Shadows

As with reflections, there are various ways to simulate shadows using ray tracing. An easily understood implementation of ray traced shadows casts a ray from each pixel's surface toward each light source. If the ray intersects an object before reaching the light source, the pixel is darker. The distance to the light source, the brightness of the light, and the color of the light are all taken into account.
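Here is a minimal sketch of a single hard shadow ray, reusing the Light, BVHNode, and traverse pieces from the earlier sketches: cast a ray from the shaded point toward the light, and if any geometry blocks it before the light, the point is in shadow.

```cpp
#include <cmath>

bool inShadow(const Vec3& point, const Vec3& normal, const Light& light,
              const std::vector<BVHNode>& bvh) {
    Vec3 toLight = light.position - point;
    float lightDist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = toLight * (1.0f / lightDist);
    // Nudge the origin along the normal so the surface doesn't shadow itself
    // due to floating-point error ("shadow acne").
    Vec3 origin = point + normal * 1e-3f;
    // (A robust version would guard against zero direction components here.)
    Vec3 invDir{1.0f / dir.x, 1.0f / dir.y, 1.0f / dir.z};
    float nearest = lightDist;  // only occluders closer than the light count
    const Triangle* hit = nullptr;
    traverse(bvh, 0, origin, dir, invDir, nearest, hit);
    return hit != nullptr;
}
```

Soft shadows from area lights extend this by casting several rays toward different points on the light and averaging how many are blocked.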

However, this is just the tip of the iceberg. Ray tracing provides developers with a wide range of shadow rendering tools, from the simple technique described above to very sophisticated methods that come remarkably close to our real-world perception. The ray tracing implementation in Shadow of the Tomb Raider (which arrived in a patch after release) is a great example of the breadth and depth of the options.

Bird shadows rendered using shadow maps versus ray traced shadows

There are many potential types of light source. Point lights are small omnidirectional sources, such as candles or a single light bulb. Shadows from rectangular area lights, like neon tubes or windows, behave differently. They come from a larger…


Wilbert Wood
Games, music, TV shows, movies and everything else.