Talk:Path tracing

From Wikipedia, the free encyclopedia

Issues with the Description Section

The final part of the Description section discusses phenomena that simple Path Tracing can have a hard time reproducing. It claims that all three of these phenomena are hard because they are violations of the three principles described earlier, but only one of them (subsurface scattering) is.

Caustics are difficult to render with unidirectional path tracing because they involve sampling a spiky unknown pdf, but a unidirectional path tracer will converge on a scene with caustics eventually. This is fundamentally an issue with unidirectional path tracing as a method of evaluating the rendering equation, not with the equation itself or the principles it is derived from.

The last bullet contains several wavelength-dependent effects. All of these require a spectral path tracer to model accurately, but only one violates one of the principles. The first, chromatic aberration, is part of the camera model and doesn't really have much to do with the rendering equation or path tracing as a method of evaluating it. Chromatic aberration, as well as other camera effects, is absolutely a thing in path tracing renderers. Fluorescence is something that standard path tracers actually have a problem with, because it violates principle III. Specifically, the standard formulation of BRDFs assumes that light never enters with one wavelength and leaves with another. I believe it also violates assumptions for many rendering techniques because it is not reversible, but I'm not 100% certain on that one. Iridescence is caused by thin-film interference (or sometimes other structural coloration effects), and is captured by the rendering equation easily as long as you don't actually model the microscopic details with real scene geometry. I think the thing the article was trying to get at is that wavelength-dependent effects cannot be reproduced accurately by a path tracer representing colors with tristimulus values (usually RGB), but this has nothing to do with the rendering equation or the three principles described above, and is not a fundamental limitation. Many production path tracers represent color as spectral power distributions, usually by making wavelength a dimension of sample space. This approach is slower to converge (because of the added dimension), but there are well-known techniques to improve it, such as Hero Wavelength Sampling.
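To make the spectral-sampling point above concrete, here is a minimal sketch of treating wavelength as one more dimension of sample space. The spectral curves below are made up purely for illustration (not measured data): a sensor-channel response is estimated by uniformly sampling wavelengths and averaging, and it converges to the same value as a dense quadrature of the same integral.

```python
import random

# Illustrative smooth spectral curves over 400-700 nm (invented for this
# sketch): an illuminant, a wavelength-dependent reflectance, and one
# camera-response channel.
def illuminant(lam):
    return 1.0 + 0.5 * (lam - 550.0) / 150.0

def reflectance(lam):
    return 0.2 + 0.6 * ((lam - 400.0) / 300.0) ** 2

def sensor(lam):
    return max(0.0, 1.0 - abs(lam - 550.0) / 150.0)

LO, HI = 400.0, 700.0

def integrand(lam):
    return illuminant(lam) * reflectance(lam) * sensor(lam)

# Ground truth by dense midpoint quadrature.
M = 100000
truth = sum(integrand(LO + (HI - LO) * (i + 0.5) / M)
            for i in range(M)) * (HI - LO) / M

# Spectral Monte Carlo: wavelength is just another dimension of sample
# space, drawn uniformly with pdf = 1 / (HI - LO).
random.seed(1)
N = 200000
acc = 0.0
for _ in range(N):
    lam = random.uniform(LO, HI)
    acc += integrand(lam) * (HI - LO)   # divide the sample by its pdf
estimate = acc / N                      # the average of the samples
```

Production renderers refine this with smarter wavelength distributions (e.g. Hero Wavelength Sampling), but the structure — wavelength as a sampled dimension, estimates averaged — is the same.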

I'm new to Wikipedia editing, and I'm not sure of the best way to go about fixing this, but here is what I think should happen:

  • There needs to be a better distinction between path tracing as an algorithm to evaluate the rendering equation and the rendering equation itself
  • Discussion of limitations of the rendering equation should probably just point to Rendering equation#Limitations.
  • Mentioning radiosity as an alternative is good, but some other non-path-tracing approaches (photon mapping at the very least) should probably be mentioned as well.
  • If photon mapping is mentioned here, there should probably be a discussion about bias.
  • Maybe a discussion about the path integral formulation from Veach's thesis, since this is important to understanding BDPT. I'm not sure whether it belongs on this page.

Benjamin-f-lee (talk) — Preceding undated comment added 12:48, 9 February 2021 (UTC)

Added images

Added some images. Are there any other helpful effects to demonstrate that aren't shown presently? Qutorial (talk) 02:17, 8 July 2016 (UTC)

Pseudocode

Why is the cos(theta) term being multiplied with the reflectance twice in the pseudocode? It looks wrong to me. Could somebody look over the pseudocode and check if it's actually correct? 115.188.233.168 (talk) 08:46, 16 July 2012 (UTC)

Now being in a position to answer my own question, I confirm that it is incorrect. I fixed it and tried to make the pseudocode a little better. Please fix if I made any errors or said anything stupid. 115.188.233.168 (talk) 11:07, 24 September 2012 (UTC)

pseudocode?!

I gotta say that "pseudocode" is the most C++ish I've ever seen, complete with references, dereference operators and other C++ syntax quirks. Why call it pseudocode when it's perfectly legal C++? Just lacking a few method definitions... 201.47.188.2 (talk) 20:54, 20 November 2007 (UTC)

I noticed that, too. I changed it a bit, took out some of the pointless C++ syntax and left in some useful C++ syntax. —Preceding unsigned comment added by 69.140.74.18 (talk) 02:14, 6 January 2009 (UTC)

Incorrect formula?

I think there shouldn't be a multiplication by "cost". If we sample from a probability density measured with respect to projected solid angle, there should be no cos(theta) in the integrand; if we sample from a probability density measured with respect to solid angle, there should be a cos(theta) in "scale", which will cancel out with the cos(theta) in the integrand. And I guess it should be made clear which density we use when we calculate "RandomUnitVectorInHemisphere". —Preceding unsigned comment added by Iceglow (talk · contribs) 10:08, 12 May 2009 (UTC)

Three years later, the issue has been addressed. Although I may have gotten the cosines wrong - please recheck. I assumed RandomUnitVectorInHemisphere was a naive, not cosine-weighted, distribution (= uniform sampling, with a constant PDF of 1/(2π) over the hemisphere). This way the cosine term stays, and I also added a few notes about sampling schemes and energy conservation below the pseudocode. 115.188.233.168 (talk) 11:09, 24 September 2012 (UTC)
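A small numeric sketch of the two sampling conventions discussed in this thread, under the simplifying assumption of a uniform unit-radiance sky (so the true irradiance is exactly pi): with uniform solid-angle sampling the cosine term stays in the estimator, while with cosine-weighted sampling it cancels against the PDF.

```python
import math
import random

random.seed(0)
N = 200000

# Irradiance at a point under a uniform, unit-radiance sky:
#   E = integral of cos(theta) over the hemisphere = pi.

# (1) Uniform solid-angle sampling: pdf = 1/(2*pi). Under this pdf,
# cos(theta) is itself uniformly distributed on (0, 1], and the cosine
# stays in the estimator.
total = 0.0
for _ in range(N):
    cos_theta = 1.0 - random.random()
    total += cos_theta * 2.0 * math.pi   # cos(theta) / pdf
uniform_est = total / N

# (2) Cosine-weighted sampling: pdf = cos(theta)/pi. The cosines cancel,
# so every sample contributes exactly pi (zero variance in this toy case).
total = 0.0
for _ in range(N):
    cos_theta = math.sqrt(1.0 - random.random())
    pdf = cos_theta / math.pi
    total += cos_theta / pdf
cosine_est = total / N
```

Both estimators converge to pi; the cosine-weighted one just does it with far less variance, which is exactly why the choice of density needs to be stated alongside the pseudocode.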

Clarification

The way the algorithm is described, it seems like path tracing is just ray tracing but with infinite recursion depth. It might be useful to state this if it is the case, and if not, perhaps provide a description of how path tracing differs from ray tracing.

-- Well actually, I will contend that path tracing is an algorithm that is totally alien to ray tracing, and that it is not something simply added on top of a ray tracer. Miloserdia (talk) 10:19, 21 May 2012 (UTC)

Full rewrite?

I would like to take this article on as the main author and completely rewrite most of it. If no one on Wikipedia gives me a red light on this in a few days, I will go ahead and do it. Here are some highlights I would like to expand upon in the article.

  • Here is my definition: "Path Tracing is a computer graphics method for rendering three-dimensional images that are indistinguishable from photographs." Anyone have a problem with that?
  • When the word "unbiased" is used to describe a renderer, that invariably indicates that Path Tracing was used in that software.
  • Kajiya's rendering equation can be solved by two different algorithms. (1) Shooting rays from the light sources and creating paths in the scene. The path is cut off at a random number of bouncing steps and the resulting light is sent through the projected pixel on the output image. During rendering, billions of paths are created, and the output image is the average of every pixel that received some contribution. It is not the sum of these contributions, but their direct average. (2) Gathering rays from a point on a surface. A ray is projected from the surface to the scene in a bouncing path that terminates when a light source is intersected. The light is then sent backwards through the path and out the pixel on the output image. The creation of a single path is called a "sample". For a single point on a surface, approximately 800 samples (up to as many as 3 thousand samples) are taken. The final output of the pixel is the average of all these samples, not the sum!
  • Path Tracing is not a variation of Ray Tracing, nor is it a generalization of that earlier rendering method. In path tracing, at every bounce in the light path, the illuminance contribution is taken through a function called a BRDF -- a bidirectional reflectance distribution function. These functions relate the incoming angle of light to its outgoing angle. In orthodox Ray Tracing, the algorithm merely casts a shadow ray to the light source and then "shades" the pixel. Ray tracing does not sample an integral of incoming illuminance, nor does it use averages, nor does it contain a concept of convergence.
  • High Dynamic Range. Unlike Ray Tracing, or Scanline Graphics, Path Tracing must invariably be performed with colors in the High Dynamic Range. This means the resulting raw output must be taken through some manner of Tone Mapping before an actual pixel value is output to a monitor. Path Tracing in a naked form would output an image file in HDR format.
  • It was known that the two algorithms above, "Shooting" and "Gathering" respectively, were capable of producing a numerical solution to Kajiya's rendering equation. Later developments in Path Tracing algorithms were motivated by the fact that those algorithms are so slow as to be infeasible to calculate on a desktop computer.
  • Bidirectional Path Tracing. BPT was created explicitly to obtain faster convergence of the integral. A shooting path and a gathering path are traced independently, and then the head of the shooting path is connected to the tail of the gathering path. The light is then projected through every bounce and back out into the pixel. This technique at first seems paradoxically slower, since for every gathering sample we additionally trace a whole shooting path. In practice, however, the extra speed of convergence far outweighs any performance loss from the extra ray casts on the shooting side.
  • Importance Sampling. The central performance bottleneck in Path Tracing is the complex geometrical calculation of casting a ray. Importance Sampling is a technique that aims to cast fewer rays through the scene while still converging correctly to the integral of incoming illuminance on the surface point. This is done by casting more rays in directions in which the illuminance would have been greater anyway. If the density of rays cast in certain directions matches the strength of contributions in those directions, the result is identical, but far fewer rays were actually cast. Importance Sampling is used to match Lambert's cosine law, and can also be used to match BRDFs.
  • Metropolis Light Transport. This algorithm was created in order to get faster convergence in scenes in which the light must pass through oddly-shaped or small holes in order to reach the part of the scene that the camera is viewing. It has also shown promise in correctly rendering odd/complicated caustics. Instead of generating random paths, new sampling paths are created as slight mutations of existing paths. In this sense, the algorithm "remembers" the rare paths that connect light sources to the camera.
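As a toy illustration of the "Gathering" estimator described above (samples are averaged, never summed) and of unbiased random path termination, here is a minimal sketch in a made-up, geometry-free "furnace" setting where every bounce sees the same emission and albedo. All constants are hypothetical; the point is only the estimator's structure.

```python
import random

random.seed(2)

ALBEDO = 0.5       # hypothetical uniform surface albedo
EMIT = 1.0         # every surface emits unit radiance (a "furnace" setup)
P_CONTINUE = 0.7   # Russian-roulette survival probability per bounce

def gather_sample():
    # One gathering path: accumulate emitted light along the bounces,
    # terminating the path by Russian roulette so the estimator stays
    # unbiased despite the random cutoff.
    radiance, throughput = 0.0, 1.0
    while True:
        radiance += throughput * EMIT
        if random.random() >= P_CONTINUE:
            return radiance
        throughput *= ALBEDO / P_CONTINUE   # compensate for survival prob.

N = 200000
estimate = sum(gather_sample() for _ in range(N)) / N   # average, not sum
# Analytically: sum over bounces of ALBEDO**k = 1 / (1 - ALBEDO) = 2
```

The division by the survival probability at each bounce is what keeps the random cutoff from biasing the average, which is why each path can be terminated after a random number of steps and the pixel still converges to the right value.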

Miloserdia (talk) 11:33, 21 May 2012 (UTC)

Path tracing will be ported to DX12 and Vulkan...

"In 2015, path tracing will be ported to DirectX 12 and Vulkan API." ...by whom? In what context? Citation needed! SteveBaker (talk) 15:31, 11 September 2015 (UTC)

Wrong BRDF calculation in pseudocode?

Why is there a multiplication by 2 when calculating the BRDF in the pseudocode? According to the [[Lambertian reflectance]] article, shouldn't it be a simple reflectance * cos_theta?
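One common explanation, assuming the pseudocode samples the hemisphere uniformly (which has PDF 1/(2π) with respect to solid angle): the Lambertian BRDF is reflectance/π, and dividing BRDF × cos(theta) by that PDF gives 2 × reflectance × cos(theta), which is where a legitimate factor of 2 can come from. A minimal numeric check under those assumptions:

```python
import math
import random

random.seed(3)

ALBEDO = 0.8   # hypothetical diffuse reflectance
N = 200000

# Uniform hemisphere sampling: pdf = 1/(2*pi) w.r.t. solid angle, and
# under that pdf cos(theta) is uniformly distributed on (0, 1].
acc = 0.0
for _ in range(N):
    cos_theta = 1.0 - random.random()
    brdf = ALBEDO / math.pi              # Lambertian BRDF
    pdf = 1.0 / (2.0 * math.pi)
    acc += brdf * cos_theta / pdf        # simplifies to 2*ALBEDO*cos_theta
estimate = acc / N
# The hemispherical reflectance of a Lambertian surface is exactly ALBEDO,
# so the estimate should converge to it.
```

So "reflectance * cos_theta" alone would be correct only under a different sampling density (cosine-weighted, or with the 1/π and 2π factors folded in elsewhere); whether the pseudocode's 2 is right depends on which density its hemisphere sampler actually uses.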