diff --git a/subjects/rt/README.md b/subjects/rt/README.md
index 027abf8b..5701b7de 100644
--- a/subjects/rt/README.md
+++ b/subjects/rt/README.md
@@ -2,6 +2,12 @@
 There are two ways to render a 3D scene into a 2D image. The first is `rasterization`, which basically converts the shapes and geometry into pixels and then applies calculations to obtain the color, shadows, refraction, etc. of those pixels. The other method is called `ray tracing` and consists of drawing each pixel already with its color, shadow, refraction, reflection, and so on.
 
+Here we can see a visual difference between the two methods:
+
+![image.png](raytrace.png)
+
+![image.png](raytrace2.png)
+
 Imagine a camera pointing at a scene, and from that camera a bunch of rays shooting out, bouncing from object to object until they reach a light source (lamp, sun, etc.). That is basically how a ray tracer works.
 
 In `ray tracing`, each of these rays can be seen as a pixel in the image captured by the camera, and the ray tracer will recursively calculate where the light in that pixel comes from, allowing it to give that pixel a color with some shadow, some refraction, and so on.
 
diff --git a/subjects/rt/raytrace.png b/subjects/rt/raytrace.png
new file mode 100644
index 00000000..60d97113
Binary files /dev/null and b/subjects/rt/raytrace.png differ
diff --git a/subjects/rt/raytrace2.png b/subjects/rt/raytrace2.png
new file mode 100644
index 00000000..dd2b4f52
Binary files /dev/null and b/subjects/rt/raytrace2.png differ
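To make the per-pixel idea in the README text above concrete, below is a minimal sketch of a ray-tracing loop, not the subject's required implementation: the scene (one hard-coded sphere, one point light), the helper names such as `hit_sphere`, and the PPM output are all illustrative assumptions, and the shading stops at direct lighting rather than the recursive bounces the README describes.

```rust
// Minimal sketch: one ray per pixel, one sphere, one point light, PPM output.
use std::fs::File;
use std::io::{BufWriter, Write};

#[derive(Clone, Copy)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn new(x: f64, y: f64, z: f64) -> Self { Vec3 { x, y, z } }
    fn add(self, o: Vec3) -> Vec3 { Vec3::new(self.x + o.x, self.y + o.y, self.z + o.z) }
    fn sub(self, o: Vec3) -> Vec3 { Vec3::new(self.x - o.x, self.y - o.y, self.z - o.z) }
    fn scale(self, k: f64) -> Vec3 { Vec3::new(self.x * k, self.y * k, self.z * k) }
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    fn normalize(self) -> Vec3 { self.scale(1.0 / self.dot(self).sqrt()) }
}

// Distance along the ray to the closest intersection with the sphere, if any.
fn hit_sphere(center: Vec3, radius: f64, origin: Vec3, dir: Vec3) -> Option<f64> {
    let oc = origin.sub(center);
    let a = dir.dot(dir);
    let b = 2.0 * oc.dot(dir);
    let c = oc.dot(oc) - radius * radius;
    let disc = b * b - 4.0 * a * c;
    if disc < 0.0 {
        return None;
    }
    let t = (-b - disc.sqrt()) / (2.0 * a);
    if t > 0.001 { Some(t) } else { None }
}

fn main() -> std::io::Result<()> {
    let (width, height) = (400usize, 300usize);
    let camera = Vec3::new(0.0, 0.0, 0.0);            // rays start at the camera
    let sphere_center = Vec3::new(0.0, 0.0, -3.0);    // illustrative scene object
    let sphere_radius = 1.0;
    let light = Vec3::new(2.0, 2.0, 0.0);             // point light the rays "look for"

    let mut out = BufWriter::new(File::create("out.ppm")?);
    writeln!(out, "P3\n{} {}\n255", width, height)?;

    for y in 0..height {
        for x in 0..width {
            // One ray per pixel, shot through an image plane at z = -1.
            let u = (x as f64 + 0.5) / width as f64 * 2.0 - 1.0;
            let v = 1.0 - (y as f64 + 0.5) / height as f64 * 2.0;
            let aspect = width as f64 / height as f64;
            let dir = Vec3::new(u * aspect, v, -1.0).normalize();

            let shade = match hit_sphere(sphere_center, sphere_radius, camera, dir) {
                Some(t) => {
                    // Shade the hit point by how directly it faces the light (Lambert).
                    let hit = camera.add(dir.scale(t));
                    let normal = hit.sub(sphere_center).normalize();
                    let to_light = light.sub(hit).normalize();
                    (255.0 * normal.dot(to_light).max(0.0)) as u8
                }
                None => 30, // background
            };
            writeln!(out, "{} {} {}", shade, shade, shade)?;
        }
    }
    Ok(())
}
```

Running this writes `out.ppm`, a grayscale image of a single lit sphere; a full solution would extend the per-pixel step with recursive bounces for shadows, reflection, and refraction as the README describes.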