diff --git a/subjects/rt/README.md b/subjects/rt/README.md
index c0c165f5..5dcc6159 100644
--- a/subjects/rt/README.md
+++ b/subjects/rt/README.md
@@ -2,12 +2,6 @@
 There are two ways to render a 3d scene into a 2d image: `rasterization` that basically converts the shapes and geometric to pixels and applies calculations to obtain the color, shadows, refraction, etc... of those pixels. The other method is called `ray tracing` and consists in drawing each pixel with already its color, shadow, refraction, reflection, etc....
 
-Here we can see a visual difference between each method:
-
-![image.png](raytrace.png)
-
-![image.png](raytrace2.png)
-
 Imagine a camera pointing at a scene, and from that camera it is coming a bunch of rays that bounce from object to object until it reaches the light source (lamp, sun, etc...). That is basically how a ray tracer works.
 
 In `ray tracing` each of these rays can be seen as a pixel in the image captured by the camera, and recursively the ray tracer will calculate from where the light comes from in that pixel, being able to give that pixel a color with some shadow aspect, some refraction aspect, and so on.
 
diff --git a/subjects/rt/raytrace.png b/subjects/rt/raytrace.png
deleted file mode 100644
index 60d97113..00000000
Binary files a/subjects/rt/raytrace.png and /dev/null differ
diff --git a/subjects/rt/raytrace2.png b/subjects/rt/raytrace2.png
deleted file mode 100644
index dd2b4f52..00000000
Binary files a/subjects/rt/raytrace2.png and /dev/null differ
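
The README context above describes the core ray-tracing loop: one ray per pixel, cast from the camera, shaded by where the light comes from at the hit point. As a companion note to this change, here is a minimal non-recursive sketch of that idea. The scene (a single sphere, a single directional light), the Lambertian shading, and all names are assumptions for illustration only; they are not part of the subject's required solution.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    # direction is assumed unit-length, so the quadratic's 'a' term is 1.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace(origin, direction, center, radius, light_dir):
    """One bounce: if the ray hits the sphere, shade it by the light angle."""
    t = ray_sphere_hit(origin, direction, center, radius)
    if t is None:
        return 0.0  # background: black
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((h - c) / radius for h, c in zip(hit, center))
    # Lambertian shading: brightness is the cosine between normal and light.
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

def render(width, height):
    """Cast one ray per pixel from a camera at the origin, looking down -z."""
    center, radius = (0.0, 0.0, -3.0), 1.0  # assumed scene: one sphere
    light_dir = (0.0, 0.0, 1.0)             # assumed light, behind the camera
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1,
            # then normalize to get the ray direction for that pixel.
            u = 2 * (x + 0.5) / width - 1
            v = 1 - 2 * (y + 0.5) / height
            norm = math.sqrt(u * u + v * v + 1)
            direction = (u / norm, v / norm, -1 / norm)
            row.append(trace((0.0, 0.0, 0.0), direction,
                             center, radius, light_dir))
        image.append(row)
    return image
```

A real tracer would recurse at the hit point (shadow rays, reflection, refraction, as the README says); this sketch stops after the first intersection, which is enough to show why each ray corresponds to exactly one pixel. For an 11x11 render, the center pixel looks straight at the sphere and comes out bright, while the corner pixels miss it and stay black.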