ASU Learning Sparks

Converting 3D to 2D: Graphic Rendering Explained

How do you turn a 3D virtual scene into a 2D image? This happens through a process called "rendering". Graphic rendering is incredibly complex. It involves disaggregating and retaining all of the scene's components, from depth to texture to light. Retaining these qualities is essential to mimicking the look of the real world.

Turning a 3D virtual scene into a 2D image is a process called “rendering”. 

A virtual camera's frustum tells the game engine which parts of the scene it needs to render, in particular which objects are in the camera's view.
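As an illustration, here is a minimal sketch of that visibility test, assuming the frustum is represented as six inward-facing planes and each object has a bounding sphere. The names and the plane convention are assumptions made for this example, not a particular engine's API.

```python
from dataclasses import dataclass

@dataclass
class Plane:
    normal: tuple   # unit-length (nx, ny, nz) pointing toward the inside of the frustum
    offset: float   # d in the plane equation n.p + d = 0

def sphere_in_frustum(center, radius, planes):
    """Keep an object if its bounding sphere touches or sits inside every frustum plane."""
    cx, cy, cz = center
    for plane in planes:
        nx, ny, nz = plane.normal
        signed_distance = nx * cx + ny * cy + nz * cz + plane.offset
        if signed_distance < -radius:
            return False    # entirely behind one plane: out of view, skip rendering
    return True             # in view (or partially in view): send it to the renderer
```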

Rendering is really about understanding how virtual light bounces off of virtual objects and their materials before reaching the virtual camera.

Light sources can be placed as broad directional lights that uniformly cast light at the same angle across the scene. Or they can be placed as objects inside the scene, emitting as point lights (radiating in all directions), spot lights (shining as a cone), or other geometries of light. Lights can be colored, and their intensity is programmable. For point lights and spot lights, the game engine assumes the light gets dimmer over longer distances.
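A rough sketch of those three light types is below. The dictionary fields and the inverse-square falloff are assumptions chosen for illustration, not any specific engine's lighting model.

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def light_at_point(light, point):
    """Return (direction_to_light, effective_intensity) at a surface point."""
    if light["type"] == "directional":
        # Uniform light: the same angle everywhere in the scene, no falloff.
        return normalize(tuple(-c for c in light["direction"])), light["intensity"]

    to_light = tuple(l - p for l, p in zip(light["position"], point))
    distance = math.sqrt(sum(c * c for c in to_light)) or 1e-6
    direction = tuple(c / distance for c in to_light)
    # Point and spot lights get dimmer over distance (inverse-square falloff here).
    intensity = light["intensity"] / (distance * distance)

    if light["type"] == "spot":
        axis = normalize(light["direction"])             # cone axis, from light into the scene
        cos_to_point = -sum(d * a for d, a in zip(direction, axis))
        if cos_to_point < math.cos(light["cone_angle"]):
            intensity = 0.0                              # surface point is outside the cone
    return direction, intensity

lamp = {"type": "point", "position": (0.0, 3.0, 0.0), "intensity": 100.0}
print(light_at_point(lamp, (0.0, 0.0, 0.0)))   # light arrives from straight above, dimmed by distance
```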

These lights shine down on the 3D objects in the scene, bouncing off of the materials and casting shadows, based on the relative position and direction of the lighting. 

This requires the game engine to have a good understanding of the geometry of the different virtual objects, especially their shape and the direction each surface is pointing. The geometry for the surface of an object is called a mesh. A mesh includes the positions of all of an object's vertices, all of its corners. From those corners, the mesh defines surface triangles, each a combination of three vertices. These triangles form the boundary of the object. Each vertex also declares which way the surface is facing; this is called a surface normal, a vector pointing straight out of the surface at a perpendicular angle.
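To make the mesh idea concrete, here is a small sketch with vertex positions, triangles as triples of vertex indices, and a normal computed with a cross product. The class and field names are illustrative, not any engine's actual mesh format.

```python
import math
from dataclasses import dataclass

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

@dataclass
class Mesh:
    vertices: list    # (x, y, z) positions of the object's corners
    triangles: list   # (i, j, k) indices into vertices, one triple per surface triangle

    def face_normal(self, tri):
        """Unit vector pointing straight out of one triangle, perpendicular to its surface."""
        i, j, k = self.triangles[tri]
        a, b, c = self.vertices[i], self.vertices[j], self.vertices[k]
        edge1 = tuple(p - q for p, q in zip(b, a))
        edge2 = tuple(p - q for p, q in zip(c, a))
        return normalize(cross(edge1, edge2))

# One triangle lying flat in the x-y plane; its normal points straight up (+z).
tri = Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], triangles=[(0, 1, 2)])
print(tri.face_normal(0))   # (0.0, 0.0, 1.0)
```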

Draped on top of the 3D shape of a mesh is a “material”, which you can think of as a fabric that stretches over the surface of the mesh. The material is defined by a shader: a GPU program that tells the renderer how to visually represent the scene at each pixel on the user’s screen. The shader usually employs a reflection model, which assumes that the color and intensity of the light reaching the camera are some function of the light that hits a surface, modified by the color of the surface (the “albedo”), the angle at which the light hits the surface, and the angle at which the camera views the surface. Based on the shininess of the material, light may reflect off of the object like a mirror, be diffused like paper, or fall somewhere in between, like a polished car.
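As a rough illustration of such a reflection model, the sketch below combines a diffuse term (scaled by the angle between the light and the surface normal) with a shininess-controlled specular highlight, in the spirit of the classic Phong model. Real shaders run on the GPU; the function and parameter names here are only for illustration.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(light_dir, normal):
    # Mirror the direction to the light about the surface normal.
    d = dot(light_dir, normal)
    return tuple(2.0 * d * n - l for l, n in zip(light_dir, normal))

def shade(albedo, light_color, light_dir, view_dir, normal, shininess):
    """Approximate one pixel's color from the surface color, light angle, and view angle."""
    # Diffuse: matte surfaces scatter light evenly, scaled by the angle of incidence.
    diffuse = max(0.0, dot(normal, light_dir))
    # Specular: a bright highlight where the mirrored light lines up with the camera.
    specular = max(0.0, dot(reflect(light_dir, normal), view_dir)) ** shininess
    return tuple(a * c * diffuse + c * specular for a, c in zip(albedo, light_color))

# A red surface lit and viewed from straight above: full diffuse plus a strong highlight.
print(shade((1.0, 0.0, 0.0), (1.0, 1.0, 1.0),
            (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0), 32.0))
```

A low shininess exponent spreads the highlight wide (closer to paper), while a high exponent tightens it toward a mirror-like glint, matching the spectrum described above.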

The material supplies all of the relevant parameters to the shader, including configurations for reflectivity and albedo, as well as other properties like emissive behavior. These parameters can also be textures, allowing the material to use images as patterns for rich textured surfaces.
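Here is a tiny sketch of that idea, with a hypothetical material whose albedo comes from an image-like texture sampled at (u, v) coordinates while its other parameters are plain numbers. Real engines sample actual image files on the GPU; this is only a stand-in.

```python
def sample(texture, u, v):
    """Nearest-neighbour lookup into a row-major grid of colors, with u and v in [0, 1]."""
    rows, cols = len(texture), len(texture[0])
    x = min(int(u * cols), cols - 1)
    y = min(int(v * rows), rows - 1)
    return texture[y][x]

# A 2x2 checkerboard pattern standing in for an image file.
checker = [[(1, 1, 1), (0, 0, 0)],
           [(0, 0, 0), (1, 1, 1)]]

material = {
    "albedo": lambda u, v: sample(checker, u, v),   # surface color pulled from a texture
    "shininess": 16.0,                              # plain numeric parameter
    "emissive": (0.0, 0.0, 0.0),                    # this material does not glow on its own
}

print(material["albedo"](0.25, 0.25))   # (1, 1, 1): the white square of the checker pattern
```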

Altogether, the virtual lights bounce off of the virtual materials on the virtual objects and into the virtual camera, forming a rich rendering pipeline.