Phong Illumination Model

Course: CS300

Instructor: Dr. Pushpak Karnick

Type: Solo project

Language: C++


This Phong illumination project covers algorithms that are essential to creating photorealistic images in interactive simulations. Topics include an overview of modern GPU (graphics processing unit) architecture and the common graphics APIs, including OpenGL. Rendering techniques covered include texturing, illumination models, transparency, shading algorithms, mapping techniques (bump mapping, environment/reflection mapping, etc.), and shadows. All of these algorithms are implemented using vertex and pixel shaders.

Phong Illumination Model

Vertex and fragment shaders implement the Phong illumination model with support for point, spot, and directional light source types. The model is implemented using three different shader programs:


Phong Lighting - Lighting computations are done in the vertex shader, and only the final color is interpolated across the fragments. This mirrors the OpenGL fixed-function pipeline.


Phong Shading - The variant of the model where the lighting computations are performed in the fragment shader.


Blinn Shading - A variation of Phong Shading (shader 2 above) that avoids calculating the reflection vector (the expensive part of the computation). Instead, we use the half-vector between the light vector (L) and view vector (V) and combine it with the surface normal for the specular computation.
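The difference between the two specular terms above can be sketched in host-side C++ (the actual project implements them in shaders; the vector helpers here are illustrative, not the project's own):

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector helpers (hypothetical, for illustration only).
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) { float l = std::sqrt(dot(v, v)); return {v.x / l, v.y / l, v.z / l}; }

// Phong specular: reflect L about N, then compare with the view vector V.
float phongSpecular(Vec3 N, Vec3 L, Vec3 V, float shininess) {
    Vec3 R = 2.0f * dot(N, L) * N - L;               // reflection vector (the expensive step)
    return std::pow(std::fmax(dot(R, V), 0.0f), shininess);
}

// Blinn specular: half-vector H between L and V, compared with the normal N.
float blinnSpecular(Vec3 N, Vec3 L, Vec3 V, float shininess) {
    Vec3 H = normalize(L + V);                        // no reflection vector needed
    return std::pow(std::fmax(dot(N, H), 0.0f), shininess);
}
```

Both terms peak when the viewer lines up with the mirror direction; Blinn's half-vector form simply replaces the per-fragment reflection with one normalize and one add.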

Scene & Light Setup

- Color the light sphere with the diffuse color of the light source

- Point Lights

- Directional Lights

- Spotlights

Deferred + Forward Shading

The G-buffer is a storage area for the scene's geometric attributes. Due to the amount of data to be stored, it is composed of multiple render targets. Position, normal, and material parameters (ambient, diffuse, specular, emissive, shininess) are stored.
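A quick channel count shows why multiple render targets are needed (the exact packing is a design choice; the attribute names follow the text above):

```cpp
#include <cassert>

// Channel counts for each attribute the G-buffer must hold.
constexpr int kPosition  = 3;   // world-space position
constexpr int kNormal    = 3;   // world-space normal
constexpr int kAmbient   = 3;   // material ambient color
constexpr int kDiffuse   = 3;   // material diffuse color
constexpr int kSpecular  = 3;   // material specular color
constexpr int kEmissive  = 3;   // material emissive color
constexpr int kShininess = 1;   // specular exponent

constexpr int kTotalChannels = kPosition + kNormal + kAmbient + kDiffuse +
                               kSpecular + kEmissive + kShininess;

// Each RGBA render target holds at most 4 channels, so the minimum
// number of attachments is ceil(total / 4).
constexpr int kMinRenderTargets = (kTotalChannels + 3) / 4;
```

Nineteen channels cannot fit in a single RGBA texture, so the G-buffer must span at least five attachments (in practice, scalar values like shininess are often packed into the alpha channel of another target).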

Deferred Shading

- Using the G-buffer

- Lighting pass setup for the FSQ (full-screen quad)


Forward Shading

- Rendering light spheres using forward rendering


Toggle depth copying

Dynamic Reflection and Refraction with Environment Mapping

The goal is to render an object as if it were perfectly reflective, so that the colors on the object's surface are those reflected to the eye from its surroundings.

I constructed the texture maps for the cube mapping algorithm using an FBO, implemented my own reflect/refract functions and cube mapping function, and created the skybox. Reflection and refraction were combined using the Fresnel approximation.

Environment Map

General process in environment mapping:

  ▫ Load (or generate) the image(s) that represent the environment.

  ▫ For each fragment on the reflective object:

        Calculate the viewing vector, V

        Calculate the normal vector, N

        Use V and N to calculate the reflection vector

        Transform the reflection vector to a texture coordinate (e.g., cube mapping)

        Use the texture coordinate to get the texel in the environment map 
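The per-fragment steps above can be sketched as follows. The face selection and (s, t) formulas follow the OpenGL cube map convention (largest-magnitude component picks the face); the function and struct names are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Result of the projector function: which of the six faces
// (0..5 = +X, -X, +Y, -Y, +Z, -Z) and the (u, v) coordinate on it.
struct FaceUV { int face; float u, v; };

FaceUV cubeMapLookup(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    float ma, sc, tc;
    int face;
    if (ax >= ay && ax >= az) {              // X is the major axis
        ma = ax; face = (x > 0) ? 0 : 1;
        sc = (x > 0) ? -z : z;  tc = -y;
    } else if (ay >= az) {                   // Y is the major axis
        ma = ay; face = (y > 0) ? 2 : 3;
        sc = x;  tc = (y > 0) ? z : -z;
    } else {                                 // Z is the major axis
        ma = az; face = (z > 0) ? 4 : 5;
        sc = (z > 0) ? x : -x;  tc = -y;
    }
    // Map sc/ma and tc/ma from [-1, 1] to the [0, 1] texture range.
    return { face, 0.5f * (sc / ma + 1.0f), 0.5f * (tc / ma + 1.0f) };
}
```

In the shader this lookup is done implicitly by sampling a samplerCube with the reflection vector; writing it out by hand shows what the hardware does for you.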

From Cube-mapping to Environment Mapping

    Use a cube map texture (6 x 2D Textures) to represent the environment

           Projector Function: Cube Mapping

    Calculate the reflection vector from the eye

          Texture Entity: Reflection Vector from eye

    Use cube mapping lookup on the reflection vector to get the texel value

    Process to generate the maps:

          Position the camera at the reflective object position.

          Set the FOV to 90 degrees and the aspect ratio to 1.

          For each map:

                  Orient the camera to point along the associated axis.

                  Render the scene
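The camera setup for the six passes can be sketched like this; the look/up pairs follow the usual OpenGL cube map convention, and the constant names are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct Dir { float x, y, z; };

// View direction and up vector for each render pass,
// ordered +X, -X, +Y, -Y, +Z, -Z.
const Dir kLook[6] = { {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1} };
const Dir kUp[6]   = { {0,-1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}, {0,-1,0}, {0,-1,0} };

// For a perspective projection, the focal scale is cot(fov/2).
// With fov = 90 degrees and aspect ratio 1 it equals exactly 1, so the
// six frusta tile all of space with no gaps or overlap.
float focalScale(float fovDegrees) {
    float half = fovDegrees * 0.5f * 3.14159265358979f / 180.0f;
    return 1.0f / std::tan(half);
}
```

Each pass positions the camera at the reflective object's center, orients it along kLook[i] with kUp[i], and renders the scene into face i of the cube map attached to the FBO.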

Learning Outcomes

After completing the course, I was able to implement Phong lighting, normal mapping, environment mapping (for reflection and refraction), and similar techniques using shaders, and I gained a good understanding of the computer graphics pipeline as implemented in modern graphics hardware.

© 2020 By Wonjae Jung