For some reason you are here, eager to reinvent the wheel.
Hungry for the knowledge to bring change.
Hi, I see you. I am you.
The engine we will be building today is written in C# and so is compiled to an executable.
We could just as well write this in JavaScript and render to a canvas element, or render in a Blazor component.
Design Architecture is a blur.
Where to begin? I wonder what level of expertise you have as you read this.
Do I blast through the steps, or do I walk through them?
Can I run through verbose interpretations, or is the hand-holding approach the way to go?
My goal, and the essence of this article, is to share the information necessary to enable any sufficiently motivated individual to write, test and deploy their own implementation of a v1 software engine.
Walk with me, I will make you fishers of mesh.
Table of Contents
What is a 3D software engine
Why create a 3D software engine
README
Writing the core logic
Wireframing 101.1
How to load a mesh
Rasterization
Performance
Light
Textures
What is a 3D software engine
Real talk? A 3D software engine is a game engine. A system used for simulating computer graphics.
Basically, we will be building the foundational aspects of much of the rendering software you all know and love.
For a more traditional response:
A 3D software engine, often referred to as a 3D engine or a game engine, is a software framework or toolset designed to facilitate the creation and rendering of three-dimensional graphics in real-time applications, such as video games, simulations, virtual reality experiences, and more. These engines are crucial for developers who want to create interactive 3D environments without having to build everything from scratch.
A full-fledged and robust engine has several key components: graphics rendering, physics simulation, audio support, scene management, scripting support and more.
We all know and love and hate our respective engines like Godot, Unreal and Unity.
We will be building out the graphics rendering component: a CPU-based software renderer.
Why create a 3D software engine
Now that we know the scope of this project, you might be tempted to ask: what the fuck? What the heck? We could use our GPU to handle the computation through DirectX or WebGL/OpenGL. What gives?
What gives is that this is for educational purposes only, and digesting the information shared, coupled with the knowledge you will gain while building this project, will help you in the long run, whatever your reason for being interested in 3D computer graphics (at least that's what I tell myself at night).
README
Lasciate ogne speranza, voi ch’intrate. - Dante Alighieri
If you feel your grasp of linear algebra is not at least average, now would be the time to close this webpage.
I will not presume to discourage you.
Computer graphics has a lot to do with simulating points in space and performing matrix operations on vectors.
Vectors have both magnitude and direction.
Most of what you will be doing is matrix transformations.
Study the image below closely.
Let's start from
Object Coordinates:
This is the local coordinate system of an object: its initial position and orientation before any transform is applied.
MODELVIEW matrix:
This is a combination of Model and View matrices (Mview * Mmodel). A model matrix is a transformation matrix used to represent the position, orientation, and scale of an object in a 3D scene. It is one of the fundamental matrices used in the transformation pipeline to move objects from their local coordinate space (i.e., their own object space) into the world coordinate space, which is the common coordinate system shared by all objects in the scene.
The model matrix performs three main transformations on an object:
Translation, Rotation and Scaling.
A view matrix, sometimes called a camera matrix, is a transformation matrix that defines the position and orientation of a virtual camera within a 3D scene. The primary purpose of the view matrix is to transform objects and points in world space into the camera's local coordinate system, often referred to as "view space" or "camera space." This is how the computer simulates a camera or viewport looking at any object in space.
Basically, the Model transform converts from object space to world space, and the View transform converts from world space to eye space.
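To make this concrete, here is a minimal, illustrative sketch using SharpDX (the library we will use later); the position, rotation and camera values are hypothetical, and note that SharpDX multiplies row vectors on the left, so the combined matrix is written model * view rather than Mview * Mmodel:
// Illustrative sketch only (assumes: using SharpDX;)
var objectPosition = new Vector3(0, 0, 0);      // hypothetical object position
var objectRotation = new Vector3(0, 0.5f, 0);   // hypothetical pitch (X), yaw (Y), roll (Z)

// Model matrix: scale, then rotate, then translate (object space -> world space)
var modelMatrix = Matrix.Scaling(1.0f)
                * Matrix.RotationYawPitchRoll(objectRotation.Y, objectRotation.X, objectRotation.Z)
                * Matrix.Translation(objectPosition);

// View matrix: a camera at (0, 0, 10) looking at the origin (world space -> eye space)
var viewMatrix = Matrix.LookAtLH(new Vector3(0, 0, 10), Vector3.Zero, Vector3.UnitY);

// MODELVIEW: takes a point from object space straight to eye space
var modelView = modelMatrix * viewMatrix;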
Eye Coordinates:
This is yielded by multiplying the MODELVIEW matrix by the object coordinates.
PROJECTION matrix:
This defines how wide or narrow the viewport is, and how the vertex data are projected onto the screen (perspective or orthographic).
This puts the object into perspective: is what I'm viewing close by or in the distance?
Clip Coordinates:
We get the clip coordinate by multiplying the PROJECTION matrix with the eye coordinates.
The reason it is called clip coordinates is that the transformed vertex (x, y, z) is clipped by comparing with ±w.
Normalized Device Coordinates (NDC):
It is yielded by dividing the clip coordinates by w; this is called perspective division. It is close to window (screen) coordinates, but has not yet been translated and scaled to screen pixels. The range of values is now normalized from -1 to 1 on all 3 axes.
Finally,
Window Coordinates (Screen Coordinates):
we apply the VIEWPORT transform to the normalized device coordinates.
The viewport transform formula follows directly from the linear relationship between NDC and window coordinates:
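As an illustration (the vertex values and the 640x480 viewport below are hypothetical), the last two steps of the chain look like this:
// Illustrative sketch only: from clip coordinates to window coordinates (assumes: using SharpDX;)
var clip = new Vector4(2.0f, -1.0f, 5.0f, 10.0f);   // hypothetical result of PROJECTION * eye coordinates

// Perspective division -> normalized device coordinates in [-1, 1]
var ndc = new Vector3(clip.X / clip.W, clip.Y / clip.W, clip.Z / clip.W);

// Viewport transform -> window (screen) coordinates
float screenWidth = 640, screenHeight = 480;
var windowX = (ndc.X + 1) * 0.5f * screenWidth;     // -1 maps to 0, +1 maps to screenWidth
var windowY = (1 - ndc.Y) * 0.5f * screenHeight;    // flipped, because screen Y grows downward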
This is a high-level overview into the mathematics of building a scene.
You really need to understand that there is a series of transformations applied, in this order:
– we start by a 3D object centered on itself
– the same object is then moved into the virtual 3D world by translation, scaling or rotation operations via matrices
– a camera will look at this 3D object positioned in the 3D world
– the final projection of all that will be done into a 2D space which is your screen
All this magic is done by accumulating transformations through matrix operations.
My hope is that, with your prior knowledge and this brief run-through, you will be better equipped to understand and follow along.
Writing the core logic
Let's get to the fun part. For the software requirements we will need Visual Studio and the .NET SDK.
We will also need two libraries: SharpDX for vector and matrix calculations, and Newtonsoft.Json for parsing JSON data in our app.
Note: we could also use the native System.Numerics, but I want this to be as unbiased and framework-independent as possible (whatever that means).
Open Visual Studio, create a new Universal Windows project and name it cengine.
Then go to your NuGet package manager and install the SharpDX core assembly, SharpDX.Mathematics and Newtonsoft.Json.
Create a class for your camera; this will house your camera logic. Name it: Camera.cs
Our camera will have 2 properties: its position in the 3D world and the target it's looking at. Both are 3D coordinates stored as a Vector3.
using SharpDX;
namespace cengine
{
public class Camera
{
public Vector3 Position { get; set; }
public Vector3 Target { get; set; }
}
}
Create a class for your object. Our Mesh will have a collection of vertices (several vertices, i.e. 3D points) that will be used to build our 3D object, its position in the 3D world and its rotation state. Name it: Mesh.cs
using SharpDX;
namespace cengine
{
public class Mesh
{
public string Name { get; set; }
public Vector3[] Vertices { get; private set; }
public Vector3 Position { get; set; }
public Vector3 Rotation { get; set; }
public Mesh(string name, int verticesCount)
{
Vertices = new Vector3[verticesCount];
Name = name;
}
}
}
There is one more class we need, a class to hold the core engine logic/calculations.
In its rendering function, we will build the view matrix and the projection matrix based on the camera we defined before.
Then, we will iterate through each available mesh to build their associated world matrix based on their current rotation and translation values. Finally, once done, the final transformation matrix to apply is:
var transformMatrix = worldMatrix * viewMatrix * projectionMatrix;
Create a class for your device object. Using this transformation matrix, we're going to project each vertex of each mesh into 2D to obtain X,Y coordinates from their X,Y,Z coordinates. To finally draw on screen, we add a small clipping check so we only display visible pixels, via a PutPixel method. Name it: Device.cs
using System;
using Windows.UI.Xaml.Media.Imaging;
using System.Runtime.InteropServices.WindowsRuntime;
using SharpDX;
namespace cengine
{
public class Device
{
private byte[] backBuffer;
private WriteableBitmap bmp;
public Device(WriteableBitmap bmp)
{
this.bmp = bmp;
// the back buffer size is equal to the number of pixels to draw
// on screen (width*height) * 4 (R,G,B & Alpha values).
backBuffer = new byte[bmp.PixelWidth * bmp.PixelHeight * 4];
}
// This method is called to clear the back buffer with a specific color
public void Clear(byte r, byte g, byte b, byte a) {
for (var index = 0; index < backBuffer.Length; index += 4)
{
// Windows uses BGRA instead of the RGBA used in HTML5
backBuffer[index] = b;
backBuffer[index + 1] = g;
backBuffer[index + 2] = r;
backBuffer[index + 3] = a;
}
}
// Once everything is ready, we can flush the back buffer
// into the front buffer.
public void Present()
{
using (var stream = bmp.PixelBuffer.AsStream())
{
// writing our byte[] back buffer into our WriteableBitmap stream
stream.Write(backBuffer, 0, backBuffer.Length);
}
// request a redraw of the entire bitmap
bmp.Invalidate();
}
// Called to put a pixel on screen at specific X,Y coordinates
public void PutPixel(int x, int y, Color4 color)
{
// As we have a 1-D Array for our back buffer
// we need to know the equivalent cell in 1-D based
// on the 2D coordinates on screen
var index = (x + y * bmp.PixelWidth) * 4;
backBuffer[index] = (byte)(color.Blue * 255);
backBuffer[index + 1] = (byte)(color.Green * 255);
backBuffer[index + 2] = (byte)(color.Red * 255);
backBuffer[index + 3] = (byte)(color.Alpha * 255);
}
// Project takes some 3D coordinates and transforms them
// into 2D coordinates using the transformation matrix
public Vector2 Project(Vector3 coord, Matrix transMat)
{
// transforming the coordinates
var point = Vector3.TransformCoordinate(coord, transMat);
// The transformed coordinates are based on a coordinate system
// centered on the screen. But drawing on screen normally starts
// from the top left, so we need to transform them again to have x:0, y:0 at the top left.
var x = point.X * bmp.PixelWidth + bmp.PixelWidth / 2.0f;
var y = -point.Y * bmp.PixelHeight + bmp.PixelHeight / 2.0f;
return (new Vector2(x, y));
}
// DrawPoint calls PutPixel but does the clipping operation before
public void DrawPoint(Vector2 point)
{
// Clipping what's visible on screen
if (point.X >= 0 && point.Y >= 0 && point.X < bmp.PixelWidth && point.Y < bmp.PixelHeight)
{
// Drawing a yellow point
PutPixel((int)point.X, (int)point.Y, new Color4(1.0f, 1.0f, 0.0f, 1.0f));
}
}
// The main method of the engine that re-computes each vertex projection
// during each frame
public void Render(Camera camera, params Mesh[] meshes)
{
// To understand this part, please read the prerequisite resources
var viewMatrix = Matrix.LookAtLH(camera.Position, camera.Target, Vector3.UnitY);
var projectionMatrix = Matrix.PerspectiveFovRH(0.78f,
(float)bmp.PixelWidth / bmp.PixelHeight,
0.01f, 1.0f);
foreach (Mesh mesh in meshes)
{
// Be careful to apply rotation before translation
var worldMatrix = Matrix.RotationYawPitchRoll(mesh.Rotation.Y,
mesh.Rotation.X, mesh.Rotation.Z) *
Matrix.Translation(mesh.Position);
var transformMatrix = worldMatrix * viewMatrix * projectionMatrix;
foreach (var vertex in mesh.Vertices)
{
// First, we project the 3D coordinates into the 2D space
var point = Project(vertex, transformMatrix);
// Then we can draw on screen
DrawPoint(point);
}
}
}
}
}
Putting it together
We finally need to create a mesh (our cube), create a camera targeting our mesh, and instantiate our Device object.
This is the part that has to do with rendering our output on the screen.
During each call to the handler registered to the rendering loop, we will run the following logic:
– Clear the screen and all associated pixels with black ones (Clear() function)
– Update the various position & rotation values of our meshes
– Render them into the back buffer by doing the required matrix operations (Render() function)
– Display them on screen by flushing the back buffer data into the front buffer (Present() function)
If your XAML knowledge is shaky at best, no worries, I've got you.
Go to your MainPage.xaml.cs file and paste this:
using System;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Media.Imaging;
using SharpDX;
// The Blank Page item template is documented at http://go.microsoft.com/fwlink/?LinkId=234238
namespace cengine
{
/// <summary>
/// An empty page that can be used on its own or navigated to within a Frame.
/// </summary>
public sealed partial class MainPage : Page
{
private Device device;
Mesh mesh = new Mesh("Cube", 8);
Camera mera = new Camera();
public MainPage()
{
this.InitializeComponent();
// wire the Loaded event here if you haven't declared Loaded="Page_Loaded" in the XAML
this.Loaded += Page_Loaded;
}
private void Page_Loaded(object sender, RoutedEventArgs e)
{
// Choose the back buffer resolution here
WriteableBitmap bmp = new WriteableBitmap(640, 480);
device = new Device(bmp);
// Our XAML Image control
frontBuffer.Source = bmp;
mesh.Vertices[0] = new Vector3(-1, 1, 1);
mesh.Vertices[1] = new Vector3(1, 1, 1);
mesh.Vertices[2] = new Vector3(-1, -1, 1);
mesh.Vertices[3] = new Vector3(-1, -1, -1);
mesh.Vertices[4] = new Vector3(-1, 1, -1);
mesh.Vertices[5] = new Vector3(1, 1, -1);
mesh.Vertices[6] = new Vector3(1, -1, 1);
mesh.Vertices[7] = new Vector3(1, -1, -1);
mera.Position = new Vector3(0, 0, 10.0f);
mera.Target = Vector3.Zero;
// Registering to the XAML rendering loop
CompositionTarget.Rendering += CompositionTarget_Rendering;
}
// Rendering loop handler
void CompositionTarget_Rendering(object sender, object e)
{
device.Clear(0, 0, 0, 255);
// rotating slightly the cube during each frame rendered
mesh.Rotation = new Vector3(mesh.Rotation.X + 0.01f, mesh.Rotation.Y + 0.01f, mesh.Rotation.Z);
// Doing the various matrix operations
device.Render(mera, mesh);
// Flushing the back buffer into the front buffer
device.Present();
}
}
}
Now go to your MainPage.xaml file and add an Image control named frontBuffer.
Do this by adding this code block inside the Grid:
<Image x:Name="frontBuffer"/>
If you look at the code-behind file MainPage.xaml.cs, you will see that our XAML Image control is referenced by the name frontBuffer.
For those who did not abandon all hope upon entry: if you've managed to follow the tutorial properly up to this point, you should obtain something like this:
Wireframing 101.1
Here we’re going to learn how to draw lines between each vertex & the concept of faces/triangles.
Define the code for a Face object. It's a very simple object: just a set of 3 indexes.
Create a class for your Face object. A face is simply a structure containing 3 values, which are indexes pointing into the Vertices array of the mesh to be rendered. Name it: Face.cs
Let’s start by coding a simple algorithm. To draw a line between 2 vertices, we’re going to use the following logic:
– if the distance between the 2 points (point0 & point1) is less than 2 pixels, there’s nothing to do
– otherwise, we’re finding the middle point between both points (point0 coordinates + (point1 coordinates – point0 coordinates) / 2)
– we’re drawing that point on screen
– we’re launching this algorithm recursively between point0 & middle point and between middle point & point1
Here is the code to do that:
Add this method in your Device.cs
public void DrawLine(Vector2 point0, Vector2 point1)
{
var dist = (point1 - point0).Length();
// If the distance between the 2 points is less than 2 pixels
// We're exiting
if (dist < 2)
return;
// Find the middle point between first & second point
Vector2 middlePoint = point0 + (point1 - point0)/2;
// We draw this point on screen
DrawPoint(middlePoint);
// Recursive algorithm launched between first & middle point
// and between middle & second point
DrawLine(point0, middlePoint);
DrawLine(middlePoint, point1);
}
You need to update the rendering loop in Render() (replacing the inner foreach over the vertices) to use this new piece of code:
for (var i = 0; i < mesh.Vertices.Length - 1; i++)
{
var point0 = Project(mesh.Vertices[i], transformMatrix);
var point1 = Project(mesh.Vertices[i + 1], transformMatrix);
DrawLine(point0, point1);
}
Run your code and see what you get.
Make sure your code is working without any errors at this point.
Now, back to the Face object we mentioned earlier. Here is Face.cs:
namespace cengine
{
public struct Face
{
public int A;
public int B;
public int C;
}
}
Refactor your Mesh class to also use the Face definition:
public class Mesh
{
public string Name { get; set; }
public Vector3[] Vertices { get; private set; }
public Face[] Faces { get; set; }
public Vector3 Position { get; set; }
public Vector3 Rotation { get; set; }
public Mesh(string name, int verticesCount, int facesCount)
{
Vertices = new Vector3[verticesCount];
Faces = new Face[facesCount];
Name = name;
}
}
We now need to update the Render() function of our Device class to iterate through all the faces defined and draw the associated triangles:
foreach (var face in mesh.Faces)
{
var vertexA = mesh.Vertices[face.A];
var vertexB = mesh.Vertices[face.B];
var vertexC = mesh.Vertices[face.C];
var pixelA = Project(vertexA, transformMatrix);
var pixelB = Project(vertexB, transformMatrix);
var pixelC = Project(vertexC, transformMatrix);
DrawLine(pixelA, pixelB);
DrawLine(pixelB, pixelC);
DrawLine(pixelC, pixelA);
}
Finally, we need to declare the mesh associated with our cube properly, with its 12 faces, to make this new code work as expected.
In your MainPage.xaml.cs, swap the single Mesh field for a collection such as private List<Mesh> meshes = new List<Mesh>(); (which needs using System.Collections.Generic;), then change the following code and refactor the file to use it:
var mesh = new cengine.Mesh("Cube", 8, 12);
meshes.Add(mesh);
mesh.Vertices[0] = new Vector3(-1, 1, 1);
mesh.Vertices[1] = new Vector3(1, 1, 1);
mesh.Vertices[2] = new Vector3(-1, -1, 1);
mesh.Vertices[3] = new Vector3(1, -1, 1);
mesh.Vertices[4] = new Vector3(-1, 1, -1);
mesh.Vertices[5] = new Vector3(1, 1, -1);
mesh.Vertices[6] = new Vector3(1, -1, -1);
mesh.Vertices[7] = new Vector3(-1, -1, -1);
mesh.Faces[0] = new Face { A = 0, B = 1, C = 2 };
mesh.Faces[1] = new Face { A = 1, B = 2, C = 3 };
mesh.Faces[2] = new Face { A = 1, B = 3, C = 6 };
mesh.Faces[3] = new Face { A = 1, B = 5, C = 6 };
mesh.Faces[4] = new Face { A = 0, B = 1, C = 4 };
mesh.Faces[5] = new Face { A = 1, B = 4, C = 5 };
mesh.Faces[6] = new Face { A = 2, B = 3, C = 7 };
mesh.Faces[7] = new Face { A = 3, B = 6, C = 7 };
mesh.Faces[8] = new Face { A = 0, B = 2, C = 7 };
mesh.Faces[9] = new Face { A = 0, B = 4, C = 7 };
mesh.Faces[10] = new Face { A = 4, B = 5, C = 6 };
mesh.Faces[11] = new Face { A = 4, B = 6, C = 7 };
You should now have this beautiful rotating cube:
There is an optimized way to draw our lines, using Bresenham's line algorithm. It's faster and sharper than our current simple recursive version.
public void DrawBline(Vector2 point0, Vector2 point1)
{
int x0 = (int)point0.X;
int y0 = (int)point0.Y;
int x1 = (int)point1.X;
int y1 = (int)point1.Y;
var dx = Math.Abs(x1 - x0);
var dy = Math.Abs(y1 - y0);
var sx = (x0 < x1) ? 1 : -1;
var sy = (y0 < y1) ? 1 : -1;
var err = dx - dy;
while (true) {
DrawPoint(new Vector2(x0, y0));
if ((x0 == x1) && (y0 == y1)) break;
var e2 = 2 * err;
if (e2 > -dy) { err -= dy; x0 += sx; }
if (e2 < dx) { err += dx; y0 += sy; }
}
}
In the Render function, replace the call to DrawLine with DrawBline and you should notice that the result is a bit more fluid and a bit sharper.
How to load a mesh
For expediency I will refrain from getting too deep into the details from here on, and I will share the link to my working sample at the end of the tutorial.
3D modelers help the collaboration between 3D designers and developers. A designer can use her favorite tools to build scenes or meshes (3D Studio Max, Maya, Blender, etc.). She will then export her work into a file that will be loaded by the developers, who will push the meshes into their real-time 3D engine. There are several file formats available to serialize the work done by the artists. In our case, we're going to use JSON.
I'll give you a link to download the mesh we will be using today.
If you're already conversant with 3D design you can use any mesh of your choice, otherwise download this
Note: this monkey is named Suzanne and is very well-known in the 3D/gaming community. By knowing her, you’re now a proud member of this cool community! Welcome!
You will also need to change the properties of the file you include in the solution: switch “Build Action” to “Content” and set “Copy to Output Directory” to “Copy always”.
Otherwise, the file won’t be found.
In this part we add logic to the Device class to load the mesh asynchronously.
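Still, here is a minimal sketch of what that loader can look like. It assumes the .babylon export stores, per mesh, a flat vertices float array (whose stride depends on uvCount), an indices array and a position array; check the file you downloaded in case the field names differ. It also assumes using System.Collections.Generic; and using System.Threading.Tasks; at the top of Device.cs:
// Sketch only: an asynchronous .babylon JSON loader for Device.cs.
// Assumes: using System.Collections.Generic; and using System.Threading.Tasks;
// and that the export stores each mesh as a flat "vertices" float array whose
// stride depends on "uvCount", plus an "indices" array and a "position" array.
public async Task<Mesh[]> LoadJSONFileAsync(string fileName)
{
    var meshes = new List<Mesh>();
    var file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(fileName);
    var data = await Windows.Storage.FileIO.ReadTextAsync(file);
    dynamic jsonObject = Newtonsoft.Json.JsonConvert.DeserializeObject(data);

    for (var meshIndex = 0; meshIndex < jsonObject.meshes.Count; meshIndex++)
    {
        var verticesArray = jsonObject.meshes[meshIndex].vertices;
        var indicesArray = jsonObject.meshes[meshIndex].indices;
        var uvCount = jsonObject.meshes[meshIndex].uvCount.Value;

        // stride of the flat vertices array depends on how many UV sets were exported
        var verticesStep = 1;
        switch ((int)uvCount)
        {
            case 0: verticesStep = 6; break;
            case 1: verticesStep = 8; break;
            case 2: verticesStep = 10; break;
        }

        var verticesCount = verticesArray.Count / verticesStep;
        var facesCount = indicesArray.Count / 3;
        var mesh = new Mesh(jsonObject.meshes[meshIndex].name.Value, verticesCount, facesCount);

        // fill the vertices (only X, Y, Z for now; normals and UVs come later)
        for (var index = 0; index < verticesCount; index++)
        {
            var x = (float)verticesArray[index * verticesStep].Value;
            var y = (float)verticesArray[index * verticesStep + 1].Value;
            var z = (float)verticesArray[index * verticesStep + 2].Value;
            mesh.Vertices[index] = new Vector3(x, y, z);
        }

        // fill the faces from the indices array (3 indexes per triangle)
        for (var index = 0; index < facesCount; index++)
        {
            var a = (int)indicesArray[index * 3].Value;
            var b = (int)indicesArray[index * 3 + 1].Value;
            var c = (int)indicesArray[index * 3 + 2].Value;
            mesh.Faces[index] = new Face { A = a, B = b, C = c };
        }

        // the mesh position exported by Blender
        var position = jsonObject.meshes[meshIndex].position;
        mesh.Position = new Vector3((float)position[0].Value, (float)position[1].Value, (float)position[2].Value);
        meshes.Add(mesh);
    }
    return meshes.ToArray();
}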
Finally, we need to update our main function to call this new function instead of creating our cube manually. As we may also have several meshes to animate, we also need to update the rotation values of every loaded mesh during each tick:
meshes = await device.LoadJSONFileAsync("monkey.babylon");
Rasterization
Rasterization is the process of taking geometric primitives (such as triangles or polygons) defined in 3D space and converting them into 2D pixels or fragments on the screen. This process involves several steps:
Vertex Transformation: First, the 3D coordinates of vertices in an object or scene are transformed using a series of matrices (including the model, view, and projection matrices) to place them in screen space.
Clipping: After transformation, primitives that are outside the view frustum (the portion of 3D space that will be visible in the final image) are clipped or removed from further processing.
Primitive Assembly: Triangles (or other primitives) are assembled from the transformed vertices, and additional data like texture coordinates and colors are interpolated across the surface of each primitive.
Rasterization: The assembled primitives are then rasterized, meaning they are divided into fragments or pixels that correspond to the 2D image plane. This process determines which fragments are covered by the primitive and generates a list of fragments to be shaded.
Fragment Shading: Finally, each fragment undergoes shading computations, which determine its color and other attributes based on lighting, textures, and other rendering effects.
Rasterization is a crucial step in real-time rendering because it allows complex 3D scenes to be efficiently displayed on a 2D screen. It's a process that takes advantage of the discrete nature of digital screens, which are made up of individual pixels.
Z-buffer (Depth Buffer):
The Z-buffer, also known as the depth buffer, is a critical component in the rasterization pipeline. It is a 2D array (often with the same dimensions as the screen) that stores depth values (Z-values) for each pixel or fragment in the image. The Z-value represents the distance of the pixel or fragment from the viewer's perspective.
The Z-buffer is used to address the "hidden surface problem" in computer graphics, which is determining which objects or fragments are visible and which are obscured by others. During the rasterization process, for each pixel being processed, the Z-buffer is checked. If the new fragment being rasterized is closer to the viewer than the fragment already stored in the Z-buffer, the new fragment's color and depth value replace the old values in the buffer. If the new fragment is farther away, it is discarded.
This Z-buffering technique ensures that, no matter the order in which triangles are drawn, only the fragments nearest to the camera end up on screen, creating the illusion of a correctly ordered 3D scene. It is crucial for handling occlusion and correctly rendering complex scenes with overlapping objects.
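In our engine, the Z-buffer can be as simple as a float array sitting next to the back buffer. Here is a hedged sketch of how the depth test could slot into Device.cs, assuming we add a depthBuffer field sized PixelWidth * PixelHeight, reset every entry to float.MaxValue in Clear(), and extend PutPixel to take a z value:
// Sketch only: a per-pixel depth test for Device.cs.
// Assumes a new field "private float[] depthBuffer;" allocated in the constructor
// as new float[bmp.PixelWidth * bmp.PixelHeight] and reset to float.MaxValue in Clear().
public void PutPixel(int x, int y, float z, Color4 color)
{
    var index = x + y * bmp.PixelWidth;

    if (depthBuffer[index] < z)
        return; // a previously drawn fragment is closer to the camera: discard this one

    depthBuffer[index] = z;

    var index4 = index * 4;
    backBuffer[index4] = (byte)(color.Blue * 255);
    backBuffer[index4 + 1] = (byte)(color.Green * 255);
    backBuffer[index4 + 2] = (byte)(color.Red * 255);
    backBuffer[index4 + 3] = (byte)(color.Alpha * 255);
}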
Performance
We've learned how to fill our triangles. Since our 3D software engine is CPU-based, this really starts to cost a lot of CPU time. The good news is that today's CPUs are multi-core, so we can imagine using parallelism to boost performance. We're also going to look at some simple tips that can speed up this kind of rendering loop. Indeed, we're going to move from 5 FPS to 50 FPS, a 10X performance boost!
The first step is to compute the FPS (Frames Per Second), to be able to check whether we gain performance when we modify our algorithm.
We need to know the delta time between two rendered frames. We simply capture the current time, draw a new frame (CompositionTarget.Rendering in XAML), capture the current time again and compare it to the previously saved time. The result is in milliseconds. To obtain the FPS, simply divide 1000 by this result. For instance, at 16.66 ms, the optimal delta time, you will have 60 FPS.
In conclusion, add a new TextBlock XAML control, name it “fps” and use this code to compute the FPS:
DateTime previousDate;
void CompositionTarget_Rendering(object sender, object e)
{
// Fps
var now = DateTime.Now;
var currentFps = 1000.0 / (now - previousDate).TotalMilliseconds;
previousDate = now;
fps.Text = string.Format("{0:0.00} fps", currentFps);
// Rendering loop
device.Clear(0, 0, 0, 255);
foreach (var mesh in meshes)
{
mesh.Rotation = new Vector3(mesh.Rotation.X, mesh.Rotation.Y + 0.01f, mesh.Rotation.Z);
device.Render(mera, mesh);
}
device.Present();
}
Using this code, at the native resolution of my HP Pavilion (1366 x 768), I'm averaging 5 FPS with the C# solution shared above. My laptop has an Intel Core i7-3667U with its integrated GPU. It's a hyper-threaded dual-core CPU, so it shows 4 logical CPUs.
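As for the parallelism mentioned above, here is a hedged sketch of the idea applied to the face loop in Render(), using Parallel.For (which needs using System.Threading.Tasks;). The same pattern applies if you are already filling triangles with a DrawTriangle method; just be aware that two faces can touch the same pixel, so the real code has to make the depth test thread-safe (for instance with a per-pixel lock):
// Sketch only: spreading the per-face work across CPU cores.
// Assumes: using System.Threading.Tasks; and a thread-safe PutPixel/depth test,
// otherwise two faces writing the same pixel can race.
Parallel.For(0, mesh.Faces.Length, faceIndex =>
{
    var face = mesh.Faces[faceIndex];
    var vertexA = mesh.Vertices[face.A];
    var vertexB = mesh.Vertices[face.B];
    var vertexC = mesh.Vertices[face.C];

    var pixelA = Project(vertexA, transformMatrix);
    var pixelB = Project(vertexB, transformMatrix);
    var pixelC = Project(vertexC, transformMatrix);

    DrawBline(pixelA, pixelB);
    DrawBline(pixelB, pixelC);
    DrawBline(pixelC, pixelA);
});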
Light
We're now going to discover probably the best part of this tutorial: how to handle lighting!
The first algorithm reviewed is Flat Shading. It uses per-face normals, so we will still clearly see the polygons with this approach. Gouraud Shading takes us one step further: it uses per-vertex normals, and interpolates the color per pixel using 3 normals.
Flat shading assigns a single color to each polygon and is computationally efficient, but can produce blocky results. Gouraud shading calculates vertex colors and interpolates them across a polygon's surface to create smoother shading transitions, providing more realistic results at a moderate computational cost. The choice between these techniques depends on the desired visual style and the available computational resources.
Here we will create a new class, Vertex.cs.
This class will hold the information for our 3D vertex:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using SharpDX;
namespace cengine
{
public struct Vertex
{
public Vector3 Normal;
public Vector3 Coordinates;
public Vector3 WorldCoordinates;
}
}
Create another class ScanLineData.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace cengine
{
public struct ScanLineData
{
public int currentY;
public float ndotla;
public float ndotlb;
public float ndotlc;
public float ndotld;
}
}
This will require various slight modifications to the code. The first one is the way we load the JSON file exported by Blender: we now need to load the per-vertex normals and build Vertex objects instead of Vector3 objects in the Vertices array.
(Involves a lot of refactoring your Device.cs class to handle the changes)
Here are all the methods/functions that have been updated:
– Project() now works on the Vertex structure and projects the vertex coordinates in 3D (using the world matrix) as well as the per-vertex normal.
– DrawTriangle() now takes Vertex structures as input, computes the NDotL value with the ComputeNDotL method and calls ProcessScanLine with that data.
– ComputeNDotL() computes the cosine of the angle between the normal and the light direction (see the sketch just after this list).
– ProcessScanLine() now varies the color using the NDotL value sent by DrawTriangle. We currently have only 1 color per triangle since we're using Flat Shading.
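Here is that sketch: a minimal ComputeNDotL for Device.cs, assuming the light is a simple point light whose position the caller passes in:
// Sketch only: the cosine of the angle between a vertex normal and the direction
// from the vertex towards a (hypothetical) point light, clamped to 0 so that
// faces pointing away from the light simply stay dark.
float ComputeNDotL(Vector3 vertex, Vector3 normal, Vector3 lightPosition)
{
    var lightDirection = lightPosition - vertex;

    normal.Normalize();
    lightDirection.Normalize();

    return Math.Max(0, Vector3.Dot(normal, lightDirection));
}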
If you've managed to understand Flat Shading, you'll see that Gouraud Shading is not complex at all. This time, rather than using 1 unique normal per face, and thus a unique color per face, we're going to use 3 normals: 1 per vertex of the triangle. We will then have 3 levels of color defined, and we will interpolate the color of each pixel between the vertices using the same kind of interpolation as before. With this interpolation we get continuous lighting across our triangles.
Textures
We're going to see how to apply a texture to a mesh by using mapping coordinates exported from Blender. If you've managed to understand the previous sections, applying textures will be a piece of cake. The main concept is, once again, to interpolate some data between each vertex. In the second part of this section, we'll see how to boost the performance of our rendering algorithm by displaying only visible faces, using a back-face culling approach.
At the end of this tutorial, you will have this final rendering inside our CPU-based 3D software engine:
Let's start with the Wikipedia definition:
“A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons.”
Pass the Texture information in the flow
I won't dig into every detail, as you've got the complete source to download a bit below. Let's instead review, at a high level, what you need to do:
– add a Texture property to the Mesh class and a Vector2 property named TextureCoordinates to the Vertex structure
– update ScanLineData to embed 8 more floats: the UV coordinates per vertex (ua, ub, uc, ud & va, vb, vc, vd)
– update the Project method/function to return a new Vertex with the TextureCoordinates passed through as-is
– pass a Texture object as the last parameter to the ProcessScanLine and DrawTriangle methods/functions
– fill the new ScanLineData structure in DrawTriangle with the appropriate UV coordinates
– interpolate the UVs in ProcessScanLine on Y to get SU/SV & EU/EV (start U / start V / end U / end V), then interpolate U and V on X, and look up the corresponding color in the texture. This texture color is mixed with the native object's color (always white in our case) and the light quantity measured with the NDotL operation on the normal.
Note: our Project method could be seen as what we name a “Vertex Shader” in a 3D hardware engine and our ProcessScanLine method could be seen as a “Pixel Shader”.
I’m sharing in this article only the new ProcessScanLine method which is really the main part to be updated:
void ProcessScanLine(ScanLineData data, Vertex va, Vertex vb, Vertex vc, Vertex vd, Color4 color, Texture texture)
{
Vector3 pa = va.Coordinates;
Vector3 pb = vb.Coordinates;
Vector3 pc = vc.Coordinates;
Vector3 pd = vd.Coordinates;
// Thanks to current Y, we can compute the gradient to compute others values like
// the starting X (sx) and ending X (ex) to draw between
// if pa.Y == pb.Y or pc.Y == pd.Y, gradient is forced to 1
var gradient1 = pa.Y != pb.Y ? (data.currentY - pa.Y) / (pb.Y - pa.Y) : 1;
var gradient2 = pc.Y != pd.Y ? (data.currentY - pc.Y) / (pd.Y - pc.Y) : 1;
int sx = (int)Interpolate(pa.X, pb.X, gradient1);
int ex = (int)Interpolate(pc.X, pd.X, gradient2);
// starting Z & ending Z
float z1 = Interpolate(pa.Z, pb.Z, gradient1);
float z2 = Interpolate(pc.Z, pd.Z, gradient2);
// Interpolating normals on Y
var snl = Interpolate(data.ndotla, data.ndotlb, gradient1);
var enl = Interpolate(data.ndotlc, data.ndotld, gradient2);
// Interpolating texture coordinates on Y
var su = Interpolate(data.ua, data.ub, gradient1);
var eu = Interpolate(data.uc, data.ud, gradient2);
var sv = Interpolate(data.va, data.vb, gradient1);
var ev = Interpolate(data.vc, data.vd, gradient2);
// drawing a line from left (sx) to right (ex)
for (var x = sx; x < ex; x++)
{
float gradient = (x - sx) / (float)(ex - sx);
// Interpolating Z, normal and texture coordinates on X
var z = Interpolate(z1, z2, gradient);
var ndotl = Interpolate(snl, enl, gradient);
var u = Interpolate(su, eu, gradient);
var v = Interpolate(sv, ev, gradient);
Color4 textureColor;
if (texture != null)
textureColor = texture.Map(u, v);
else
textureColor = new Color4(1, 1, 1, 1);
// changing the native color value using the cosine of the angle
// between the light vector and the normal vector
// and the texture color
DrawPoint(new Vector3(x, data.currentY, z), color * ndotl * textureColor);
}
}
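The Texture class used above isn't shown in this article (it's in the full source), but here is a hedged sketch of what a UWP version could look like, assuming the image is shipped as “Content”, loaded into a WriteableBitmap of a known size, and sampled as BGRA bytes with wrapped UV coordinates:
using System;
using System.Runtime.InteropServices.WindowsRuntime;
using Windows.UI.Xaml.Media.Imaging;
using SharpDX;

namespace cengine
{
    // Sketch only: a minimal texture that loads an image file shipped as "Content"
    // and samples it with wrapped UV coordinates. The width/height passed to the
    // constructor are assumed to match the image (e.g. 512 x 512).
    public class Texture
    {
        private byte[] internalBuffer;
        private readonly int width;
        private readonly int height;

        public Texture(string filename, int width, int height)
        {
            this.width = width;
            this.height = height;
            Load(filename);
        }

        private async void Load(string filename)
        {
            var file = await Windows.ApplicationModel.Package.Current.InstalledLocation.GetFileAsync(filename);
            using (var stream = await file.OpenReadAsync())
            {
                var bmp = new WriteableBitmap(width, height);
                bmp.SetSource(stream);
                internalBuffer = bmp.PixelBuffer.ToArray();
            }
        }

        public Color4 Map(float tu, float tv)
        {
            // the image may not be loaded yet (Load is fire-and-forget): fall back to white
            if (internalBuffer == null)
                return new Color4(1, 1, 1, 1);

            // wrap the UV coordinates so they always land inside the texture
            int u = Math.Abs((int)(tu * width) % width);
            int v = Math.Abs((int)(tv * height) % height);
            int pos = (u + v * width) * 4;

            byte b = internalBuffer[pos];
            byte g = internalBuffer[pos + 1];
            byte r = internalBuffer[pos + 2];
            byte a = internalBuffer[pos + 3];

            return new Color4(r / 255.0f, g / 255.0f, b / 255.0f, a / 255.0f);
        }
    }
}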
To get the nice rendering you've seen at the top of this section, you need to load a new version of Suzanne, modified by me and exported from Blender with the UV coordinates. For that, please download these 2 files:
– Suzanne Blender model with UV coordinates set
– the 512×512 texture image to load with
Make sure both files are in the root directory of your project, and don't forget to update their properties: switch “Build Action” to “Content” and set “Copy to Output Directory” to “Copy always”.
Finally, let's have a look at the last optimization of our 3D software engine.
Back face culling
Wikipedia:
“In computer graphics, back-face culling determines whether a polygon of a graphical object is visible <…> One method of implementing back-face culling is by discarding all polygons where the dot product of their surface normal and the camera-to-polygon vector is greater than or equal to zero.”
Basically, back-face culling is a rendering optimization technique where the renderer avoids drawing polygons (usually triangles) that face away from the viewer, as they are not visible and do not contribute to the final image. This improves rendering efficiency by reducing the number of polygons that need to be processed and rendered.
The idea in our case is to pre-compute each face's surface normal during the JSON loading phase, using the same algorithm used for flat shading. Once done, in the Render method/function we transform the surface normal into the world view (the world as seen by the camera) and check its Z value. If it's >= 0, we don't draw the triangle at all, as this means the face is not visible from the camera's point of view.
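In code, the check inside Render() could look like the sketch below, assuming the Face structure has been extended with a precomputed Normal field (a hypothetical addition filled in during loading):
// Sketch only: skipping faces that point away from the camera inside Render().
// Assumes Face has gained a "public Vector3 Normal;" field, pre-computed while loading the JSON.
var worldView = worldMatrix * viewMatrix;

foreach (var face in mesh.Faces)
{
    // rotate the face normal into camera (view) space
    var transformedNormal = Vector3.TransformNormal(face.Normal, worldView);

    if (transformedNormal.Z >= 0)
        continue; // facing away from the camera: don't draw it at all

    var vertexA = mesh.Vertices[face.A];
    var vertexB = mesh.Vertices[face.B];
    var vertexC = mesh.Vertices[face.C];
    // ... project and draw the triangle as before
}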
This tutorial is now finished. I had a lot of pleasure writing it.
Here is the link to the GitHub repository.
If you get stuck in any way, have modifications, or notice a bug (there is a bug in my code, can you find it? 😉),
please do not hesitate to reach out.
As always, It's your boy Kimonic.
Wishing you peace and blessings.