**For an explanation of why we use tangent space, read this short piece of text.**

*Let's assume we're using Direct3D and HLSL.*

**Converting to Tangent (or texture) space**

Normals stored in the texture depend on the surface orientation and live in what's called Tangent Space. All the other lighting components, such as the view direction, are supplied in world space. Since we can't compare vectors that live in different spaces, why not convert every lighting component we need to compare against the normal into tangent space? Why not compare apples to apples?

Changing coordinate systems requires a transformation. I'll skip the hardcore math, but what I do want to explain here is that we need a matrix to transform from **world space to tangent space**. Just as we need a matrix to get world space from object space, we need a matrix to convert to tangent space. Remember this:

- We need the surface orientation, because that's what the texture normals depend on.
- We know everything about our surface (a triangle).
- Any lighting component we need in the Pixel Shader (light direction, view direction, surface direction) must be multiplied by the resulting matrix.

```cpp
/* We need the 3 triangle corner positions, the 3 triangle texture
   coordinates and a normal. Tangent and bitangent are the variables
   we're constructing. */

// Determine the surface orientation by calculating the triangle's edges
D3DXVECTOR3 edge1 = pos2 - pos1;
D3DXVECTOR3 edge2 = pos3 - pos1;
D3DXVec3Normalize(&edge1, &edge1);
D3DXVec3Normalize(&edge2, &edge2);

// Do the same in texture space
D3DXVECTOR2 texEdge1 = tex2 - tex1;
D3DXVECTOR2 texEdge2 = tex3 - tex1;
D3DXVec2Normalize(&texEdge1, &texEdge1);
D3DXVec2Normalize(&texEdge2, &texEdge2);

// The determinant tells us the orientation of the surface
float det = (texEdge1.x * texEdge2.y) - (texEdge1.y * texEdge2.x);

// Account for imprecision
D3DXVECTOR3 bitangenttest;
if (fabsf(det) < 1e-6f) {
    // (Almost) equal to zero means the surface lies flat on its back
    tangent.x = 1.0f;
    tangent.y = 0.0f;
    tangent.z = 0.0f;
    bitangenttest.x = 0.0f;
    bitangenttest.y = 0.0f;
    bitangenttest.z = 1.0f;
} else {
    det = 1.0f / det;
    tangent.x = (texEdge2.y * edge1.x - texEdge1.y * edge2.x) * det;
    tangent.y = (texEdge2.y * edge1.y - texEdge1.y * edge2.y) * det;
    tangent.z = (texEdge2.y * edge1.z - texEdge1.y * edge2.z) * det;
    bitangenttest.x = (-texEdge2.x * edge1.x + texEdge1.x * edge2.x) * det;
    bitangenttest.y = (-texEdge2.x * edge1.y + texEdge1.x * edge2.y) * det;
    bitangenttest.z = (-texEdge2.x * edge1.z + texEdge1.x * edge2.z) * det;
    D3DXVec3Normalize(&tangent, &tangent);
    D3DXVec3Normalize(&bitangenttest, &bitangenttest);
}

// The bitangent equals the cross product of the normal and the tangent
// running along the surface, so calculate it
D3DXVec3Cross(&bitangent, &normal, &tangent);

// Since we don't know whether we must negate it, compare it with the one
// computed above
float crossinv = (D3DXVec3Dot(&bitangent, &bitangenttest) < 0.0f) ? -1.0f : 1.0f;
bitangent *= crossinv;

/* ...and add it to our model buffers */
```
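To see the computation in isolation, here is a minimal, self-contained C++ sketch of the same math. The `Vec2`/`Vec3` structs and the helper functions are plain stand-ins of my own, not the D3DX API:

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec2 sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}
static Vec2 normalize(Vec2 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y);
    return { v.x / len, v.y / len };
}

// Compute the tangent and the sign-corrected bitangent of one triangle,
// given its corner positions, texture coordinates and surface normal.
void computeTangent(Vec3 pos1, Vec3 pos2, Vec3 pos3,
                    Vec2 tex1, Vec2 tex2, Vec2 tex3,
                    Vec3 normal, Vec3& tangent, Vec3& bitangent)
{
    // Surface orientation in position space and texture space
    Vec3 edge1 = normalize(sub(pos2, pos1));
    Vec3 edge2 = normalize(sub(pos3, pos1));
    Vec2 texEdge1 = normalize(sub(tex2, tex1));
    Vec2 texEdge2 = normalize(sub(tex3, tex1));

    float det = texEdge1.x * texEdge2.y - texEdge1.y * texEdge2.x;
    Vec3 bitangenttest;
    if (std::fabs(det) < 1e-6f) {
        // Degenerate texture mapping: fall back to a fixed frame
        tangent = { 1.0f, 0.0f, 0.0f };
        bitangenttest = { 0.0f, 0.0f, 1.0f };
    } else {
        det = 1.0f / det;
        tangent = normalize(Vec3{ (texEdge2.y * edge1.x - texEdge1.y * edge2.x) * det,
                                  (texEdge2.y * edge1.y - texEdge1.y * edge2.y) * det,
                                  (texEdge2.y * edge1.z - texEdge1.y * edge2.z) * det });
        bitangenttest = normalize(Vec3{ (-texEdge2.x * edge1.x + texEdge1.x * edge2.x) * det,
                                        (-texEdge2.x * edge1.y + texEdge1.x * edge2.y) * det,
                                        (-texEdge2.x * edge1.z + texEdge1.x * edge2.z) * det });
    }

    // Bitangent from the cross product, sign-corrected against the computed one
    bitangent = cross(normal, tangent);
    if (dot(bitangent, bitangenttest) < 0.0f) {
        bitangent = { -bitangent.x, -bitangent.y, -bitangent.z };
    }
}
```

For the unit triangle with positions (0,0,0), (1,0,0), (0,1,0), matching texture coordinates and normal (0,0,1), this yields tangent (1,0,0) and bitangent (0,1,0): the tangent follows the direction in which u grows, the bitangent the direction in which v grows.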

We need to create a 3x3 matrix to convert world-space vectors (such as the light direction) to surface-relative, tangent-space ones. This matrix is built by stacking the three components as the rows of a matrix, and then transposing it in the Vertex Shader:

```hlsl
// tangentin, binormalin and normalin are 3D vectors supplied by the CPU
float3x3 tbnmatrix = transpose(float3x3(tangentin, binormalin, normalin));

// Then multiply any vector we need in tangent space (the ones to be
// compared to the normal in the texture). For example, the light direction:
float3 lightdirtangent = mul(lightdir, tbnmatrix);
```
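In effect, multiplying a row vector by the transposed TBN matrix projects it onto the tangent, bitangent and normal axes. A small C++ sketch of the same math (the `Vec3` struct and function name are illustrative, not part of any API):

```cpp
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// World-to-tangent transform: with T, B and N as the rows of the TBN
// matrix, multiplying a row vector by transpose(TBN) is the same as
// taking the dot product with each axis.
Vec3 worldToTangent(Vec3 v, Vec3 tangent, Vec3 bitangent, Vec3 normal) {
    return { dot(v, tangent), dot(v, bitangent), dot(v, normal) };
}
```

A light direction pointing along the surface normal, for example, always maps to (0, 0, 1) in tangent space, no matter how the surface is oriented in the world.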

Then we're almost done. The only thing left is to pass all the converted vectors to the Pixel Shader. Inside that Pixel Shader, retrieve the normal from the texture. You should now have, for example, the light direction in tangent space. Then do your lighting calculations as you always would, with the only exception being the source of the normal:

```hlsl
// We're inside a Pixel Shader now.
// The texture coordinates are the same ones used for the diffuse color map.
float3 normal = tex2D(normalmapsampler, coordin);

// Color is stored in the [0,1] range (0 - 255), but we want our normals
// to be in the range [-1,1].
// Solution: multiply by 2 (yields [0,2]) and subtract 1 (yields [-1,1]).
normal = 2.0f * normal - 1.0f;

// Now that we've got our normal to work with, obtain (for example) the
// light direction for Phong shading.
// lightdirtangentin is the same vector as lightdirtangent in the VS above.
float3 lightdir = normalize(lightdirtangentin);

/* Use the variables as you always would with your favourite lighting model */
```
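As a CPU-side sanity check of the decode step and a basic Lambert diffuse term, here's a minimal C++ sketch (plain structs with illustrative names, not shader code):

```cpp
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Remap a normal-map texel from the [0,1] color range to a [-1,1] vector.
Vec3 decodeNormal(Vec3 texel) {
    return { 2.0f * texel.x - 1.0f,
             2.0f * texel.y - 1.0f,
             2.0f * texel.z - 1.0f };
}

// Lambert diffuse term: N dot L, clamped to zero. Both vectors must live
// in the same space -- here, tangent space.
float lambert(Vec3 normal, Vec3 lightdir) {
    return std::max(0.0f, dot(normal, lightdir));
}
```

A "flat" texel (0.5, 0.5, 1.0) decodes to the normal (0, 0, 1); lit head-on by a tangent-space light direction of (0, 0, 1), the diffuse term is 1.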


Hi,

In the snippet, before you compute the determinant, are "tex1", "tex2" and "tex3" texture coordinates, or 3D vectors in 2D (= 3DVector(x, y, 0))?

*They're D3DXVECTOR2's, or any fancy version of a struct containing 2 floats. Note that D3DXVECTOR2 only overloads the minus operator when both operands are D3DXVECTOR2, not D3DXVECTOR3.*

Thanks for the answer, but I didn't express it the right way. Are "tex1"-"tex3" texture coordinates, or 2D positions that used to be 3D positions (z member set to zero)?

*They're indeed texture coordinates, in 2D.*

OK, thanks for your patience and time. By the way: very good tutorials.