This adds the first FBX PBR import path. Materials that have been
exported via the Stingray PBS preset should be picked up as native
metallic/roughness, and exported essentially 1:1 to the glTF output.
In more detail, this commit:
- (Re)introduces the STB header libraries as a dependency. We currently
use them for reading and writing images. In time we may need a more
dedicated PNG compression library.
- Generalizes FbxMaterialAccess to return different subclasses of
FbxMaterialInfo; currently FbxRoughMetMaterialInfo and
FbxTraditionalMaterialInfo.
- FbxTraditionalMaterialInfo is populated from the canonical
FbxSurfaceMaterial classes.
- FbxRoughMetMaterialInfo is currently populated through the Stingray
PBS set of properties, further documented in the code.
- RawMaterial was in turn generalized to feature a pluggable,
type-specific RawMatProps struct; current implementations are,
unsurprisingly, RawTraditionalMatProps and RawMetRoughMatProps. These
are basically just lists of per-surface constants, e.g. diffuseFactor or
roughness.
- In the third phase, glTF generation, the bulk of the changes are
concerned with creating packed textures of the type needed by e.g. the
metallic-roughness struct, where one colour channel holds roughness and
another holds metallic. This is done with a somewhat pluggable "map
source pixels to destination pixel" mechanism. More work will likely be
needed here in the future to accommodate more demanding mappings.
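As a concrete illustration of that mechanism, here is a minimal
channel-packing sketch using stb_image_write for the PNG encoding. The
function and variable names are illustrative rather than the converter's
own; the channel layout follows glTF's metallic-roughness convention
(roughness in green, metallic in blue).

```cpp
// Minimal channel-packing sketch (illustrative names, not the converter's).
// In exactly one translation unit: #define STB_IMAGE_WRITE_IMPLEMENTATION
#include <stb_image_write.h>

#include <cstdint>
#include <functional>
#include <vector>

// "Map source pixels to destination pixel" hook.
using PixelMerger =
    std::function<void(uint8_t rough, uint8_t metal, uint8_t* outRgb)>;

static bool writeMetRoughPng(
    const std::vector<uint8_t>& roughness,  // single-channel, width*height
    const std::vector<uint8_t>& metallic,   // single-channel, width*height
    int width, int height, const char* outPath) {
  const PixelMerger merge = [](uint8_t rough, uint8_t metal, uint8_t* outRgb) {
    outRgb[0] = 0;      // R: unused by the metallic-roughness texture
    outRgb[1] = rough;  // G: roughness
    outRgb[2] = metal;  // B: metallic
  };
  std::vector<uint8_t> out(static_cast<size_t>(width) * height * 3);
  for (int i = 0; i < width * height; i++) {
    merge(roughness[i], metallic[i], &out[i * 3]);
  }
  // stb_image_write handles the actual PNG encoding.
  return stbi_write_png(outPath, width, height, 3, out.data(), width * 3) != 0;
}
```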
There's also a lot of code to convert from one representation to
another. The most useful, but also the least well-supported, conversion
is from the old workflow (diffuse, specular, shininess) to
metallic/roughness. Going from PBR spec/gloss to PBR met/rough is hard
enough, but we go one step sillier and treat shininess as if it were
glossiness, which it certainly isn't. More work is needed here! But it's
still a fun proof of concept of sorts, and perhaps for some people it's
useful to just get *something* into the PBR world.
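To make the flavour of that conversion concrete, here is a deliberately
naive sketch that commits the same "shininess as glossiness" sin the
text admits to. The heuristics and names are illustrative only, not the
converter's actual math.

```cpp
#include <algorithm>

// Illustrative only: a crude traditional -> metallic/roughness guess,
// treating shininess as if it were a [0, 1] glossiness value.
struct MetRough {
  double baseColor[3];
  double metallic;
  double roughness;
};

static MetRough guessMetRough(
    const double diffuse[3], const double specular[3], double shininess) {
  const double specAvg = (specular[0] + specular[1] + specular[2]) / 3.0;
  const double diffAvg = (diffuse[0] + diffuse[1] + diffuse[2]) / 3.0;
  // The brighter the specular is relative to the diffuse, the more
  // metal-like we guess the surface to be.
  const double metallic = specAvg / (specAvg + diffAvg + 1e-6);

  MetRough out;
  out.metallic = metallic;
  // Pretend shininess is glossiness and invert it; see the caveat above.
  out.roughness = 1.0 - std::min(1.0, std::max(0.0, shininess));
  for (int i = 0; i < 3; i++) {
    // Blend base colour between the dielectric and metal interpretations.
    out.baseColor[i] = diffuse[i] * (1.0 - metallic) + specular[i] * metallic;
  }
  return out;
}
```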
We are at liberty to order our JSON any way we like (by spec) and we can
improve readability a lot by doing so. By default, this JSON library
uses an unordered map for objects, but it's relatively easy to switch in
a FIFO map that keeps track of the insertion order.
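Assuming the JSON library in question is nlohmann::json and the
insertion-ordered map is its companion fifo_map project, the switch
looks roughly like the workaround documented there (include paths
depend on how the headers are vendored):

```cpp
#include <fifo_map.hpp>
#include <json.hpp>

// fifo_map ignores the comparator it is given, so adapt its template
// signature to the one basic_json expects.
template <class K, class V, class IgnoredCompare, class A>
using insertion_ordered_map =
    nlohmann::fifo_map<K, V, nlohmann::fifo_map_compare<K>, A>;

using ordered_json = nlohmann::basic_json<insertion_ordered_map>;

// ordered_json j;
// j["asset"]["version"] = "2.0";
// j.dump(2);  // keys come out in insertion order
```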
It's perfectly fine for materials to have neither a diffuse texture nor
vertex colours. The old requirement that they have one dates back to a
time when the tool had more limited use cases.
To compensate: https://github.com/facebookincubator/FBX2glTF/issues/43
The FBX SDK absolutely claims that there is a normal layer to each
FbxShape, with non-trivial data, even when the corresponding FBX file,
upon visual inspection, explicitly contains nothing but zeroes. The only
conclusion I can draw is that the SDK is computing normals from
geometry, without being asked to, which seems kind of sketchy.
These computed normals are often not at all what the artist wanted; they
take up a lot of space -- often pointlessly, since if they're computed,
we could just as well compute them on the client -- and at least in the
case of three.js their inclusion uses up many of the precious 8 morph
target slots in the shader.
So, they are now opt-in, at least until we can solve the mystery of just
what goes on under the hood in the SDK.
Turns out Maya was always including normals in the FBX export; they were just a bit trickier to get to than originally surmised. We need to go through the proper element access formalities that take mapping and reference modes into account.
Luckily we already have a helper class for this, so let's lean on that.
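For reference, the stock FBX SDK version of that dance looks roughly as
follows; the converter goes through its own helper class, so treat this
as a sketch of the idea rather than the code that landed.

```cpp
#include <fbxsdk.h>

// Sketch: fetch the normal for one polygon-vertex, honouring the layer
// element's mapping mode (per control point vs. per polygon vertex) and
// reference mode (direct vs. index-to-direct).
static FbxVector4 readNormal(
    FbxMesh* mesh, int controlPointIndex, int polygonVertexIndex) {
  FbxGeometryElementNormal* elem = mesh->GetElementNormal(0);
  if (elem == nullptr) {
    return FbxVector4(0, 0, 0, 0);
  }
  int lookup =
      (elem->GetMappingMode() == FbxGeometryElement::eByControlPoint)
          ? controlPointIndex
          : polygonVertexIndex;
  if (elem->GetReferenceMode() != FbxGeometryElement::eDirect) {
    lookup = elem->GetIndexArray().GetAt(lookup);
  }
  return elem->GetDirectArray().GetAt(lookup);
}
```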
At the glTF level, transparency is a scalar; we just throw away any
color information in FBX TransparentColor. We still need to calculate
our total opacity from it, however. This is the right formula, which
additionally matches the deprecated (but still populated, by the Maya
exporter) 'Opacity' property.
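For illustration only, a sketch of one plausible reading of that
computation, assuming total opacity is one minus the average of the
TransparentColor channels; the names, and whether a transparency factor
is folded in separately, may differ in the actual code.

```cpp
#include <fbxsdk.h>

// Assumed formula, for illustration: collapse the RGB TransparentColor
// to a single transparency value by averaging, then invert for opacity.
static double opacityFromTransparentColor(const FbxDouble3& transparentColor) {
  const double avgTransparency =
      (transparentColor[0] + transparentColor[1] + transparentColor[2]) / 3.0;
  return 1.0 - avgTransparency;
}
```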
This adds blend shape / morph target functionality.
At the FBX level, a mesh can have a number of deformers associated with it. One such deformer type is the blend shape. A blend shape is a collection of channels, which do all the work. A channel can consist of a single target shape (the simple case) or multiple (a progressive morph). In the latter case, the artist has created in-between shapes, the assumption being that linear interpolation between a beginning shape and an end shape would be too crude. Each such target shape contains a complete set of new positions for each vertex of the deformed base mesh.
(Each target shape is also supposed to optionally carry a complete set of normals and tangents, but I've yet to see that work right; they always come through as zeroes. This is something to investigate in the future.)
So the number of glTF morph targets in a mesh is the total number of FBX target shapes associated with channels associated with blend shape deformers associated with that mesh! Yikes.
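In raw FBX SDK terms, that triple nesting (deformer, then channel, then
target shape) looks roughly like the sketch below; the converter wraps
this in its own access classes, so the structure rather than the exact
code is the point.

```cpp
#include <fbxsdk.h>

// Count the glTF morph targets a mesh will produce: every FbxShape of
// every channel of every blend shape deformer becomes one morph target.
static int countMorphTargets(FbxMesh* mesh) {
  int total = 0;
  const int deformerCount = mesh->GetDeformerCount(FbxDeformer::eBlendShape);
  for (int d = 0; d < deformerCount; d++) {
    auto* blendShape = static_cast<FbxBlendShape*>(
        mesh->GetDeformer(d, FbxDeformer::eBlendShape));
    for (int c = 0; c < blendShape->GetBlendShapeChannelCount(); c++) {
      FbxBlendShapeChannel* channel = blendShape->GetBlendShapeChannel(c);
      // More than one target shape in a channel means a progressive morph
      // with artist-authored in-between shapes.
      total += channel->GetTargetShapeCount();
    }
  }
  return total;
}
```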
The per-vertex data of each such target shape is added to a vector in RawVertex. A side effect of this is that vertices that participate in blend shapes must be made unique to the mesh in question, as opposed to general vertices which are shared across multiple surfaces.
Blend-shape-based animations become identical glTF morph target animations.
Fixes #17.
Lean on the excellent pre-existing support for creating multiple glTF
meshes from a single FBX mesh based on material type. All the triangles
with (at least one) non-opaque vertex get flagged as transparent
material. They will all go separately in their own mesh after the
CreateMaterialModels() gauntlet.
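The classification itself is as simple as it sounds; a minimal
illustration with placeholder types, not the converter's actual structs:

```cpp
// Placeholder vertex type for illustration; the real converter uses its
// own vertex representation.
struct VertexColorRgba {
  float r, g, b, a;
};

// A triangle is routed to the transparent material (and hence its own
// glTF mesh) if any of its three corners is non-opaque.
static bool isTransparentTriangle(
    const VertexColorRgba& a, const VertexColorRgba& b,
    const VertexColorRgba& c) {
  return a.a < 1.0f || b.a < 1.0f || c.a < 1.0f;
}
```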
Fixes #25.
We were warning against eInheritRSrs, which is actually the one type of
inheritance we're good with. It's eInheritRrSs we should freak out about.
That said, no need to do it for the root node -- at that point there is
no global transform to worry about.
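A sketch of the corrected check, using the stock FBX SDK enum and calls;
the warning text and surrounding structure are illustrative.

```cpp
#include <fbxsdk.h>

#include <cstdio>

// Warn only for eInheritRrSs, and only below the root node, where a
// parent's global transform actually comes into play.
static void checkInheritance(FbxScene* scene, FbxNode* node) {
  FbxTransform::EInheritType inheritType;
  node->GetTransformationInheritType(inheritType);
  if (node != scene->GetRootNode() &&
      inheritType == FbxTransform::eInheritRrSs) {
    std::printf(
        "Warning: node '%s' uses RrSs transform inheritance, which the "
        "converter does not handle faithfully.\n",
        node->GetName());
  }
}
```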
When we convert a file that's in our CWD, on Unix the folder component
of the path will simply be "", whereas opendir() wants ".".
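The gist of the fix, sketched as a self-contained helper with
illustrative names:

```cpp
#include <dirent.h>

#include <string>

// Sketch of the fix: when converting a file in the current working
// directory, the folder component of its path is effectively "", which
// opendir() rejects on Unix -- so fall back to ".".
static DIR* openContainingDir(const std::string& path) {
  const size_t slash = path.rfind('/');
  std::string folder =
      (slash == std::string::npos) ? "" : path.substr(0, slash + 1);
  if (folder.empty()) {
    folder = ".";  // the actual point of the change
  }
  return opendir(folder.c_str());
}
```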
I want to take another more substantial pass at texture resolution, once
we're out of urgent bugfix mode.
Some FBX files have index arrays that contain -1 (indeed, that are
nothing but negative ones). Presumably the intention is to specify "no
material". In any case, let's not segfault.
When we've successfully located a referenced texture image on the local
filesystem and we're generating non-binary, non-embedded output, copy
the source folder wholesale into the destination directory.
This means the output folder is always a full, free-standing deployment,
one that can be dragged into e.g. https://gltf-viewer.donmccurdy.com/
In the FBX world, (0, 0) is generally the lower left. By the glTF
specification, (0, 0) is the upper left. The only recourse is to
literally flip all texture files (generally unwise) or to remap the UV
space.
Is this confusing in an artist-to-engineer workflow? Maybe. But it's the
best option, and it seems reasonably easy to communicate.
To request unflipped coordinates, pass the --no-flip-v command-line switch.
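The remapping itself is tiny; sketched here with illustrative names,
applied per V coordinate during conversion and skipped when --no-flip-v
is given:

```cpp
// Flip the V coordinate so FBX's lower-left origin matches glTF's
// upper-left origin; a no-op when the user passes --no-flip-v.
static float remapV(float v, bool flipV) {
  return flipV ? 1.0f - v : v;
}
```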
* Further improvements to texture resolution.
- Move towards std::string over char * and FbxString where convenient.
- Make a clear distinction between textures whose image files have been
located and those that haven't; warn early in the latter case.
- Extend RawTexture so we always know the logical name in FBX, the
original file name in FBX, and the inferred location on the local
filesystem (roughly as sketched after this list).
- In non-binary mode, simply output the inferred local file basename as
the URI; this will be the correct relative path as long as the texture
files are located next to the .gltf and .bin files.
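Roughly, the three pieces of information that bullet refers to; the
field names here are guesses for illustration, not necessarily those in
the codebase:

```cpp
#include <string>

// Illustrative shape of the extended texture record.
struct RawTextureSketch {
  std::string name;          // logical texture name inside the FBX
  std::string fbxFileName;   // file name as recorded by the FBX exporter
  std::string fileLocation;  // resolved path on the local filesystem, if found
};
```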
Primary remaining urge for a follow-up PR:
- We should be copying texture image files into the .gltf output folder,
but before that we should switch to an off-the-shelf cross-platform
file manipulation library like https://github.com/cginternals/cppfs.
When we make that transition, all this texture resolution code will
undergo another refactoring.