Now that we're writing both 16-bit and 32-bit integers, it's starting to
matter a little more how we slam even scalars into memory. This is maybe
not the fastest way to accomplish this, and I'm not crazy about the way
GLType works in general, but it does have the virtues of clarity and
expediency.
By oversight we had not included occlusionTexture in the core
MaterialData. While we're at it, bake occlusion into the red channel of
the merged metallic/roughness texture.
There seem to be few constraints on what values FBX properties can take. By contrast, glTF constrains e.g. common material factors to lie in [0, 1]. We take a simple approach and just clamp.
Previously, any PNG in RGBA format would cause its corresponding
texture to be flagged as transparent. This is very silly. We now
iterate over the alpha bytes, and only if one of them is not 255 do we
conclude there's alpha.
This was way overdue. Breaking up large meshes into many 65535-vertex
primitives can save a few bytes, but it's really a lot of complication
for minor benefit.
With this change the user can force short or long indices; the
default is to use shorts for smaller meshes and longs for larger ones.
- KHR_materials_common never had a real life in the glTF 2.0 world. One
day we may see a new extension for Phong/Blinn/Lambert.
- PBR_specular_glossiness is a poor fit for Stingray PBS (the only real
source of PBR we have) and has no advantage over PBR_metallic_roughness.
- The conversion we were doing for traditional materials to PBR made no
sense. Revert to a very simple formula: diffuse -> baseColor, simple
reasonable constants for metallic & roughness.
The user can now ask for normals to be computed NEVER (can easily cause
broken glTF if the source isn't perfect), MISSING (when the mesh simply
lacks normals), BROKEN (only empty normals are replaced), or
ALWAYS (perhaps if the normals in the source are junk).
I stole expressions from Gary Hsu's PBR conversion routines here:
3606e79717/extensions/Khronos/KHR_materials_pbrSpecularGlossiness/examples/convert-between-workflows/js/three.pbrUtilities.js
which is experimental enough as it is, but I had gone further into the
domain of madness and used this with *old* diffuse/specular values, not
PBR specular/glossiness.
As a result a lot of old content was coming up with 100% metal values
quite often, which in turn means completely ignoring diffuse when
assembling a new base colour...
I should rip out this whole conversion. But not just now...
It's technically valid for e.g. scale to have a zero dimension, which in
turn wreaks havoc on the rotation quaternion we get from the FBX SDK.
The simplest solution is to just leave any T/R/S vector out of the glTF
if it has any NaN component.
Be more flexible about reading various input formats (most especially
varying numbers of channels), and stop outputting RGBA PNGs for textures
that don't need it.
I'm not sure JPG generation ever worked right. But now it does.
Fix the naming issues. Nodes are now identified by pNode->GetUniqueID() instead of by name, and all dictionaries and node references are keyed by this ID rather than by name.
This adds the first FBX PBR import path. Materials that have been
exported via the Stingray PBS preset should be picked up as native
metallic/roughness, and exported essentially 1:1 to the glTF output.
In more detail, this commit:
- (Re)introduces the STB header libraries as a dependency. We currently
use it for reading and writing images. In time we may need a more
dedicated PNG compression library.
- Generalizes FbxMaterialAccess to return different subclasses of
FbxMaterialInfo; currently FbxRoughMetMaterialInfo and
FbxTraditionalMaterialInfo.
- FbxTraditionalMaterialInfo is populated from the canonical
FbxSurfaceMaterial classes.
- FbxRoughMetMaterialInfo is currently populated through the Stingray
PBS set of properties, further documented in the code.
- RawMaterial was in turn generalized to feature a pluggable,
type-specific RawMatProps struct; current implementations are,
unsurprisingly, RawTraditionalMatProps and RawMetRoughMatProps. These
are basically just lists of per-surface constants, e.g. diffuseFactor or
roughness.
- In the third phase, glTF generation, the bulk of the changes are
concerned with creating packed textures of the type needed by e.g. the
metallic-roughness struct, where one colour channel holds roughness and
the other metallic. This is done with a somewhat pluggable "map source
pixels to destination pixel" mechanism. More work will likely be needed
here in the future to accommodate more demanding mappings.
There's also a lot of code to convert from one representation to
another. The most useful, but also the least well-supported conversion,
is from old workflow (diffuse, specular, shininess) to
metallic/roughness. Going from PBR spec/gloss to PBR met/rough is hard
enough, but we go one step sillier and treat shininess as if it were
glossiness, which it certainly isn't. More work is needed here! But it's
still a fun proof of concept of sorts, and perhaps for some people it's
useful to just get *something* into the PBR world.
We are at liberty to order our JSON any way we like (by spec) and we can
improve readability a lot by doing so. By default, this JSON library
uses an unordered map for objects, but it's relatively easy to switch in
a FiFo map that keeps track of the insertion order.
It's perfectly fine for materials to have neither diffuse texture nor
vertex colours. This dates back to a time when the tool had more limited
use cases.
To compensate: https://github.com/facebookincubator/FBX2glTF/issues/43
The FBX SDK absolutely claims that there is a normal layer to each
FbxShape, with non-trivial data, even when the corresponding FBX file,
upon visual inspection, explicitly contains nothing but zeroes. The only
conclusion I can draw is that the SDK is computing normals from
geometry, without being asked to, which seems kind of sketchy.
These computed normals are often not at all what the artist wanted;
they take up a lot of space -- often pointlessly, since if they're
computed, we could just as well compute them on the client -- and, at
least in the case of three.js, their inclusion uses up many of the
precious 8 morph target slots in the shader.
So, they are now opt-in, at least until we can solve the mystery of just
what goes on under the hood in the SDK.
Turns out Maya was always including normals in the FBX export; they were just a bit trickier to get to than originally surmised. We need to go through the proper element-access formalities that take mapping and reference modes into account.
Luckily we already have a helper class for this, so let's lean on that.
At the glTF level, transparency is a scalar; we just throw away any
color information in FBX TransparentColor. We still need to calculate
our total opacity from it, however. This is the right formula, which
additionally matches the deprecated (but still populated, by the Maya
exporter) 'Opacity' property.