This is a recreation of the PR @robertlong submitted long ago here:
https://github.com/facebookincubator/FBX2glTF/pull/97
Refactors and whitespace conflicts made recreating it easier than trying to merge the original.
A substantial rewrite of the texture-loading and file-path handling is
still pending, coming sometime soon.
Depending on platform, multiple versions of isnan() can easily be floating around, causing compilation headaches. Luckily we can always rely on the standard library implementation.
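For instance, a minimal sketch, assuming nothing beyond <cmath>:

#include <cmath>

// Qualify the standard-library version explicitly so that platform-specific
// isnan() macros and overloads can't interfere.
inline bool isNaN(double value) {
  return std::isnan(value);
}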
This finishes the first phase of the FBX2glTF refactor, breaking utility classes out where things were getting too monolithic.
There is an equally important cleanup phase coming where we wrench all the various parts of this code, including the historical ones that we've rarely touched as yet, into a single C++ style paradigm, and modernise everything to C++11 at least.
But for now, we're just picking the pieces up off the floor so we can push 0.9.6 out. It's been far too long since a release.
Did not mean to commit/push the current state of master. But rather than
mess up source control history with a force push, I'll just try to hurry
to a stable point.
Hopefully without unintentional changes to functionality. This renames header
files to .hpp, imposes a gltf/raw/fbx directory structure, extracts standalone
chunks of Fbx2Raw into distinct files, and undoes some particularly egregious
mistakes from when I knew even less C++ than I do now.
This is in anticipation of implementing 3ds Max's "Physical Material".
The condense operation recreates the vectors of surfaces, materials,
textures and vertices so as to exclude anything that isn't referenced
explicitly by a triangle. In the process, we must take care that
references from other properties are cleared out.
This fixes the case where a node references a mesh by id, and the
mesh is then deleted because no triangle references it. TODO: go through
other properties and make sure the same problem doesn't exist there.
It is also possible that these vectors should be replaced by maps, at
least for the elements that (now) have unique IDs.
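A hypothetical sketch of the node-reference cleanup (types and names
invented for illustration):

#include <set>
#include <vector>

struct Node { long meshId = -1; };  // -1 means "no mesh"

// After condensing, clear any node reference to a mesh that was dropped
// because no triangle referenced it.
void clearDanglingMeshRefs(std::vector<Node>& nodes, const std::set<long>& keptMeshIds) {
  for (auto& node : nodes) {
    if (node.meshId >= 0 && keptMeshIds.count(node.meshId) == 0) {
      node.meshId = -1;
    }
  }
}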
A mesh with a single (skinning) deformer that had zero clusters would
erroneously register as skinned, leading GetRootNode() to an assertion
failure. Fixed.
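A sketch of the corrected check, using the FBX SDK deformer API:

// A mesh only counts as skinned if at least one skin deformer has clusters.
bool isSkinned(FbxMesh* mesh) {
  const int skinCount = mesh->GetDeformerCount(FbxDeformer::eSkin);
  for (int i = 0; i < skinCount; i++) {
    auto* skin = static_cast<FbxSkin*>(mesh->GetDeformer(i, FbxDeformer::eSkin));
    if (skin->GetClusterCount() > 0) {
      return true;
    }
  }
  return false;
}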
We're still gun-shy from our previous attempts at coming up with metallic
and roughness values from diffuse/specular/shininess, but this should be
safe: a high shininess means a low roughness, and vice versa.
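One plausible mapping, treating shininess as a Blinn-Phong exponent (an
assumption for illustration, not necessarily the committed formula):

#include <cmath>

// Higher shininess exponent -> lower roughness, and vice versa.
float shininessToRoughness(float shininess) {
  return std::sqrt(2.0f / (shininess + 2.0f));
}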
The FBX SDK looks for our textures and often finds them. It helpfully
tells us exactly where they are. Let's not throw that information away
and demand that the textures only exist in precisely the folders we are
aware of.
Because we make a best-effort attempt to convert materials in the old
form -- like Lambert and Phong -- to PBR materials, it can be beneficial
for the consumer of the asset to know whether the asset was intentionally
authored as PBR, or whether it was a conversion.
The precise details of this information are specific to the intersection
of FBX and glTF, so we're not going to bother proposing extensions; we
just drop something into the extras field, e.g.
"materials": [
{
"name": "Troll_Water",
"alphaMode": "OPAQUE",
"extras": {
"fromFBX": {
"shadingModel": "Metallic/Roughness",
"isTruePBR": true
}
},
// ... and so on.
The possible values for shadingModel are:
"<unknown>"
"Constant"
"Lambert"
"Blinn"
"Phong"
"Metallic/Roughness"
Currently isTruePBR is true for the final entry, false for the others.
However, we may well add more PBR shading models in the future, so if
you intend to use this feature to look for true PBR, use the derived
property.
Now that we're writing both 16-bit and 32-bit integers, it's starting to
matter a little more how we slam even scalars into memory. This is maybe
not the fastest way to accomplish this, and I'm not crazy about the way
GLType works in general, but it does have the virtues of clarity and
expediency.
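A minimal sketch of the general idea (names invented), writing through
memcpy so alignment never bites:

#include <cstdint>
#include <cstring>
#include <vector>

// Append a scalar to the output byte stream; 16-bit and 32-bit values land
// correctly regardless of the buffer's current alignment.
template <typename T>
void appendScalar(std::vector<uint8_t>& out, const T& value) {
  const size_t offset = out.size();
  out.resize(offset + sizeof(T));
  std::memcpy(out.data() + offset, &value, sizeof(T));
}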
By oversight we had not included occlusionTexture in the core
MaterialData. While we're at it, bake occlusion into the red channel of
the merged metallic/roughness texture.
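For reference, the channel layout glTF expects of the combined texture; a
minimal sketch:

#include <cstdint>

struct Pixel { uint8_t r, g, b, a; };

// glTF packs occlusion into red, roughness into green, metallic into blue.
Pixel packOcclusionRoughnessMetallic(uint8_t occlusion, uint8_t roughness, uint8_t metallic) {
  return Pixel{occlusion, roughness, metallic, 255};
}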
There seem to be few constraints on what values FBX properties can take. By contrast, glTF constrains e.g. common material factors to lie in [0, 1]. We take a simple approach and just clamp.
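A minimal sketch of the clamp:

#include <algorithm>

// Keep material factors within glTF's required [0, 1] range.
double clampFactor(double value) {
  return std::max(0.0, std::min(1.0, value));
}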
Previous to this, a PNG that was in RGBA format would cause its
corresponding texture to be flagged as transparent. This is very
silly. We now iterate over the bytes, and if any alpha byte is not 255,
THEN there's alpha.
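A minimal sketch of the scan, assuming tightly packed RGBA bytes:

#include <cstdint>
#include <vector>

// Only flag transparency if some alpha byte is actually below 255.
bool hasRealAlpha(const std::vector<uint8_t>& rgba) {
  for (size_t i = 3; i < rgba.size(); i += 4) {
    if (rgba[i] != 255) {
      return true;
    }
  }
  return false;
}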
This was way overdue. Breaking up large meshes into many 65535-vertex
primitives can save a few bytes, but it's really a lot of complication
for minor benefit.
With this change the user can force short or long indices; the
default is to use shorts for smaller meshes and longs for larger ones.
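A sketch of the default policy (the forcing flags aside):

#include <cstddef>

enum class IndexSize { kShort, kLong };

// 16-bit indices when every vertex is addressable, 32-bit otherwise.
IndexSize chooseIndexSize(size_t vertexCount) {
  return vertexCount <= 65535 ? IndexSize::kShort : IndexSize::kLong;
}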
- KHR_materials_common never had a real life in the glTF 2.0 world. One
day we may see a new extension for Phong/Blinn/Lambert.
- PBR_specular_glossiness is a poor fit for PBS StingRay (the only real
source of PBR we have) and has no advantage over PBR_metallic_roughness.
- The conversion we were doing for traditional materials to PBR made no
sense. Revert to a very simple formula: diffuse -> baseColor, simple
reasonable constants for metallic & roughness.
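A hedged sketch of the fallback; the constants here are placeholders, not
necessarily the committed values:

struct PbrFactors {
  float baseColor[4];
  float metallic;
  float roughness;
};

// diffuse -> baseColor; fixed, conservative constants for the rest.
PbrFactors fromTraditional(const float diffuse[4]) {
  return PbrFactors{
      {diffuse[0], diffuse[1], diffuse[2], diffuse[3]},
      0.0f,  // placeholder: treat traditional materials as dielectric
      0.6f,  // placeholder: moderately rough
  };
}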
The user can now ask for normals to be computed NEVER (can easily cause
broken glTF if the source isn't perfect), MISSING (when the mesh simply
lacks normals), BROKEN (only empty normals are replaced), or
ALWAYS (perhaps if the normals in the source are junk).
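A sketch of when recomputation kicks in under each mode, as inferred from
the descriptions above:

enum class ComputeNormalsOption { NEVER, MISSING, BROKEN, ALWAYS };

bool shouldComputeNormals(ComputeNormalsOption opt, bool meshHasNormals, bool normalIsEmpty) {
  switch (opt) {
    case ComputeNormalsOption::NEVER:   return false;
    case ComputeNormalsOption::MISSING: return !meshHasNormals;
    case ComputeNormalsOption::BROKEN:  return normalIsEmpty;  // only empty normals are replaced
    case ComputeNormalsOption::ALWAYS:  return true;
  }
  return false;
}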
I stole expressions from Gary Hsu's PBR conversion routines here:
3606e79717/extensions/Khronos/KHR_materials_pbrSpecularGlossiness/examples/convert-between-workflows/js/three.pbrUtilities.js
which is experimental enough as it is, but I had gone further into the
domain of madness and used this with *old* diffuse/specular values, not
PBR specular/glossiness.
As a result, a lot of old content was coming up with 100% metal values,
which in turn meant completely ignoring diffuse when assembling a new
base colour...
I should rip out this whole conversion. But not just now...
It's technically valid for e.g. scale to have a zero dimension, which in
turn wreaks havoc on the rotation quaternion we get from the FBX SDK.
The simplest solution is to just leave any T/R/S vector out of the glTF
if it has any NaN component.
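A minimal sketch of the guard:

#include <cmath>

// Leave a T/R/S vector out of the glTF if any component is NaN.
bool isUsable(const double* vec, int count) {
  for (int i = 0; i < count; i++) {
    if (std::isnan(vec[i])) {
      return false;
    }
  }
  return true;
}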
Be more flexible about reading various input formats (most especially
varying numbers of channels), and stop outputting RGBA PNGs for textures
that don't need it.
I'm not sure JPG generation ever worked right. But now it does.
Fix the naming issues. Nodes are now identified by pNode->GetUniqueID() instead of by name. All dictionaries and references to nodes now use the node's id rather than its name.
This adds the first FBX PBR import path. Materials that have been
exported via the Stingray PBS preset should be picked up as native
metallic/roughness, and exported essentially 1:1 to the glTF output.
In more detail, this commit:
- (Re)introduces the STB header libraries as a dependency. We currently
use them for reading and writing images. In time we may need a more
dedicated PNG compression library.
- Generalizes FbxMaterialAccess to return different subclasses of
FbxMaterialInfo; currently FbxRoughMetMaterialInfo and
FbxTraditionalMaterialInfo.
- FbxTraditionalMaterialInfo is populated from the canonical
FbxSurfaceMaterial classes.
- FbxRoughMetMaterialInfo is currently populated through the Stingray
PBS set of properties, further documented in the code.
- RawMaterial was in turn generalized to feature a pluggable,
type-specific RawMatProps struct; current implementations are,
unsurprisingly, RawTraditionalMatProps and RawMetRoughMatProps. These
are basically just lists of per-surface constants, e.g. diffuseFactor or
roughness.
- In the third phase, glTF generation, the bulk of the changes are
concerned with creating packed textures of the type needed by e.g. the
metallic-roughness struct, where one colour channel holds roughness and
the other metallic. This is done with a somewhat pluggable "map source
pixels to destination pixel" mechanism (sketched below). More work will
likely be needed here in the future to accommodate more demanding mappings.
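The sketch referenced above, with channel choices assumed for illustration:

#include <cstdint>
#include <functional>
#include <vector>

struct Pixel { uint8_t r, g, b, a; };

// The packing step is a pluggable callback from source pixels to one
// destination pixel.
using PixelMerger = std::function<Pixel(const std::vector<Pixel>&)>;

// e.g. roughness from the first source's green channel, metallic from the
// second's blue channel.
const PixelMerger mergeMetRough = [](const std::vector<Pixel>& src) {
  return Pixel{255, src[0].g, src[1].b, 255};
};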
There's also a lot of code to convert from one representation to
another. The most useful, but also the least well-supported conversion,
is from old workflow (diffuse, specular, shininess) to
metallic/roughness. Going from PBR spec/gloss to PBR met/rough is hard
enough, but we go one step sillier and treat shininess as if it were
glossiness, which it certainly isn't. More work is needed here! But it's
still a fun proof of concept of sorts, and perhaps for some people it's
useful to just get *something* into the PBR world.
We are at liberty to order our JSON any way we like (by spec) and we can
improve readability a lot by doing so. By default, this JSON library
uses an unordered map for objects, but it's relatively easy to switch in
a FiFo map that keeps track of the insertion order.
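Assuming nlohmann::json and its companion fifo_map, the well-known switch
looks roughly like this:

#include <fifo_map.hpp>
#include <json.hpp>

// Ignore the comparator basic_json supplies and preserve insertion order.
template <class K, class V, class IgnoredCompare, class A>
using ordered_map = nlohmann::fifo_map<K, V, nlohmann::fifo_map_compare<K>, A>;
using ordered_json = nlohmann::basic_json<ordered_map>;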
It's perfectly fine for materials to have neither diffuse texture nor
vertex colours. This dates back to a time when the tool had more limited
use cases.
To compensate: https://github.com/facebookincubator/FBX2glTF/issues/43
The FBX SDK absolutely claims that there is a normal layer to each
FbxShape, with non-trivial data, even when the corresponding FBX file,
upon visual inspection, explicitly contains nothing but zeroes. The only
conclusion I can draw is that the SDK is computing normals from
geometry, without being asked to, which seems kind of sketchy.
These computed normals are often not at all what the artist wanted; they
take up a lot of space -- often pointlessly, since if they're computed,
we could just as well compute them on the client -- and at least in the
case of three.js their inclusion uses up many of the precious 8 morph
target slots in the shader.
So, they are now opt-in, at least until we can solve the mystery of just
what goes on under the hood in the SDK.
Turns out Maya was always including normals in the FBX export; they were just a bit trickier to get to than originally surmised. We need to go through the proper element access formalities that take mapping and reference modes into account.
Luckily we already have a helper class for this, so let's lean on that.
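For reference, a standalone sketch of the formalities in question, via the
FBX SDK element API:

// Resolve mapping and reference modes before reading a normal.
FbxVector4 readNormal(FbxGeometryElementNormal* elem, int controlPoint, int polyVertex) {
  int index = (elem->GetMappingMode() == FbxGeometryElement::eByControlPoint)
      ? controlPoint
      : polyVertex;
  if (elem->GetReferenceMode() != FbxGeometryElement::eDirect) {
    index = elem->GetIndexArray().GetAt(index);  // eIndexToDirect indirection
  }
  return elem->GetDirectArray().GetAt(index);
}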
At the glTF level, transparency is a scalar; we just throw away any
color information in FBX TransparentColor. We still need to calculate
our total opacity from it, however. This is the right formula, which
additionally matches the deprecated (but still populated, by the Maya
exporter) 'Opacity' property.
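The commit doesn't quote the formula, so what follows is only a
hypothetical reconstruction, consistent with TransparencyFactor being
1.0 - opacity:

// Hypothetical: derive opacity from the mean of TransparentColor, scaled
// by TransparencyFactor. Not necessarily the exact committed formula.
double computeOpacity(const FbxDouble3& transparentColor, double transparencyFactor) {
  const double meanTransparency =
      (transparentColor[0] + transparentColor[1] + transparentColor[2]) / 3.0;
  return 1.0 - meanTransparency * transparencyFactor;
}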
This adds blend shape / morph target functionality.
At the FBX level, a mesh can have a number of deformers associated with it. One such deformer type is the blend shape. A blend shape is a collection of channels, which do all the work. A channel can consist of a single target shape (the simple case) or multiple (a progressive morph). In the latter case, the artist has created in-between shapes, the assumption being that linear interpolation between a beginning shape and an end shape would be too crude. Each such target shape contains a complete set of new positions for each vertex of the deformed base mesh.
(It's also supposed to be optionally a complete set of normals and tangents, but I've yet to see that work right; they always come through as zeroes. This is something to investigate in the future.)
So the number of glTF morph targets in a mesh is the total number of FBX target shapes associated with channels associated with blend shape deformers associated with that mesh! Yikes.
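A sketch of how those counts nest, using the FBX SDK types:

int countMorphTargets(FbxMesh* mesh) {
  int total = 0;
  const int deformerCount = mesh->GetDeformerCount(FbxDeformer::eBlendShape);
  for (int i = 0; i < deformerCount; i++) {
    auto* blendShape =
        static_cast<FbxBlendShape*>(mesh->GetDeformer(i, FbxDeformer::eBlendShape));
    for (int j = 0; j < blendShape->GetBlendShapeChannelCount(); j++) {
      total += blendShape->GetBlendShapeChannel(j)->GetTargetShapeCount();
    }
  }
  return total;
}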
The per-vertex data of each such target shape is added to a vector in RawVertex. A side effect of this is that vertices that participate in blend shapes must be made unique to the mesh in question, as opposed to general vertices which are shared across multiple surfaces.
Blend Shape based animations become identical glTF morph target animations.
Fixes #17.
Lean on the excellent pre-existing support for creating multiple glTF
meshes from a single FBX mesh based on material type. All the triangles
with (at least one) non-opaque vertex get flagged as transparent
material. They will all go separately in their own mesh after the
CreateMaterialModels() gauntlet.
Fixes #25.
We were warning against eInheritRSrs, which is actually the one type of
inheritance we're good with. It's eInheritRrSs we should freak out about.
That said, no need to do it for the root node -- at that point there is
no global transform to worry about.
When we convert a file that's in our CWD, on Unix the folder component
of the path will simply be "", whereas opendir() wants ".".
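The fix is tiny; a self-contained sketch:

#include <string>

std::string folderForOpendir(const std::string& folder) {
  // opendir("") fails on Unix; "." means the current working directory.
  return folder.empty() ? "." : folder;
}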
I want to take another more substantial pass at texture resolution, once
we're out of urgent bugfix mode.
Some FBX files have index arrays that contain -1 (indeed, that are
nothing but negative ones). Presumably the intention is to specify "no
material". In any case, let's not segfault.
When we've successfully located a referenced texture image on the local
filesystem and we're generating non-binary, non-embedded output, copy
the source folder wholesale into the destination directory.
This means the output folder is always a full, free-standing deployment,
one that can be dragged into e.g. https://gltf-viewer.donmccurdy.com/
In the FBX world, (0, 0) is generally the lower left. By the glTF
specification, (0, 0) is the upper left. The only recourse is to
literally flip all texture files (generally unwise) or to remap the UV
space.
Is this confusing in an artist-to-engineer workflow? Maybe. But it's the
best option, and it seems reasonably easy to communicate.
To request unflipped coordinates, pass the --no-flip-v command-line switch.
* Further improvements to texture resolution.
- Move towards std::string over char * and FbxString where convenient.
- Make a clear distinction between textures whose image files have been
located and those that haven't; warn early in the latter case.
- Extend RawTexture so we always know the logical name in FBX, the
original file name in FBX, and the inferred location in the local
filesystem.
- In non-binary mode, simply output the inferred local file basename as
the URI; this will be the correct relative path as long as the texture
files are located next to the .gltf and .bin files.
Primary remaining urge for a follow-up PR:
- We should be copying texture image files into the .gltf output folder,
but before that we should switch to an off-the-shelf cross-platform
file manipulation library like https://github.com/cginternals/cppfs.
When we make that transition, all this texture resolution code will
undergo another refactoring.
It is not uncommon for multiple logical textures in an FBX to reference
the same filename. Each such filename should yield one buffer view only,
and all sharing textures should reference it.
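A sketch of the dedupe (createBufferViewForImage is a hypothetical helper):

#include <map>
#include <string>

int createBufferViewForImage(const std::string& fileName);  // hypothetical

std::map<std::string, int> bufferViewByFile;  // filename -> buffer view index

int getOrCreateBufferView(const std::string& fileName) {
  const auto it = bufferViewByFile.find(fileName);
  if (it != bufferViewByFile.end()) {
    return it->second;  // every texture sharing this file reuses the same view
  }
  const int index = createBufferViewForImage(fileName);
  bufferViewByFile[fileName] = index;
  return index;
}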
- alphaMode is only BLEND for transparent materials.
- We use RawMaterial.type to figure out what's transparent.
- FBX TransparencyFactor is not opacity, but 1.0-opacity.
- Treat vertex coloured materials as transparent
- We should at least iterate over vertices here and see if any of them
actually are transparent
- Sort triangles properly: transparent ones render last!
- Nix GetFileFolder(). It was not helping. Always search for textures
near the FBX file.
- Use RawTexture::name for the texture name and ::fileName for the
inferred local filename path.
Digging out the property values, and the texture shadows thereof,
associated with a certain FbxSurfaceMaterial should clearly happen once
per material, not per polygon. Furthermore there is a pre-existing
pattern of Fbx-specific access classes in Fbx2Raw that we should follow.
Soon we'll be extracting more than Phong/Lambert properties here, and
then we'll need to do further refactoring.
We were mapping v to -v rather than 1-v, with fairly catastrophic
results. While fixing, take the trouble to introduce a more general
transformation mechanism than just an affine matrix.
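A sketch of the corrected mapping expressed through such a mechanism
(names invented):

#include <functional>

// A per-UV transformation, more general than a fixed affine matrix.
using UvTransform = std::function<void(float& u, float& v)>;

const UvTransform flipV = [](float& /*u*/, float& v) {
  v = 1.0f - v;  // glTF's origin is the upper left; FBX's is the lower left
};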