I've added the SDK directory to the .dockerignore file.
With the SDK ignored, we can install the SDK before
adding the project source, which lets the Docker build
preserve most of its layer cache when only the source
code changes.
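Sketched as a Dockerfile fragment (the paths and the install step are
hypothetical, not the project's actual ones):

```dockerfile
# The SDK layer comes first: it stays cached for as long as the SDK
# itself is unchanged.
COPY sdk/ /opt/sdk/
RUN /opt/sdk/install.sh   # hypothetical install step

# Project sources come last: editing them invalidates only this final
# layer, not the expensive SDK layer above.
COPY . /src/
```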
On a system with GCC 5.* Conan will conservatively choose 'libstdc++'
over 'libstdc++11' for compiler.libcxx, and then proceed to download
libraries compiled with the older ABI.
Meanwhile, though, our own CMake setup dictates the use of the modern
ABI, and the result is an application binary with ABI mismatches that
yield SIGSEGVs almost immediately.
Here, we guard against erroneous invocations, and gently push the user
towards sending in the right explicit override for their system.
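A guard of roughly this shape could live in the CMake setup; treat it
as a sketch, since the exact variable name exposed for the Conan
setting is an assumption about the generator in use:

```cmake
# Hypothetical guard: fail fast if Conan fetched libraries built with
# the pre-C++11 ABI while our own flags demand the modern one.
# (The exact variable name depends on the Conan generator in use.)
if(DEFINED CONAN_SETTINGS_COMPILER_LIBCXX
   AND CONAN_SETTINGS_COMPILER_LIBCXX STREQUAL "libstdc++")
  message(FATAL_ERROR
    "Conan picked the old libstdc++ ABI; please re-run with an explicit "
    "override, e.g.: conan install .. -s compiler.libcxx=libstdc++11")
endif()
```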
Lifted from a comment in the source:
Individual animations are often concatenated on the timeline, and the
only certain way to identify precisely what interval they occupy is to
depth-traverse the entire animation stack, and examine the actual keys.
There is a deprecated concept of an "animation take" which is meant to
provide precisely this time interval information, but the data is not
actually derived by the SDK from source-of-truth data structures, but
rather provided directly by the FBX exporter, and not sanity checked.
Some exporters calculate it correctly. Others do not. In any case, we
now ignore it completely.
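The depth-traversal idea can be sketched with toy stand-ins for the
SDK's animation types (these structs are illustrative only, not the
real FBX SDK API):

```cpp
#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

// Toy stand-ins for the SDK's animation types -- illustrative only.
struct Curve { std::vector<double> keyTimes; };
struct Layer { std::vector<Curve> curves; };
struct Stack { std::vector<Layer> layers; };

// Walk every layer and curve in the stack and derive the animation's
// true [start, end] interval from the actual keys, rather than
// trusting the exporter-supplied "take" metadata.
std::pair<double, double> keyInterval(const Stack& stack) {
    double start = std::numeric_limits<double>::infinity();
    double end = -std::numeric_limits<double>::infinity();
    for (const Layer& layer : stack.layers) {
        for (const Curve& curve : layer.curves) {
            for (double t : curve.keyTimes) {
                start = std::min(start, t);
                end = std::max(end, t);
            }
        }
    }
    return {start, end};
}
```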
With this, we are able to get rid of all the increasingly broken file
system utility code, and trust boost::filesystem to handle all the
cross-platform complexity.
The first version of this PR centred around C++17 & std::filesystem,
but support remains too elusive; it seems to work out of the box in
Visual Studio (especially 2019), but is entirely missing from the Mac
clang, and even with GCC 8 it requires an explicit '-lstdc++fs'.
Luckily the std:: version is almost exactly the boost:: version (not
surprising), so when the world's caught up, we can ditch Boost and go
all stdlib.
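A sketch of the kind of call site involved, shown with
std::filesystem since the boost:: API is nearly identical modulo the
namespace (the helper name is illustrative, not from the project):

```cpp
#include <filesystem>

namespace fs = std::filesystem; // swap for boost::filesystem today

// The sort of helper the hand-rolled utility code used to implement
// per-platform; create_directories and operator/ take care of the
// cross-platform complexity for us.
fs::path prepareOutputDir(const fs::path& outFile) {
    fs::path dir = outFile.parent_path();
    fs::create_directories(dir); // no-op if the directory exists
    return dir;
}
```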
Setting up Conan requires a bit of work; we'll want to document the
details in the README.
Alright, less haphazardly now after the two previous botched commits,
this fixes mistakes and bugs introduced a year or more ago:
- We now always pass the metallic and roughness factors through all
the way to the glTF layer. They should not be multiplied into the
generated textures, and so they should be present as-is in glTF
output.
- We only generate the combined AO/Rough/Met texture if at least two
of the constituent textures are present.
- We only reference the generated texture as an occlusionTexture if
there really was an occlusion map present (and it had non-trivial
pixels).
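The corrected decision logic boils down to something like the
following toy illustration (struct and function names are
hypothetical, not the project's actual types):

```cpp
// Which of the constituent maps a material actually supplies.
struct Maps {
    bool hasOcclusion;
    bool hasRoughness;
    bool hasMetallic;
};

// Generate the combined AO/Rough/Met texture only when at least two
// of the constituent maps actually exist.
bool shouldCombine(const Maps& m) {
    int present =
        int(m.hasOcclusion) + int(m.hasRoughness) + int(m.hasMetallic);
    return present >= 2;
}

// Reference the combined texture as an occlusionTexture only when a
// real occlusion map contributed to it.
bool referenceAsOcclusion(const Maps& m) {
    return shouldCombine(m) && m.hasOcclusion;
}
```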
It's also now ridiculously clear that:
- The material conversion section is long and tortured and it's very
easy to screw up. It should be broken into functions and classes.
- We urgently need a real regression suite, and we need to model some
artificial FBX files that test both realistic scenarios and edge-case
permutations.
At the end of the various material/mesh transformations we do, we were
still using a ridiculously simplistic method of mapping RawMaterial to
glTF MaterialData.
This switches to using the FBX SDK's GetUniqueID(), which should be
the law of the land in general. Other model entities may need further
investigation as well.
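The shape of the new mapping can be sketched as keying converted
materials by the source object's 64-bit unique ID instead of by a
(non-unique) name; the class and method names here are hypothetical:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>

// Stand-in for the converted glTF-side material record.
struct MaterialData { std::string name; };

class MaterialMap {
public:
    // Return the existing conversion for this unique ID, or create
    // a new one; repeated lookups for the same ID reuse the record.
    MaterialData& getOrCreate(uint64_t uniqueId, const std::string& name) {
        auto result = byId.emplace(uniqueId, MaterialData{name});
        return result.first->second;
    }
    std::size_t size() const { return byId.size(); }

private:
    std::unordered_map<uint64_t, MaterialData> byId;
};
```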