Technical Overview
Needle Engine roughly consists of three parts:
- a number of components and tools that allow you to set up scenes for Needle Engine from e.g. the Unity Editor.
- an exporter that turns scene and component data into glTF.
- a web runtime that loads and runs the produced glTF files and their extensions.
The web runtime uses three.js for rendering, adds a component system on top of the three scene graph and hooks up extension loaders for our custom glTF extensions.
Effectively, this turns tools like Unity or Blender into spatial web development powerhouses – adding glTF assets to the typical HTML, CSS, JavaScript and bundling workflow.
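To illustrate the loader side of this, the sketch below registers a custom plugin with three.js's GLTFLoader and looks up a vendor extension on each node after parsing. This uses the generic GLTFLoader plugin API and is not Needle Engine's actual loader code; the extension handling and the "scene.glb" path are placeholders.

```ts
import { GLTFLoader, GLTF, GLTFParser } from 'three/examples/jsm/loaders/GLTFLoader.js';

// Minimal plugin that inspects a vendor extension on each node after the scene was built.
class NeedleComponentsPlugin {
  name = 'NEEDLE_components';
  constructor(private parser: GLTFParser) {}

  // Called by GLTFLoader once the whole glTF has been parsed into three.js objects.
  afterRoot(result: GLTF): Promise<void> | null {
    const json = this.parser.json;
    json.nodes?.forEach((node: any, index: number) => {
      const data = node.extensions?.[this.name];
      if (data) {
        // A real runtime would instantiate matching component classes
        // from the serialized data here (see NEEDLE_components below).
        console.log(`node ${index} carries component data`, data);
      }
    });
    return null;
  }
}

const loader = new GLTFLoader();
loader.register(parser => new NeedleComponentsPlugin(parser));
loader.load('scene.glb', gltf => console.log('loaded', gltf.scene));
```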
Models, textures, animations, lights, cameras and more are stored as glTF in Needle Engine. Custom data is stored in vendor-specific glTF extensions. These cover everything from interactive components to physics, sequencing and lightmaps.
A typical production glTF created by Needle Engine uses the following extensions:
Other supported extensions:
Supported material extensions:
Note: Materials using these extensions can be exported from Unity via UnityGLTF's PBRGraph material.
Note: Audio and variants are already supported in Needle Engine through NEEDLE_components and NEEDLE_persistent_assets, but there are options for closer alignment with existing proposals such as KHR_audio and KHR_materials_variants.
Needle Engine stores custom data in glTF files through our vendor extensions. These extensions are designed to be flexible and allow relatively arbitrary data to be put into them. Notably, no code is stored in these files. Interactive components are restored from the data at runtime. This is similar to how AssetBundles work in Unity: the receiving side of an asset needs to have matching code for the components stored in the file.
We're currently not providing schemas for these extensions as they are still in development. The JSON snippets below demonstrate extension usage by example and include notes on architectural choices and what we may change in future releases.
References between pieces of data are currently constructed through a mix of indices into other parts of the glTF file and JSON Pointers. We may consolidate these approaches in a future release. We also store string-based GUIDs for cases where ordering is otherwise hard to resolve (e.g. two components referencing each other); we may remove these in the future.
This extension, NEEDLE_components, contains per-node component data. The component names map to type names on both the JavaScript and C# side. Multiple components with the same name can be added to the same node.
Note: Storing only the component type name means that type names currently need to be unique per project. We're planning to include package names in a future release to loosen this constraint to unique component type names per package instead of globally.
Note: Currently there's no versioning information in the extension (which npm package a component belongs to, which version of that package it was exported against). We're planning to include versioning information in a future release.
Note: Currently all components are in the builtin_components array. We might rename this to just components in a future release.
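Since no schema is published yet, the following is only an illustrative sketch (written as a TypeScript object literal) of how per-node data in this extension could look. The component type names, fields and GUID value are hypothetical examples, not the actual serialization.

```ts
// Illustrative only: a glTF node carrying NEEDLE_components data.
// Component names and fields are hypothetical examples.
const node = {
  name: 'MyObject',
  mesh: 0,
  extensions: {
    NEEDLE_components: {
      builtin_components: [
        {
          name: 'Rotator',   // maps to a type name on the JS/TS and C# side
          guid: 'a1b2c3d4',  // string GUID used to resolve cross-references
          speed: 90,         // plain serialized fields
        },
        {
          name: 'Rotator',   // the same component type can appear more than once per node
          speed: -45,
        },
      ],
    },
  },
};
```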
This is a root extension defining ambient lighting properties per glTF file.
Note: This extension might have to be defined per-scene instead of per-file.
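A minimal sketch of what a root-level ambient lighting extension could look like; the extension key (NEEDLE_lighting_settings) and all field names here are assumptions for illustration, not a published schema.

```ts
// Illustrative only: extension key and field names are assumptions.
const gltfRoot = {
  extensions: {
    NEEDLE_lighting_settings: {
      ambientColor: [0.21, 0.23, 0.25], // linear RGB
      ambientIntensity: 1.0,
    },
  },
};
```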
This is a root extension defining a set of lightmaps for the glTF file.
Note: At the moment this extension also contains environment texture references. We're planning to change that in a future release.
Texture types:
- Lightmap = 0
- Environment Map = 1
- Reflection Map = 2
Note: We may change that in a future release and move lightmap-related data to a NEEDLE_lightmap extension entry per node.
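As an illustrative sketch, such a root extension could reference its textures roughly like this; the extension key and property names are assumptions, only the numeric type values follow the table above.

```ts
// Illustrative only: extension key and property names are assumptions.
// The "type" values follow the table above (0 = Lightmap, 1 = Environment Map, 2 = Reflection Map).
const gltfRoot = {
  extensions: {
    NEEDLE_lightmaps: {
      textures: [
        { pointer: '/textures/3', type: 0 }, // lightmap
        { pointer: '/textures/4', type: 1 }, // environment map
        { pointer: '/textures/5', type: 2 }, // reflection map
      ],
    },
  },
};
```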
Components in NEEDLE_components can reference data via JSON Pointers. The data in NEEDLE_persistent_assets is often referenced multiple times by different components and is thus stored separately in a root extension. By design, they are always referenced by something else (or have references within each other), and thus do not store type information at all: they're simply pieces of JSON data, and components referencing them currently need to know what they expect.
Examples of assets/data stored here are:
- AnimatorControllers, their layers and states
- PlayableAssets (timelines), their tracks and embedded clips
- SignalAssets
- ...
Data in persistent_assets can reference other persistent_assets via JSON Pointer, but by design can't reference NEEDLE_components. This is similar to the separation between "Scene data" and "AssetDatabase content" in Unity.
Note: We might include more type and versioning information in the future.
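To make the referencing model concrete, here is an illustrative sketch of a component in NEEDLE_components pointing into NEEDLE_persistent_assets via a JSON Pointer. The asset layout and property names are invented for illustration; only the direction of the reference (components point at persistent assets, never the other way around) reflects the description above.

```ts
// Illustrative only: a node component referencing a shared asset by JSON Pointer.
const gltfRoot = {
  nodes: [
    {
      name: 'Character',
      extensions: {
        NEEDLE_components: {
          builtin_components: [
            {
              name: 'Animator',
              // JSON Pointer into the root-level persistent assets:
              runtimeAnimatorController: '/extensions/NEEDLE_persistent_assets/0',
            },
          ],
        },
      },
    },
  ],
  extensions: {
    NEEDLE_persistent_assets: [
      {
        // No type information is stored; the referencing component must know
        // what shape of data it expects (here: an animator controller).
        layers: [{ name: 'Base Layer', states: [{ name: 'Idle', clip: 2 }] }],
      },
    ],
  },
};
```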
Note: Currently, vertex and fragment shaders are always embedded as URI; we plan on moving that data into more reasonable bufferViews in the future.
Note: There are some redundant properties in here that we plan to clean up.
🏗️ Under Construction
While Unity's compilation process from C# to IL to C++ (via IL2CPP) to WASM (via emscripten) is ingenious, it's also relatively slow. Building even a simple project to WASM takes many minutes, and that process is pretty much repeated on every code change. Some of it can be avoided through clever caching and ensuring that dev builds don't try to strip as much code, but it still stays slow.
We do have a prototype for some WASM translation, but it's far from complete and the iteration speed stays slow, so we are not actively investigating this path right now.
When looking into modern web workflows, we found that code reload times during development are negligible, usually in the sub-second range. This of course trades some performance (JavaScript is interpreted on the fly instead of being optimized by a compiler at build time) for flexibility, but browsers have become very good at getting the most out of JavaScript.
We believe that for iteration and tight testing workflows, it's beneficial to be able to test on device and on the target platform (the browser, in this case) as quickly and as often as possible - which is why we're skipping Unity's entire play mode, effectively always running in the browser.
Note: A really nice side effect is avoiding the entire slow "domain reload" step that usually costs 15-60 seconds each time you enter Play Mode. You're just "live" in the browser the moment you press Play.
More extensions and custom extensions can be added using the export callbacks of UnityGLTF (not documented yet) and the GLTFLoader plugin API of three.js.
For production, we compress glTF assets. Textures use either etc1s, uastc, webp or no compression, depending on texture type. Meshes use draco by default but can be configured to use meshopt compression (per glTF file). Custom extensions are passed through in an opaque way.
See the corresponding documentation page for more information.
Data in NEEDLE_components can be animated via the currently not ratified KHR_animation_pointer extension.
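For reference, the KHR_animation_pointer draft replaces the fixed animation channel target paths with a JSON Pointer, which is what allows arbitrary (extension) data to be animated. A rough sketch of such a channel; the pointer target shown is only an example:

```ts
// Illustrative only: an animation channel whose target is a JSON Pointer
// (per the KHR_animation_pointer draft), here pointing at serialized component data.
const animationChannel = {
  sampler: 0,
  target: {
    path: 'pointer',
    extensions: {
      KHR_animation_pointer: {
        pointer: '/nodes/0/extensions/NEEDLE_components/builtin_components/0/speed',
      },
    },
  },
};
```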
This extension contains additional per-node data related to state, layers, and tags. Layers are used for both rendering and physics, similar to how Unity and three.js treat them.
Note: We may need to better explain why this is not just another entry in NEEDLE_components.
How lightmaps are applied is defined in the MeshRenderer component inside the NEEDLE_components extension per node:
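A rough, illustrative sketch of such per-node data, assuming the MeshRenderer entry lives in the node's NEEDLE_components list; the lightmap-related property names are assumptions, not the published schema:

```ts
// Illustrative only: property names are assumptions, not a published schema.
const node = {
  name: 'LightmappedWall',
  mesh: 2,
  extensions: {
    NEEDLE_components: {
      builtin_components: [
        {
          name: 'MeshRenderer',
          lightmapIndex: 0,                  // which entry in the root lightmaps extension to sample
          lightmapScaleOffset: [1, 1, 0, 0], // UV scale/offset into the lightmap atlas
        },
      ],
    },
  },
};
```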
This extension builds upon the archived KHR_techniques_webgl extension and extends it in a few crucial places. While the original extension was specified against WebGL 1.0, we're using it with WebGL 2.0 here and have added a number of uniform types.
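For orientation, the archived KHR_techniques_webgl extension describes shaders, programs and techniques (with their uniforms and attributes) at the glTF root, and materials then reference a technique plus uniform values. The reduced sketch below follows that archived structure, not Needle's extended variant:

```ts
// Reduced sketch of the archived KHR_techniques_webgl structure this extension builds upon.
const gltfRoot = {
  extensions: {
    KHR_techniques_webgl: {
      shaders: [
        { type: 35633 /* VERTEX_SHADER */, uri: 'data:text/plain;base64,<base64 vertex source>' },
        { type: 35632 /* FRAGMENT_SHADER */, uri: 'data:text/plain;base64,<base64 fragment source>' },
      ],
      programs: [{ vertexShader: 0, fragmentShader: 1 }],
      techniques: [
        {
          program: 0,
          attributes: { a_position: { semantic: 'POSITION' } },
          uniforms: { u_time: { type: 5126 /* FLOAT */ } },
        },
      ],
    },
  },
};

// A material selects a technique and supplies values for its uniforms:
const material = {
  name: 'CustomShaderMaterial',
  extensions: {
    KHR_techniques_webgl: { technique: 0, values: { u_time: 0 } },
  },
};
```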