October Project: Block update perf experiments
Posted by Jeff Disher
I made some improvements to the block update logic within the primary 15-bit octree storage, but I still wonder whether that will be enough.

A worst-case scenario of writing a unique block type to all 32k blocks in a cuboid takes about 7 seconds on my system. This number is bad, but I am not sure that it matters, since the normal case (of a largely uniform cuboid) is much faster and even a highly active server will probably only see a few hundred block updates per tick, distributed across different cuboids and processed on different threads.
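
For reference, the shape of that worst-case test is just a triple loop over the 32x32x32 cuboid. Here is a minimal sketch of it; the FlatStorage stand-in is only there so the snippet runs on its own, since the real measurement goes through the octree storage:

```java
// Rough shape of the worst-case measurement: write a distinct 15-bit value
// into every one of the 32x32x32 = 32768 blocks in a cuboid and time it.
// FlatStorage is only a stand-in so the snippet runs on its own; the real
// measurement goes through the 15-bit octree storage.
public class WorstCaseWrite {
	static final int EDGE = 32;

	// Placeholder storage with none of the octree's compactness or cost.
	static final class FlatStorage {
		private final short[] blocks = new short[EDGE * EDGE * EDGE];
		void set(int x, int y, int z, short value) {
			blocks[(x * EDGE + y) * EDGE + z] = value;
		}
	}

	public static void main(String[] args) {
		FlatStorage storage = new FlatStorage();
		long start = System.nanoTime();
		int next = 0;
		for (int x = 0; x < EDGE; ++x) {
			for (int y = 0; y < EDGE; ++y) {
				for (int z = 0; z < EDGE; ++z) {
					// 15 bits hold exactly 32768 values, so every block can get
					// a unique value, which defeats any uniform-region sharing.
					storage.set(x, y, z, (short) (next & 0x7FFF));
					next += 1;
				}
			}
		}
		long elapsedMs = (System.nanoTime() - start) / 1_000_000;
		System.out.println("32k unique writes took " + elapsedMs + " ms");
	}
}
```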

For symmetry, loading all of those values after writing them takes about 3.1 seconds. This is actually more of a concern since the client will need to bake these into models or other projections when they first arrive (although it can do that in the background and incrementally update in response to changes).
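
The "background and incremental" part could be structured as something like the following. This is only a sketch of the shape, with hypothetical names (onCuboidLoaded, bakeModel), not the actual client code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: keep the multi-second read/bake off the main thread so a freshly
// arrived cuboid never stalls rendering.  All names here are hypothetical.
public class BackgroundBaker {
	private final ExecutorService bakePool = Executors.newSingleThreadExecutor();

	// Called when a complete cuboid arrives from the server.
	public Future<float[]> onCuboidLoaded(short[] serializedCuboid) {
		return bakePool.submit(() -> bakeModel(serializedCuboid));
	}

	// Called for later single-block changes: re-bake only the affected region
	// instead of walking all 32k blocks again.
	public Future<float[]> onBlockChanged(short[] serializedCuboid, int x, int y, int z) {
		return bakePool.submit(() -> bakeRegionAround(serializedCuboid, x, y, z));
	}

	private float[] bakeModel(short[] serializedCuboid) {
		// Walk every block and emit geometry (omitted).
		return new float[0];
	}

	private float[] bakeRegionAround(short[] serializedCuboid, int x, int y, int z) {
		// Re-emit geometry only for the blocks near the change (omitted).
		return new float[0];
	}
}
```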

I think that this should be sufficient, at least for now, although I may need to re-think how to represent this data if it becomes a problem in the future. The reason this case is interesting is that the 15-bit octree doesn't have an abstract representation; instead, the logic operates directly on its serialized form (since sparse representations of this kind of data are bad, at the best of times).
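
To make "operates directly on its serialized form" concrete, here is a toy version of the idea: the tree lives in a flat short[] and a lookup walks the buffer in place, with no object graph ever built. The encoding here (high bit set marks an inner node followed by its eight children, high bit clear marks a uniform 15-bit leaf) is invented for illustration and is not the project's actual format:

```java
// Toy illustration of a serialized octree walked in place: the whole tree is
// one flat short[] and a lookup never builds an object graph.  In this toy
// encoding, a short with the high bit clear is a uniform leaf (one 15-bit
// value for its whole region) and a short with the high bit set is an inner
// node immediately followed by its eight child subtrees.
public class SerializedOctree {

	// Reads the 15-bit value at (x, y, z) within the cube of edge length
	// `size` whose serialized form starts at `offset`.
	public static short get(short[] tree, int offset, int size, int x, int y, int z) {
		short header = tree[offset];
		if ((header & 0x8000) == 0) {
			// Uniform leaf: every block in this region shares this one value.
			return header;
		}
		// Inner node: pick which of the eight children contains (x, y, z).
		int half = size / 2;
		int childIndex = ((x >= half) ? 1 : 0)
				| ((y >= half) ? 2 : 0)
				| ((z >= half) ? 4 : 0);
		// Skip the header, then skip the serialized siblings that precede it.
		int childOffset = offset + 1;
		for (int i = 0; i < childIndex; ++i) {
			childOffset += subtreeSize(tree, childOffset);
		}
		return get(tree, childOffset, half, x % half, y % half, z % half);
	}

	// Number of shorts occupied by the subtree starting at `offset`.
	private static int subtreeSize(short[] tree, int offset) {
		if ((tree[offset] & 0x8000) == 0) {
			return 1;
		}
		int total = 1;
		for (int i = 0; i < 8; ++i) {
			total += subtreeSize(tree, offset + total);
		}
		return total;
	}
}
```

An update to this kind of representation means splicing shorts into (or out of) the buffer as regions split or re-merge, which is part of why the all-unique worst case is the expensive path.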

The next step is to build out the incremental projection logic and the reversible mutation design. That should allow a client-server de-sync to still present a logical environment, with the diverging states naturally coalescing.
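
As a sketch of what a reversible mutation could look like, applying a change could return the change that undoes it, so mutations can be applied speculatively and rewound when the authoritative state disagrees. The names and details here are placeholders rather than the actual design:

```java
// Sketch of a reversible mutation: applying it returns the mutation that
// undoes it.  Names and shapes here are placeholders, not the actual design.
interface IMutation {
	// Applies this change to the given state and returns its inverse.
	IMutation applyTo(WorldState state);
}

// Example: setting one block remembers the value it overwrote so that the
// returned inverse can restore it later.
final class SetBlockMutation implements IMutation {
	private final int x, y, z;
	private final short newValue;

	SetBlockMutation(int x, int y, int z, short newValue) {
		this.x = x;
		this.y = y;
		this.z = z;
		this.newValue = newValue;
	}

	@Override
	public IMutation applyTo(WorldState state) {
		short previous = state.getBlock(x, y, z);
		state.setBlock(x, y, z, newValue);
		// The inverse just writes back the value this mutation replaced.
		return new SetBlockMutation(x, y, z, previous);
	}
}

// Minimal stand-in so the sketch is self-contained.
interface WorldState {
	short getBlock(int x, int y, int z);
	void setBlock(int x, int y, int z, short value);
}
```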

If nothing else, the project gives me some interesting things to think about,
Jeff.