Texel Splatting

Figure 1. Three views of a stone circle rendered with texel splatting.

Perspective-stable 3D pixel art. Render into cubemaps from fixed origins, splat to screen as world-space quads.


Abstract

Rendering 3D scenes as pixel art requires that discrete pixels remain stable as the camera moves. Existing methods snap the camera to a grid. Under orthographic projection, this works: every pixel shifts by the same amount, and a single snap corrects all of them. Perspective breaks this. Pixels at different depths drift at different rates, and no single snap corrects all depths. Texel splatting avoids this entirely. Scene geometry is rendered into a cubemap from a fixed point in the world, and each texel is splatted to the screen as a world-space quad. Cubemap indexing gives rotation invariance. Grid-snapping the origin gives translation invariance. The primary limitation is that a fixed origin cannot see all geometry; disocclusion at probe boundaries remains an open tradeoff.

Introduction

Texel splatting is a rendering technique for perspective-stable pixel art. Scene geometry is rendered into a cubemap from a fixed probe origin, shaded per cubemap texel, and splatted to the screen as world-space quads.

Orthographic pixel art works because the projection is linear: a world-space offset produces the same screen-space displacement regardless of depth. Camera movement shifts every pixel by the same amount. Snapping the camera to the screen grid [12], [14] compensates for this shift uniformly, and pixels stay locked.

Perspective breaks this. Under perspective projection, the same world-space offset produces different screen-space displacements at different depths. Near geometry displaces more on screen than far geometry. No single grid snap compensates for all depths simultaneously, and no screen-space correction can. ProPixelizer documents this effect, called shimmer or pixel creep: “pixel creep can only ever be solved for orthographic projections” [14].
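A minimal pinhole model (an illustration, not code from the paper) makes the depth dependence explicit: an orthographic projection shifts every point by the camera offset, while a perspective projection shifts each point by the offset divided by its depth.

```python
# Illustrative sketch: the same world-space camera offset dx produces a
# constant screen shift under orthographic projection, but a
# depth-dependent shift (dx / z) under perspective.

def ortho_x(x, z):
    return x  # orthographic: screen x ignores depth

def persp_x(x, z, f=1.0):
    return f * x / z  # pinhole: screen x scales with 1 / depth

dx = 0.1  # camera translates by 0.1 world units
for z in (1.0, 2.0, 10.0):
    print(f"z={z:5.1f}  ortho shift={ortho_x(dx, z) - ortho_x(0.0, z):.3f}"
          f"  persp shift={persp_x(dx, z) - persp_x(0.0, z):.4f}")
```

A single grid snap can cancel the constant orthographic shift, but no single screen-space correction cancels shifts of 0.100, 0.050, and 0.010 at the same time.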

Texel marching [13] addresses the same problem using cubemaps from fixed origins with depth-based reprojection. Rays are marched through the cubemap in screen space, sampling stored color when intersections are found. Both methods share the same principle: cubemap parameterization from a fixed origin decouples texel stability from camera movement. The display methods differ: texel marching raymarches in screen space, texel splatting projects world-space quads.

Cubemaps [1] index by direction, not screen position. They are rotation-invariant by construction. Scene geometry is rasterized into cubemaps [3] and shaded per cubemap texel. Lighting happens in cubemap space, not screen space, so the results are camera-independent.

A cubemap needs an origin. If the origin moves with the camera, texel assignments shift frame-to-frame. Snapping the origin to a world grid [6] gives translation invariance. The probe origin is the camera position rounded to the nearest grid vertex.

Each visible cubemap texel becomes a world-space quad.

The architecture guarantees rotation invariance through cubemap parameterization and translation invariance through grid-snapped origins, producing perspective-stable pixel art without screen-space correction.

Figure 2. Each column pairs an orthographic diorama (top) with the camera view (bottom); the columns show scene geometry, cubemap capture, and texel splatting. The white dot marks the probe origin; the wireframe shows the camera frustum.

Method

Scene geometry is rasterized into cubemaps from fixed probe origins, shaded per texel in a compute pass, and splatted to screen as world-space quads (Figure 2). The pipeline runs each frame: capture, shade, splat.

Cubemap capture

Scene geometry is rasterized from the probe origin into cubemaps: position, normal, material, and object ID per texel. Texels are indexed by direction from the origin, giving rotation invariance.
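The direction indexing can be sketched as follows. The face layout below is the standard cube-map convention; the paper's exact axis assignments are an assumption of this sketch.

```python
def face_uv_to_dir(face, u, v):
    # Map face coordinates (u, v) in [-1, 1] to a direction from the
    # probe origin. Standard cube-map face layout (assumed, not stated
    # in the paper).
    return {
        "+x": ( 1.0,   -v,   -u),
        "-x": (-1.0,   -v,    u),
        "+y": (   u,  1.0,    v),
        "-y": (   u, -1.0,   -v),
        "+z": (   u,   -v,  1.0),
        "-z": (  -u,   -v, -1.0),
    }[face]

# Indexing by direction is what gives rotation invariance: the same world
# direction lands on the same (face, u, v) regardless of camera orientation.
d = face_uv_to_dir("+x", 0.3, -0.7)
assert max(abs(c) for c in d) == 1.0  # Chebyshev norm is 1 on the face axis
```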

Shading

A compute pass shades each cubemap texel: diffuse lighting from directional and point sources, with shadow rays traced against scene geometry. Shading is camera-independent. OKLab posterization [11] quantizes lightness to discrete bands, producing flat color steps across surfaces.
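A minimal sketch of the posterization step, using the sRGB-to-OKLab matrices from Ottosson's reference post [11]. The band count and rounding scheme are illustrative assumptions, and the inverse conversion back to sRGB is omitted.

```python
def srgb_linear_to_oklab(r, g, b):
    # Linear sRGB -> OKLab; matrices from Ottosson's reference post [11].
    l = 0.4122214708*r + 0.5363325363*g + 0.0514459929*b
    m = 0.2119034982*r + 0.6806995451*g + 0.1073969566*b
    s = 0.0883024619*r + 0.2817188376*g + 0.6299787005*b
    l_, m_, s_ = l ** (1/3), m ** (1/3), s ** (1/3)
    return (0.2104542553*l_ + 0.7936177850*m_ - 0.0040720468*s_,
            1.9779984951*l_ - 2.4285922050*m_ + 0.4505937099*s_,
            0.0259040371*l_ + 0.7827717662*m_ - 0.8086757660*s_)

def posterize_lightness(L, bands=4):
    # Snap OKLab lightness L in [0, 1] to the nearest of `bands` levels.
    # The band count is an illustrative choice, not the paper's.
    step = 1.0 / (bands - 1)
    return round(L / step) * step
```

Quantizing the perceptual lightness channel rather than raw RGB keeps the flat color steps even across hue.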

Outlines are detected in the same pass. Object ID discontinuities between neighboring texels mark silhouette edges; normal discontinuities mark creases. Both are applied as lightness shifts in OKLab space. Effects applied per texel (posterization, outlines, lighting) inherit this stability; screen-space effects (bloom, haze, light shafts) applied after splatting remain camera-dependent.
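The ID-discontinuity test can be sketched on a toy 2D grid standing in for one cubemap face (cross-face seams are ignored in this sketch):

```python
def silhouette_mask(ids):
    # Mark texels whose object ID differs from any 4-neighbour; such
    # texels receive an outline lightness shift. `ids` is a 2D grid
    # standing in for one cubemap face.
    h, w = len(ids), len(ids[0])
    def differs(x, y):
        return any(ids[nx][ny] != ids[x][y]
                   for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1))
                   if 0 <= nx < h and 0 <= ny < w)
    return [[differs(x, y) for y in range(w)] for x in range(h)]
```

Crease detection works the same way with a threshold on the angle between neighboring normals instead of an ID comparison.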

Probes

The probe origin is the camera position snapped to a world grid. Moving within a grid cell does not change the origin, giving translation invariance.
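A sketch of the snap, where `cell_size` (the world-space grid spacing) is an illustrative parameter not named in the paper:

```python
def snap_to_grid(camera_pos, cell_size=1.0):
    # Probe origin = camera position rounded to the nearest grid vertex.
    # `cell_size` is the world-space grid spacing (illustrative).
    return tuple(round(c / cell_size) * cell_size for c in camera_pos)

# Any camera position within the same cell yields the same probe origin,
# so small camera translations do not perturb texel assignments.
assert snap_to_grid((1.2, 0.4, -2.9)) == snap_to_grid((1.4, 0.1, -3.3))
```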

Three probes are maintained: an eye probe at the camera position, a grid probe at the snapped origin, and a previous probe that holds the old grid cell during transitions. The eye probe provides disoccluded content (§2.5). The grid probe provides stable content. The previous probe enables blending between grid cells.

When the camera crosses a cell boundary, the current grid origin becomes the previous origin, and the snapped position becomes the new grid origin. A 4×4 Bayer dither pattern [10] crossfades between the two grid probes. Probe updates are amortized: the eye probe renders every frame; the grid and previous probes alternate, maintaining consistent cost during movement.
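The ordered-dither crossfade can be sketched with the standard 4×4 Bayer matrix [10]; formulating the blend as a per-pixel probe selection is an assumption of this sketch.

```python
# 4x4 Bayer index matrix [10]; per-cell thresholds are (v + 0.5) / 16.
BAYER4 = (( 0,  8,  2, 10),
          (12,  4, 14,  6),
          ( 3, 11,  1,  9),
          (15,  7, 13,  5))

def pick_probe(x, y, t):
    # t in [0, 1] is the transition progress. Each screen position flips
    # from the previous grid probe to the new one at its own threshold,
    # so the crossfade is an ordered dither rather than an alpha blend.
    return "grid" if t > (BAYER4[y % 4][x % 4] + 0.5) / 16.0 else "prev"
```

At t = 0.5, exactly half of the 16 pattern cells have flipped, and the dither preserves the hard-edged pixel look that an alpha blend would soften.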

Splatting

Each visible cubemap texel is splatted to screen as a world-space quad [4], [5].

Each texel stores a scalar depth d, the Chebyshev distance from the probe origin. The cubemap maps texel coordinates (u, v) on face F to a direction 𝐫 [1]. The world position is 𝐩 = 𝐨 + (d / ‖𝐫‖∞) 𝐫, where 𝐨 is the probe origin and ‖𝐫‖∞ = max(|r_x|, |r_y|, |r_z|). Quad corners evaluate this at (u ± h, v ± h) for half-texel width h. All four share depth d; no per-corner intersection is needed.
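The reconstruction can be sketched as follows, restricted to the +Z face for brevity (other faces permute axes); `quad_corners` is an illustrative helper, not the paper's API.

```python
def reconstruct(origin, r, d):
    # p = o + (d / ||r||_inf) * r, where d is the Chebyshev distance
    # stored in the texel and r the direction for its (u, v).
    s = d / max(abs(c) for c in r)
    return tuple(o + s * c for o, c in zip(origin, r))

def quad_corners(origin, u, v, h, d):
    # Evaluate the reconstruction at (u +/- h, v +/- h) on the +Z face,
    # where the direction is simply (u, v, 1). All four corners share
    # the texel's depth d, so no per-corner intersection is needed.
    return [reconstruct(origin, (u + du, v + dv, 1.0), d)
            for du in (-h, h) for dv in (-h, h)]
```

Storing the Chebyshev distance rather than Euclidean distance makes the scale factor a single division by the direction's largest component.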

Adjacent texels leave gaps when splatted at exact texel boundaries. Expanding each quad beyond its boundary fills these gaps, but where neighboring texels lie at similar depths, overlapping quads produce Z-fighting. Each texel’s depth is compared against its four neighbors. At edges where depths differ, quads expand freely; the depth buffer resolves overlap. At edges where depths are similar, expansion is constrained and scaled by the grazing angle between the view direction and the cubemap face.
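One way to sketch the per-edge decision; the depth threshold and expansion amount are illustrative, and the grazing-angle scaling of constrained edges is omitted.

```python
def edge_expansion(d_center, d_neighbor, rel_tol=0.01, max_expand=0.5):
    # Per-edge quad expansion in texel units. Where depths differ, the
    # quad expands freely and the depth buffer resolves the overlap;
    # where depths are similar, expansion is held back to avoid
    # Z-fighting. Threshold and amounts are illustrative; the paper
    # additionally scales constrained edges by the grazing angle.
    if abs(d_center - d_neighbor) > rel_tol * d_center:
        return max_expand
    return 0.0
```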

Disocclusion

A fixed probe origin cannot see geometry occluded from its position. As the camera moves away from the probe, regions become visible that the probe never captured. An eye probe at the camera position fills these gaps. Because the eye probe moves with the camera, its texel assignments are unstable: the same surface point maps to different texels each frame, producing shimmer [8].

Discussion

Cost model

The scene in Figure 1 renders at 40 fps on an iPhone 15 (A16 GPU, mobile Safari) and under 4 ms per frame on an RTX 4090, both at 384² texels per face. The pixel art aesthetic demands low cubemap resolution, so the shading budget is inherently small. Camera-independent shading enables caching across frames [7], [9], [2].

Limitations and future work

Disocclusion is the primary limitation. Geometry hidden from the probe origin was never captured; the eye probe fills these gaps at the cost of shimmer, which is confined to regions the grid probe cannot see.

Probe density controls the balance: large cells produce more stable content but increase the disoccluded area; small cells reduce disocclusion at the cost of more frequent transitions. Adaptive probe placement, additional stable probes, specular materials, animated geometry, and level-of-detail across probe cells remain open.

References

  1. N. Greene, “Environment mapping and other applications of world projections,” IEEE CG&A, vol. 6, no. 11, pp. 21–29, 1986.
  2. R. L. Cook, L. Carpenter, and E. Catmull, “The Reyes image rendering architecture,” in Proc. SIGGRAPH ’87, 1987.
  3. T. Saito and T. Takahashi, “Comprehensible rendering of 3-D shapes,” in Proc. SIGGRAPH ’90, pp. 197–206, 1990.
  4. H. Pfister, M. Zwicker, J. van Baar, and M. Gross, “Surfels: Surface elements as rendering primitives,” in Proc. SIGGRAPH ’00, pp. 335–342, 2000.
  5. M. Zwicker, H. Pfister, J. van Baar, and M. Gross, “Surface splatting,” in Proc. SIGGRAPH ’01, pp. 371–378, 2001.
  6. W. Engel, “Cascaded shadow maps,” in ShaderX5, pp. 197–206, 2007.
  7. C. A. Burns, K. Fatahalian, and W. R. Mark, “A lazy object-space shading architecture with decoupled sampling,” in Proc. HPG ’10, 2010.
  8. P. Bénard, A. Bousseau, and J. Thollot, “State-of-the-art report on temporal coherence for stylized animations,” Computer Graphics Forum, vol. 30, no. 2, 2011.
  9. K. Hillesland and B. Yang, “Texel shading,” in Eurographics 2016 Short Papers, 2016.
  10. B. E. Bayer, “An optimum method for two-level rendition of continuous-tone pictures,” in Proc. IEEE ICC, pp. 26-11–26-15, 1973.
  11. B. Ottosson, “A perceptual color space for image processing,” 2020. [Online]. Available: https://bottosson.github.io/posts/oklab/
  12. t3ssel8r, “3D pixel art rendering in Unity.” [Online]. Available: https://www.youtube.com/watch?v=d6tp43wZqps
  13. tesseractcat, “Shadowglass 3D pixel-art style,” 2026. [Online]. Available: https://tesseractc.at/shadowglass
  14. E. Bentine, “ProPixelizer.” [Online]. Available: https://sites.google.com/view/propixelizer/

Citation

@article{ebert2026texelsplatting,
  title={Texel Splatting: Perspective-Stable 3D Pixel Art},
  author={Ebert, Dylan},
  year={2026}
}