
WebGPU-based renderer for the editor #221145

Open
Tyriar opened this issue Jul 8, 2024 · 12 comments
Labels
editor-gpu Editor GPU rendering related issues feature-request Request for new features or functionality plan-item VS Code - planned item for upcoming

Tyriar commented Jul 8, 2024

We're finally starting to look at implementing a WebGPU-based renderer in Monaco, similar to what xterm.js uses. This issue tracks all the work, which is expected to take several months.

Project: https://github.com/orgs/microsoft/projects/1367/views/1

Related issues

Here are some historical links that might be useful:


Below copied from https://github.com/microsoft/vscode-internalbacklog/issues/4906

GPU-based rendering

branch: tyriar/gpu_exploration

How GPU rendering works

It works by assembling array buffers that represent commands to run on the GPU. These are filled on the CPU with information like the texture to use (character, fg, bg), location, offset, etc. xterm.js, for example, allocates a cols x rows array buffer that represents the viewport only and updates it on every frame where the viewport changes.

There are 2 types of shaders:

  • Vertex shader - This is run for every vertex (4 vertices per quad) and is used to transform the vertices into screen space.
  • Fragment shader - This is run for every pixel in the quad and is used to determine the color of the pixel.

How the prototype works

The WebGPU prototype works by pre-allocating a buffer that represents up to 3000 lines in a file with a maximum column length of 200. The buffers* are lazily filled in based on what's in the viewport, meaning once a line is loaded, it doesn't need to be modified again. I think it currently updates more aggressively than needed due to my lack of knowledge around finding dirty lines in Monaco.
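The lazy fill described above can be sketched roughly like this (a simplified sketch with hypothetical names, not the actual prototype code):

```typescript
// Sketch (hypothetical names): a pre-allocated buffer covering up to
// MAX_LINES x MAX_COLUMNS cells, filled lazily as lines scroll into view.
const MAX_LINES = 3000;
const MAX_COLUMNS = 200;
const FLOATS_PER_CELL = 3; // x, y, textureIndex

const cellBuffer = new Float32Array(MAX_LINES * MAX_COLUMNS * FLOATS_PER_CELL);
const uploadedLines = new Set<number>();

function ensureLineUploaded(lineNumber: number, glyphIndices: number[]): boolean {
	// Once a line is filled it never needs to be touched again (until it changes).
	if (uploadedLines.has(lineNumber)) {
		return false;
	}
	const lineOffset = lineNumber * MAX_COLUMNS * FLOATS_PER_CELL;
	for (let col = 0; col < Math.min(glyphIndices.length, MAX_COLUMNS); col++) {
		const i = lineOffset + col * FLOATS_PER_CELL;
		cellBuffer[i] = col;                   // x position (in cells)
		cellBuffer[i + 1] = lineNumber;        // y position (in lines)
		cellBuffer[i + 2] = glyphIndices[col]; // index into the texture atlas buffer
	}
	uploadedLines.add(lineNumber);
	return true; // caller knows this region is dirty and needs a GPU re-upload
}
```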

@vertex fn vs(
	vert: Vertex,
	@builtin(instance_index) instanceIndex: u32,
	@builtin(vertex_index) vertexIndex : u32
) -> VSOutput {
	let dynamicUnitInfo = dynamicUnitInfoStructs[instanceIndex];
	let spriteInfo = spriteInfo[u32(dynamicUnitInfo.textureIndex)];

	var vsOut: VSOutput;
	// Multiply vert.position by 2,-2 to get it into clip space, which ranges from -1 to 1
	vsOut.position = vec4f(
		(((vert.position * vec2f(2, -2)) / uniforms.canvasDimensions)) * spriteInfo.size + dynamicUnitInfo.position + ((spriteInfo.origin * vec2f(2, -2)) / uniforms.canvasDimensions) + ((scrollOffset.offset * 2) / uniforms.canvasDimensions),
		0.0,
		1.0
	);

	// Textures are flipped from natural direction on the y-axis, so flip it back
	vsOut.texcoord = vert.position;
	vsOut.texcoord = (
		// Sprite offset (0-1)
		(spriteInfo.position / textureInfoUniform.spriteSheetSize) +
		// Sprite coordinate (0-1)
		(vsOut.texcoord * (spriteInfo.size / textureInfoUniform.spriteSheetSize))
	);

	return vsOut;
}
@fragment fn fs(vsOut: VSOutput) -> @location(0) vec4f {
	return textureSample(ourTexture, ourSampler, vsOut.texcoord);
}

Texture atlas

Glyphs are rendered on the CPU using the browser's canvas 2d context to draw the characters into a texture atlas. The texture atlas can have multiple pages; this is an optimization problem, as uploading images is relatively expensive. xterm.js creates multiple small texture atlas pages, allocates within them using a shelf allocator, and eventually merges them into larger immutable pages, since larger pages are more expensive to upload.

Currently the prototype uses a single large texture atlas page, but warms it up in idle callbacks for the current font and all theme token colors in the background (using the TaskQueue xterm.js util).
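A shelf allocator of the kind mentioned above can be sketched as follows (a simplified hypothetical interface, not the actual xterm.js code):

```typescript
// Sketch of a shelf allocator: glyphs are packed left-to-right into
// horizontal "shelves"; a new shelf opens when no existing one fits.
interface Rect { x: number; y: number; w: number; h: number; }

class ShelfAllocator {
	private shelves: { y: number; height: number; usedWidth: number }[] = [];
	private nextShelfY = 0;

	constructor(private readonly pageWidth: number, private readonly pageHeight: number) {}

	allocate(w: number, h: number): Rect | undefined {
		// Reuse an existing shelf that is tall enough and has room left.
		for (const shelf of this.shelves) {
			if (h <= shelf.height && shelf.usedWidth + w <= this.pageWidth) {
				const rect = { x: shelf.usedWidth, y: shelf.y, w, h };
				shelf.usedWidth += w;
				return rect;
			}
		}
		// Otherwise open a new shelf at the bottom of the used area.
		if (this.nextShelfY + h > this.pageHeight || w > this.pageWidth) {
			return undefined; // page is full; caller moves on to the next page
		}
		const shelf = { y: this.nextShelfY, height: h, usedWidth: w };
		this.shelves.push(shelf);
		this.nextShelfY += h;
		return { x: 0, y: shelf.y, w, h };
	}
}
```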

image

Memory usage

text_data_buffer: [wgslX, wgslY, textureIndex, ...]

texture_atlas_buffer: [positionX, positionY, sizeX, sizeY, offsetX, offsetY, ...]

textureIndex in text_data_buffer maps to texture_atlas_buffer[textureIndex * 6]

In the above, each text_data_buffer cell is 12 bytes (3x 32-bit floats), so 3000x200 would be:

3000 * 200 * 12 = 7.2MB

This is pretty insignificant for a modern GPU.

* Double buffering is used as the GPU locks array buffers until it's done with them.
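As a sanity check, the memory calculation above as code (the constants mirror the figures in this section):

```typescript
// Back-of-envelope memory use for the fixed text data buffer.
const LINES = 3000;
const COLUMNS = 200;
const BYTES_PER_CELL = 3 * Float32Array.BYTES_PER_ELEMENT; // 3x 32-bit floats = 12 bytes

const bufferBytes = LINES * COLUMNS * BYTES_PER_CELL; // 7,200,000 bytes
const bufferMB = bufferBytes / 1_000_000;             // 7.2MB
```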

Scrolling

The prototype currently scrolls extremely smoothly, as at most a viewport's worth of data is filled per frame, and often no viewport data will change at all. Then we just need to update the scroll offset so the shader knows which cells to render.

Input

So far, the above is highly optimized for read-only scrolling. For input/file changes there are a few cases we need to target. We essentially want these updates to take as little CPU time as possible, even if that means leaving stale, no-longer-referenced data in the fixed buffers.

Adding new lines or deleting lines

This could be supported by uploading a map whose job is to map line numbers to indexes in the fixed buffer:

image

That way we only need to update indexes, not the whole line data.
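A minimal sketch of that index map (hypothetical class and method names; the real implementation would live GPU-side as an uploaded buffer):

```typescript
// Maps line numbers to slots in the fixed buffer. Inserting or deleting a
// line only rewrites indices; per-cell line data in the buffer is untouched
// and simply goes stale when a line is deleted.
class LineIndexMap {
	private indices: number[] = [];
	private nextFreeSlot = 0;

	appendLine(): number {
		const slot = this.nextFreeSlot++;
		this.indices.push(slot);
		return slot;
	}

	insertLine(lineNumber: number): number {
		// The new line gets a fresh slot; existing slots are untouched.
		const slot = this.nextFreeSlot++;
		this.indices.splice(lineNumber, 0, slot);
		return slot;
	}

	deleteLine(lineNumber: number): void {
		// The slot's data goes stale in the fixed buffer; only the map changes.
		this.indices.splice(lineNumber, 1);
	}

	slotFor(lineNumber: number): number {
		return this.indices[lineNumber];
	}
}
```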

Inserting characters

A simple O(n) solution is to just update the entire line. We could do tricks to make this faster, but it might not be worth the effort if line length is fixed.

Fixed buffers and long lines

My plan for how the characters will be sent to the GPU is to have 1 or more fixed-width buffers (e.g. 80, 200?) with maps that point to indexes dynamically, as described in the input section, and then another more dynamic buffer which supports lines of arbitrary length. This dynamic buffer will be a little less optimized, as it's the edge case when coding. The fixed buffers could also be dynamically allocated based on the file to save some memory.

Other things we could do

  • Sub-pixel glyphs for smoother flow - e.g. render characters at 4x the width and support offsetting the character every 0.25px.
  • Proportional font support isn't in xterm.js, but it's possible without too much effort; we will need to support this anyway if we want to render widths just like the DOM renderer. The main thing this requires is some way of getting the width of the glyphs and the offset of each character in a line. Again, this is an optimization problem of getting and updating this width/offset data as fast as possible.
  • I believe SPAA is possible to do on the GPU using grayscale textures.
  • Custom glyphs are supported in the terminal, which allows pixel-perfect box drawing characters, for example ┌───┘. Whether this looks good in Monaco is up to the font settings; letter spacing and line height will always mess with these.
  • Texture atlas glyphs could be first drawn to a very small page and then packed more efficiently into a longer-term page in an idle callback or worker.
  • Texture atlas pages could be cached to disk.
  • Canvas sharing - To optimize notebooks in particular, we could have a shared canvas for all editors and tell the renderer that it owns a certain bounding box.

Test results

These were done on terminalInstance.ts. Particularly slow frames of the test are shown.

The tyriar/gpu_exploration tests disabled all DOM rendering (lines, sticky scroll, etc.) to get an idea of how fast things could be without needing to perform layouts on each frame. It's safe to assume that rendering other components would take less than or equal to the time of the most complex component (minimap is similar, but could potentially share data as well).

Scroll to top command

M2 Pro Macbook main

image

M2 Pro Macbook tyriar/gpu_exploration (all dom rendering disabled)

image

Windows gaming PC main

image

Windows gaming PC tyriar/gpu_exploration (all dom rendering disabled)

image

Scrolling with small text on a huge viewport

fontSize 6, zoomLevel -4

M2 Pro Macbook main

image

M2 Pro Macbook tyriar/gpu_exploration (all dom rendering disabled)

image

Windows gaming PC main

image

Windows gaming PC tyriar/gpu_exploration (all dom rendering disabled)

image

Very long line

Long lines aren't supported in the GPU renderer currently.

Shaders run in parallel to microtasks and layout

The sample below from the Windows scroll to top test above demonstrates how the shaders execute in parallel with layout, as opposed to all after layout.

Before:

image

After:

image


The HarfBuzz shaping engine is used by lots of programs, including Chromium, to determine various things about text rendering. This might be needed for good RTL/ligature/grapheme rendering.

@Tyriar Tyriar added feature-request Request for new features or functionality editor-rendering Editor rendering issues labels Jul 8, 2024
@Tyriar Tyriar added this to the July 2024 milestone Jul 8, 2024


Tyriar commented Aug 18, 2024

Update on my end for last week. WIP branch #225413

General

  • Rendering is fixed up when switching editors and resizing canvas
  • Correct background color is drawn (instead of black)
    image
  • Bunch of general clean up and refactors. In particular, improving variable/constant names and simplifying the main WebGPU code
  • Set up a GPULifecycle namespace with helpers that return IDisposables
  • Sorted out some high level lifecycle/leak issues
  • The GlyphRasterizer is now owned by GpuViewLayerRenderer. The idea here is that the texture atlas is shared across all editors, but different editors could have different font sizes, so the rasterizer is owned by the editor so multiple font sizes can be rendered (WIP, sizes aren't tracked in atlas keys yet).

Rasterization

  • Bold and italic text is now rendered
    image
  • Less garbage collection by reusing hot objects

Texture atlas

  • Multiple texture atlas pages are now addressable. There is no overflow logic yet, but glyphs are distributed based on whether they are alphabet chars, in order to test multiple pages
  • Glyphs are uniquely identified and stored by their metadata instead of just their foreground color
    // Ignore metadata that doesn't affect the glyph
    metadata ^= (MetadataConsts.LANGUAGEID_MASK | MetadataConsts.TOKEN_TYPE_MASK | MetadataConsts.BALANCED_BRACKETS_MASK);
    return this._glyphMap.get(chars, metadata) ?? this._createGlyph(rasterizer, chars, metadata);
  • Texture atlas debug commands
    image
    • Saving texture atlas pages:
      image
    • Logging texture atlas page stats:
      image
  • Some basic unit tests for atlas and allocators
  • Only the used portion of the atlas texture is uploaded, speeding up render time when there are new glyphs significantly, especially on initial render (~15ms -> ~2ms)

Explorations

  • Explored approach for rendering of view parts, starting with the ruler.
    • I first tried to do multiple passes with a separate shader but it's more complicated than I initially thought and requires juggling some textures. Additionally, order of render passes and having them all run every time would be required for this.
    • I think the right approach here at least for simple view parts is to allow parts to register shapes into some render pass/command encoder object. This would make the ruler component even simpler than the DOM-based one as they would basically just register some fixed rectangles/lines and then refresh it when the setting changes.
  • Explored the "scratch page" idea for the texture atlas.
    • This needed more logic in the shader than expected. Uploading only relevant parts of the page texture was a big win that makes this no longer needed.

@faheemstepsharp

Hope this becomes the default soon.


Tyriar commented Aug 26, 2024

@faheemstepsharp I suspect it's going to be a long road to becoming the default (6 months, 1 year+?). We did eventually switch the terminal to default to GPU rendering; it would be really bad if we shipped an editor that breaks text rendering though.


Tyriar commented Aug 26, 2024

Update for @hediet and myself for last week. WIP PR #225413

Architecture

We came up with a better approach for where to stick the implementation. GPU parts are now regular "view parts" instead of being more tightly tied to the view.

this._viewLinesGpu = this._instantiationService.createInstance(ViewLinesGpu, this._context, this._viewGpuContext);

A new ViewGpuContext contains all objects needed for managing rendering to the frame (canvas element, GpuContext, command encoder, etc.). This is owned by View and will be injected to every GPU-related view part, similar to ViewContext.

if (this._context.configuration.options.get(EditorOption.experimentalGpuAcceleration) === 'on') {
	this._viewGpuContext = new ViewGpuContext();
}

❔ The term "context" is becoming a little overloaded (ViewContext, ViewGpuContext, GPUContext). Maybe there's a better name for ViewGpuContext?

Drawing shapes

Built out the ObjectCollectionBuffer data structure, which allows creating type-safe objects that get encoded into a Float32Array which will be used to draw shapes via the ViewGpuContext interface. This allows view parts to easily add, remove and change sections of the Float32Array in a fairly performant way without needing to deal with the actual buffer. Done right, I think this should make the implementation of simple view parts like rulers even simpler than their DOM-based counterparts.

const buffer = store.add(createObjectCollectionBuffer([
	{ name: 'a' },
	{ name: 'b' },
] as const, 5));
store.add(buffer.createEntry({ a: 1, b: 2 }));
const entry1 = buffer.createEntry({ a: 3, b: 4 });
store.add(buffer.createEntry({ a: 5, b: 6 }));
const entry2 = buffer.createEntry({ a: 7, b: 8 });
store.add(buffer.createEntry({ a: 9, b: 10 }));
entry1.dispose();
entry2.dispose();
// Data from disposed entries is stale and doesn't need to be validated
assertUsedData(buffer, [1, 2, 5, 6, 9, 10]);

This object isn't hooked up yet, just the data structure and tests are mostly done.
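A minimal sketch of the idea (not the actual VS Code implementation — the field handling and compaction strategy here are assumptions inferred from the usage example above):

```typescript
// Fixed-field objects packed into a Float32Array; disposing an entry
// compacts the used range by shifting later entries down, so the "used"
// portion of the buffer stays contiguous and anything past it is stale.
class ObjectCollectionBuffer {
	readonly view: Float32Array;
	private entryCount = 0;
	private readonly entries: { index: number }[] = [];

	constructor(private readonly fields: string[], capacity: number) {
		this.view = new Float32Array(fields.length * capacity);
	}

	createEntry(values: Record<string, number>): { dispose: () => void } {
		const stride = this.fields.length;
		const entry = { index: this.entryCount++ };
		this.fields.forEach((f, i) => { this.view[entry.index * stride + i] = values[f]; });
		this.entries.push(entry);
		return {
			dispose: () => {
				// Shift later entries down so used data stays contiguous.
				this.view.copyWithin(entry.index * stride, (entry.index + 1) * stride, this.entryCount * stride);
				this.entryCount--;
				this.entries.splice(this.entries.indexOf(entry), 1);
				for (const e of this.entries) {
					if (e.index > entry.index) { e.index--; }
				}
			}
		};
	}

	usedData(): number[] {
		return Array.from(this.view.subarray(0, this.entryCount * this.fields.length));
	}
}
```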

General

  • Lots of cleaning up of interfaces, adding jsdoc, etc.
    • Removed chars/tokenFg from the allocator interface, to make it clearer that an allocator's only job is to take a rasterized glyph, put it into an atlas and track the usage.
  • Fixed metadata key to properly remove metadata that doesn't affect the glyph's rendering.
  • Fixed "null cells" rendering random characters to the middle of the renderer. This was happening because zeroed out sections of the buffer were all pointing at the first glyph of the first atlas page.
  • The canvas is sized to fit .overflow-guard. This is probably the final size and position of the canvas.
    image
  • Added viewport offset which now renders the characters in approximately the right position (when dpr=1 at least). The top and the bottom lines in this picture show the gpu rendering overlaid on top of the DOM rendering.
    image
  • Added #regions and organized the webgpu init code a little better.
    image
  • Added a hidden setting to enable the GPU renderer so we can merge the code with minimal impact on default rendering.

Texture atlas

  • Basic page overflow logic is done; when a page is filled it will start adding glyphs to a second page. Only 2 pages are currently supported in the shader though.
  • Handled edge cases around glyphs too large for a slab or page.
  • Reduced search time for glyph's page to O(1) 740ba1c
  • More tests!

Debugging

  • New draw glyph command that will draw a single glyph.
    image
  • Consolidated all gpu-related debug command into a single Developer: Debug Editor GPU Renderer command that brings up a picker. This will let us create many debug-related commands without spamming the regular command palette.
    image

@vincentdchan

This may be a silly question, but how do you draw glyphs on WebGPU? Do you draw the glyph map with canvas, or render the font manually?


Tyriar commented Aug 26, 2024

@vincentdchan fonts are rasterized to a texture atlas using a 2d canvas context (mostly on CPU), then the texture atlas pages are uploaded and drawn by the shader where each cell is a texture mapped to 2 triangles. So we're leveraging the browser's font rendering stack and can avoid getting into that.
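The texture-mapping step being described is essentially the texcoord math from the vertex shader earlier in this thread; here it is in plain TypeScript (a sketch of the math only, with assumed parameter names):

```typescript
// Map a quad-local position (0-1 within the glyph's quad) to normalized
// atlas coordinates (0-1), given the glyph's pixel rect in the sprite sheet.
function texcoord(
	quadPos: [number, number],   // 0-1 within the glyph quad
	spritePos: [number, number], // glyph's top-left pixel in the atlas page
	spriteSize: [number, number],// glyph's size in pixels
	sheetSize: [number, number]  // atlas page size in pixels
): [number, number] {
	return [
		spritePos[0] / sheetSize[0] + quadPos[0] * (spriteSize[0] / sheetSize[0]),
		spritePos[1] / sheetSize[1] + quadPos[1] * (spriteSize[1] / sheetSize[1]),
	];
}
```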

@Tyriar Tyriar modified the milestones: August 2024, September 2024 Aug 26, 2024
@Tyriar Tyriar added the plan-item VS Code - planned item for upcoming label Aug 26, 2024
@vincentdchan

vincentdchan commented Aug 27, 2024

@Tyriar Great! I am implementing a canvas in WebGPU, but AFAIK the 2D canvas doesn't provide an API to detect font ligatures. One approach I used is reading from the font file. Do you handle font ligatures?


Tyriar commented Aug 27, 2024

@vincentdchan it does not; ligatures are still not supported in the terminal, but they're close. The approach used there is to parse the ligatures out of the font file and then draw the character sequences to the canvas together via a "character joiner" concept, such that they are rendered as ligatures: https://github.com/xtermjs/xterm.js/blob/f186475ec9375256d304fb4563160e2cd3fef291/addons/addon-ligatures/src/LigaturesAddon.ts#L35

There's also a set of "fallback" ligatures which makes it mostly work when font access isn't granted on the web.


Tyriar commented Sep 4, 2024

New public GH project to track this work: https://github.com/orgs/microsoft/projects/1367

Tyriar added a commit that referenced this issue Sep 11, 2024

Tyriar commented Sep 16, 2024

I didn't have too much time last week to work on this but here's what we got done:

  • Multiple font sizes and font families now work at the same time
    Image
  • Correct tab offset calculation based on editor.tabSize (see screenshot above)
  • Texture atlas is cleared when the client is zoomed in or out, so text no longer shows as wrong size
  • Implemented a simple version of canRender and respected that in the GPU renderer, so lines won't get double rendered anymore
  • Start WIP for the "rectangle renderer" which will be the backend for drawing cursors, current line height, rulers, etc. Implement rectangle renderer and gpu rulers view part #228632
