Replies: 3 comments
---
This is only semi-related, but some time ago I tried to extract the render graph into its own plugin (#6403), allowing more flexibility in "low-level" use cases that work directly with the render graph, and potentially moving it into a separate crate. But that first attempt didn't turn out great and is outdated by now, so I scrapped it. Though maybe this use case (building custom rendering on top of the bare render graph) is too low-level to support properly?
---
I would still like the bevy asset and component types to be the interface between the main-world simulation and any renderer, so that swapping renderers is easy to do. A lot of people will want to do custom renderer work, and it would be nice if they could do that without changing scenes and such. I commented on a bunch of things in the thread on Discord: https://discord.com/channels/691052431525675048/1214720258221539358
---
I'm working on a prototype for a new render graph that's focused on passing resources around and reducing the tons of boilerplate and manual tracking of whether resources were cleared or not. Feedback/PRs highly welcome: #12814. The end goal is to get to an API that looks something like this:

```rust
fn gtao_node(
    depth: RenderGraphTexture,
    normals: RenderGraphTexture,
    hilbert_index_lut: RenderGraphTexture,
    globals: RenderGraphBuffer,
    view: RenderGraphBuffer,
    slice_count: u32,
    samples_per_slice_side: u32,
    temporal_jitter: bool,
    render_graph: &mut RenderGraph,
) -> RenderGraphTexture {
    let preprocessed_depth = render_graph.create_texture(todo!());
    let ssao_noisy = render_graph.create_texture(todo!());
    let ssao = render_graph.create_texture(todo!());
    let depth_differences = render_graph.create_texture(todo!());
    let point_clamp_sampler = render_graph.create_sampler(todo!());

    render_graph.add_node(
        ComputePass::new("gtao", GTAO_SHADER_HANDLE)
            .shader_def_val("SLICE_COUNT", slice_count)
            .shader_def_val("SAMPLES_PER_SLICE_SIDE", samples_per_slice_side)
            .shader_def("TEMPORAL_JITTER", temporal_jitter)
            .read_texture(preprocessed_depth)
            .read_texture(normals)
            .read_texture(hilbert_index_lut)
            .write_texture(ssao_noisy)
            .write_texture(depth_differences)
            .sampler(point_clamp_sampler)
            .read_buffer(globals)
            .read_buffer(view),
    );

    // +2 other passes ...

    ssao
}
```
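A node written this way would be an ordinary Rust function, so a graph could wire it up with a plain call; a hypothetical call site might look like the following (where the argument sources and values are my assumptions, not part of #12814):

```rust
// Hypothetical wiring: where depth/normals/etc. come from is up to the
// surrounding graph; the node only sees opaque resource handles.
let ssao = gtao_node(
    depth,
    normals,
    hilbert_index_lut,
    globals,
    view,
    /* slice_count */ 3,
    /* samples_per_slice_side */ 3,
    /* temporal_jitter */ true,
    &mut render_graph,
);
```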
---
First of all, sorry that this is such a big issue covering a bunch of different subjects. It's a large, interconnected problem that can't easily be taken apart without causing confusion. Also, I don't claim to have all the answers (especially for implementation details), and I wrote this mainly to start discussion on the issue, but I feel I have a good handle on the goals a rendering refactor would have.

I don't think it's a controversial take that a refactor is needed. The rendering crates are already MUCH better than even Unity's SRP in terms of modularity, but there's much to be desired, even for internal use.
Goals:
- Make core elements reusable across render graphs: `DrawCommand`, `RenderPhase`, (RenderGraph-)`Node`, etc.
- Support extending built-in pipelines, e.g. `ExtendedMaterial` (only bevy example I can think of) or Unity URP's `ScriptableRenderPass`
- Stretch goal: make it possible or even easy for rendering crates (think hanabi and others) to support custom render graphs
## A different view of render graphs
Currently, Bevy render graphs (and those of other engines) are modeled as frameworks, each with their own elements and utilities for users downstream to consume. But that makes them largely incompatible with libraries that weren't designed specifically for them. A render graph should instead be framed as a consumer of modular effects and components; such a design allows render graphs to adapt to the needs of 3rd-party libraries. If bevy were to adopt a more modular design for the rendering crates, libraries would adapt and improve their compatibility with custom render graphs as a side effect.
The design of the rendering plugins should be as follows:
### `bevy_render`

Basically fine as-is (afaik): provides core rendering functionality and is render-graph agnostic.
### `bevy_core_pipeline`

In kind of a strange place? It seems to provide some core elements, but they often aren't reusable, and the pipeline itself is very tied to `bevy_pbr`. Ideally this crate should do very little at runtime, except maybe provide a set of embedded shader library handles and fallback textures. Otherwise, `core_pipeline` should provide a set of (slightly) opinionated and configurable elements of a render graph, modeled as individual plugins (a sketch follows the list below):
- general render graph components
- reusable effects (split into their own crate?)
- a shader library
- a unified material abstraction (design VERY tbd)
- big stuff: `bevy_pbr`'s `RenderMeshPlugin` (this should live here because everything is OPTIONAL, so 2d graphs don't have to include it)
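As a rough sketch of what "modeled as individual plugins" could mean in practice (the plugin names below are hypothetical, not existing bevy APIs):

```rust
use bevy::prelude::*;

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Hypothetical opt-in pieces of core_pipeline: a custom 2d graph
        // could add only the effects it wants and skip mesh rendering.
        .add_plugins((
            RenderMeshPlugin::default(),     // "big stuff", entirely optional
            BloomEffectPlugin::default(),    // a reusable effect
            CorePipelineShaderLibraryPlugin, // embedded shader library handles
        ))
        .run();
}
```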
Notice I haven't included any of the built-in render pass nodes (Prepass, Deferred, Transmissive, etc.) because those are very render-graph specific. `core_pipeline` should include utilities for doing each of these types of rendering, but the exact implementation of the nodes is left to the user.

### `bevy_pbr`
Ideally this would just be a wrapper/adapter around effects and systems from `core_pipeline`. It would still implement several render graph nodes, as well as `StandardMaterial` and the surrounding boilerplate, but in a sense it wouldn't own the underlying rendering systems.

### `bevy_sprite`
I don't know much about this crate: if it builds on top of `core_2d` the way `bevy_pbr` builds on `core_3d`, then its core elements should probably join `core_pipeline`. But if it just provides utilities like texture atlases, it's probably fine as-is.
## What should the new `bevy_core_pipeline` look like?

Remember, `core_pipeline`'s job isn't to make every plugin usable by ANY render graph, just any that it can sensibly account for. If truly everything is replaceable, then any significant customization amounts to a rewrite of the entire plugin, and should carry the maintenance cost associated with that. In general, `core_pipeline` should design utilities that would work for pipelines structured like the current versions of `core_2d` and `core_3d`.
A good first question to ask is what differentiates two render graphs from the standpoint of their public interface (though "public" isn't the best word, because a lot of rendering systems leak implementation details through query filters, etc.). This might include the following:
The two main friction points I've run into implementing a custom render graph are the first and the fifth: often systems will filter for views that have a built-in phase on them (via `With<RenderPhase<...>>` query filters), and the other (implicit plugin dependencies) is fairly self-explanatory.
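For illustration, a queue system written like the following (a hypothetical system, using the 0.13-era `RenderPhase` component) silently skips any view in a custom graph that doesn't carry the built-in `Opaque3d` phase:

```rust
use bevy::core_pipeline::core_3d::Opaque3d;
use bevy::ecs::prelude::*;
use bevy::render::render_phase::RenderPhase;

// Hypothetical queue system: the With<RenderPhase<Opaque3d>> filter
// hard-codes an assumption about which phases the render graph uses.
fn queue_my_effect(views: Query<Entity, With<RenderPhase<Opaque3d>>>) {
    for _view in &views {
        // queue phase items for this view ...
    }
}
```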
Another way to ask the question: what should `core_pipeline` consider canon? Basically, what are the assumptions we can make about render graphs that use its utilities? Here are a few candidates (definitely not exhaustive):

### Problem 1. Render Graph-scale config
Or, config that isn't as explicit. Stuff like plugins that have query filters for views with a certain phase on them.
The first item in the previous list indicates to me that a trait-based solution makes sense here, especially when render graphs are already type-indexed:
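A minimal sketch of what such a trait could look like, leaning on the existing type-indexing (all names here are hypothetical, not existing bevy APIs):

```rust
use std::any::TypeId;

/// Hypothetical: implemented once per render graph and used as a type index
/// into graph-scale config, mirroring how graphs are already type-indexed.
pub trait RenderGraphConfig: Send + Sync + 'static {
    /// The phase item types this graph queues, so a plugin can ask
    /// "does this graph support my phase?" instead of hard-coding filters.
    fn supported_phases() -> Vec<TypeId>;
}

/// Hypothetical example graph.
pub struct Core3dGraph;

impl RenderGraphConfig for Core3dGraph {
    fn supported_phases() -> Vec<TypeId> {
        // e.g. TypeId::of::<Opaque3d>(), TypeId::of::<AlphaMask3d>(), ...
        vec![]
    }
}
```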
### Problem 2. Plugin-level "dependency injection"

This is a much more difficult pattern to figure out, and much more of an open question. It applies to things like specifying where the SSAO effect's input textures come from and what resolution it should target.
This could be trait-based, which seems like the most rust-y way to do things, but I have no idea what such a trait would include. I toyed around with something like this:
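A stand-in sketch of that kind of trait (all types and names below are hypothetical):

```rust
/// Hypothetical handle type, standing in for whatever the render graph
/// uses to identify textures.
pub struct TextureHandle(u64);

/// Hypothetical "dependency injection" trait: a render graph implements this
/// to tell the SSAO plugin where its inputs come from and what to render at.
pub trait SsaoInputs: Send + Sync + 'static {
    fn depth(&self) -> TextureHandle;
    fn normals(&self) -> TextureHandle;
    fn target_resolution(&self) -> (u32, u32);
}

/// The plugin is generic over its input provider rather than
/// hard-coding the built-in prepass textures.
pub struct SsaoPlugin<I: SsaoInputs> {
    pub inputs: I,
}
```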
But this seems pretty cumbersome. True system piping (blocked on systems-as-entities?) might make this easier (pass `SystemId`s into the plugin struct), but that wouldn't have an easy way of passing along per-entity data that's been transformed from a query (an `EntityHashMap`?).

### Problem 3. A unified material abstraction
I don't have enough expertise to speak on this, but it's necessary for this refactor and desirable even right now, from what I can glean.
### Problem 4. An abstraction for lighting models
0.12's system of lighting-model depth IDs is a large step in this direction. However, if materials are to get a universal abstraction, this likely should as well. I've prototyped such a system, pre-caching deferred lighting pipelines and accessing them by a type index, like `DrawFunctions`. In the deferred prepass, a render command sets a push constant between draws so the shader can write the correct value to the lighting-id uint texture. Then, in the deferred lighting pass, each lighting model is run in order with its id/depth set accordingly. The information stored per lighting model has to be polymorphic over the render graph itself, though, since each render graph might require different things of each lighting model, like copying normals from a packed gbuffer to a different texture for other effects to read.
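A minimal sketch of the type-indexed registry idea, modeled loosely on how `DrawFunctions` maps types to indices (everything below is hypothetical, not the actual prototype):

```rust
use std::any::TypeId;
use std::collections::HashMap;

/// Hypothetical marker trait for a lighting model (e.g. a PBR or toon model).
pub trait LightingModel: 'static {}

/// Hypothetical registry: assigns each lighting model a stable u8 id, which
/// the deferred prepass writes (via push constant) into the lighting-id
/// texture, and which the lighting pass later uses as its depth id.
#[derive(Default)]
pub struct LightingModels {
    ids: HashMap<TypeId, u8>,
}

impl LightingModels {
    /// Registers a lighting model, returning its id (existing or newly assigned).
    pub fn register<M: LightingModel>(&mut self) -> u8 {
        let next = self.ids.len() as u8;
        *self.ids.entry(TypeId::of::<M>()).or_insert(next)
    }

    /// Looks up a previously registered lighting model's id.
    pub fn id<M: LightingModel>(&self) -> Option<u8> {
        self.ids.get(&TypeId::of::<M>()).copied()
    }
}
```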
Note: this mainly applies to graphs with deferred rendering support, and so it should still be just a component of a render graph. However, since a good material abstraction would have a notion of lighting models per material, it might need to be bundled with the material abstraction. Render graphs that are forward-only could then just ignore the extra functionality provided. Furthermore, since materials and lighting models need to queue `PhaseItem`s, there would need to be a trait signifying whether a render graph supports a given lighting model (and thus the materials that utilize that lighting model).