
[Merged by Bors] - Make PipelineCache internally mutable. #7205

crates/bevy_core_pipeline/src/bloom/mod.rs (1 addition, 1 deletion)
@@ -434,7 +434,7 @@ impl FromWorld for BloomPipelines {
             ],
         });

-        let mut pipeline_cache = world.resource_mut::<PipelineCache>();
+        let pipeline_cache = world.resource_mut::<PipelineCache>();

         let downsampling_prefilter_pipeline =
             pipeline_cache.queue_render_pipeline(RenderPipelineDescriptor {
crates/bevy_render/src/lib.rs (1 addition, 1 deletion)
@@ -198,7 +198,7 @@ impl Plugin for RenderPlugin {
             .add_stage(
                 RenderStage::Render,
                 SystemStage::parallel()
-                    .with_system(PipelineCache::process_pipeline_queue_system)
+                    .with_system(PipelineCache::process_pipeline_queue_system.at_start())
Member:

Was this strictly necessary for this PR? It probably makes sense if we ever add other systems to the stage (e.g. command buffer/render bundle based parallelization).

Contributor Author:

I didn't try, but I believe we do need to make sure this runs before any of the render nodes. Otherwise, you potentially won't be able to access a `RenderPipeline` that you inserted in the same frame, depending on system execution order.

Member:

The `at_end` on the `render_system` below ensures that all non-exclusive systems, including this one, run before it. Though that might change with stageless in the not-so-distant future, it's probably fine to omit the `at_start` here for now.

Contributor Author:

Oh yeah, that makes sense. I removed the `at_start` and left a comment instead, in case we move things around in the future.

                     .with_system(render_system.at_end()),
             )
             .add_stage(
crates/bevy_render/src/render_resource/pipeline_cache.rs (20 additions, 8 deletions)
@@ -17,6 +17,7 @@ use bevy_utils::{
     tracing::{debug, error},
     Entry, HashMap, HashSet,
 };
+use parking_lot::Mutex;
 use std::{hash::Hash, iter::FusedIterator, mem, ops::Deref};
 use thiserror::Error;
 use wgpu::{PipelineLayoutDescriptor, VertexBufferLayout as RawVertexBufferLayout};
@@ -343,6 +344,7 @@ pub struct PipelineCache {
     device: RenderDevice,
     pipelines: Vec<CachedPipeline>,
     waiting_pipelines: HashSet<CachedPipelineId>,
+    new_pipelines: Mutex<Vec<CachedPipeline>>,
 }

 impl PipelineCache {
@@ -357,6 +359,7 @@ impl PipelineCache {
             layout_cache: default(),
             shader_cache: default(),
             waiting_pipelines: default(),
+            new_pipelines: default(),
             pipelines: default(),
         }
     }
@@ -455,15 +458,15 @@ impl PipelineCache {
     /// [`get_render_pipeline_state()`]: PipelineCache::get_render_pipeline_state
     /// [`get_render_pipeline()`]: PipelineCache::get_render_pipeline
     pub fn queue_render_pipeline(
-        &mut self,
+        &self,
         descriptor: RenderPipelineDescriptor,
     ) -> CachedRenderPipelineId {
-        let id = CachedRenderPipelineId(self.pipelines.len());
-        self.pipelines.push(CachedPipeline {
+        let mut new_pipelines = self.new_pipelines.lock();
+        let id = CachedRenderPipelineId(self.pipelines.len() + new_pipelines.len());
+        new_pipelines.push(CachedPipeline {
             descriptor: PipelineDescriptor::RenderPipelineDescriptor(Box::new(descriptor)),
             state: CachedPipelineState::Queued,
         });
-        self.waiting_pipelines.insert(id.0);
         id
     }

@@ -484,12 +487,12 @@
         &mut self,
         descriptor: ComputePipelineDescriptor,
     ) -> CachedComputePipelineId {
-        let id = CachedComputePipelineId(self.pipelines.len());
-        self.pipelines.push(CachedPipeline {
+        let mut new_pipelines = self.new_pipelines.lock();
+        let id = CachedComputePipelineId(self.pipelines.len() + new_pipelines.len());
+        new_pipelines.push(CachedPipeline {
             descriptor: PipelineDescriptor::ComputePipelineDescriptor(Box::new(descriptor)),
             state: CachedPipelineState::Queued,
         });
-        self.waiting_pipelines.insert(id.0);
         id
     }

@@ -632,9 +635,18 @@ impl PipelineCache {
     ///
     /// [`RenderStage::Render`]: crate::RenderStage::Render
     pub fn process_queue(&mut self) {
-        let waiting_pipelines = mem::take(&mut self.waiting_pipelines);
+        let mut waiting_pipelines = mem::take(&mut self.waiting_pipelines);
         let mut pipelines = mem::take(&mut self.pipelines);

+        {
+            let mut new_pipelines = self.new_pipelines.lock();
+            for new_pipeline in new_pipelines.drain(..) {
+                let id = pipelines.len();
+                pipelines.push(new_pipeline);
+                waiting_pipelines.insert(id);
+            }
+        }
+
         for id in waiting_pipelines {
             let pipeline = &mut pipelines[id];
             if matches!(pipeline.state, CachedPipelineState::Ok(_)) {
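The pattern this diff introduces — queueing through `&self` by pushing onto a `Mutex`-guarded pending list, and reserving ids past both the processed list and the pending list so they stay valid once `process_queue` drains — can be sketched in isolation. This is a minimal sketch, not Bevy's actual types: it uses `std::sync::Mutex` in place of `parking_lot::Mutex`, strings stand in for `CachedPipeline`, and the `Cache`, `queue`, and `process` names are hypothetical.

```rust
use std::sync::Mutex;

struct Cache {
    pipelines: Vec<&'static str>,            // stands in for Vec<CachedPipeline>
    new_pipelines: Mutex<Vec<&'static str>>, // entries queued since the last process()
}

impl Cache {
    // Needs only a shared reference, so callers holding &Cache can queue.
    // The id is the index the entry will occupy after the next process().
    fn queue(&self, descriptor: &'static str) -> usize {
        let mut new_pipelines = self.new_pipelines.lock().unwrap();
        let id = self.pipelines.len() + new_pipelines.len();
        new_pipelines.push(descriptor);
        id
    }

    // Runs with exclusive access (like the PR's process_queue); draining in
    // FIFO order keeps every previously returned id valid as an index.
    fn process(&mut self) -> Vec<usize> {
        let mut ready = Vec::new();
        let mut pending = self.new_pipelines.lock().unwrap();
        for descriptor in pending.drain(..) {
            let id = self.pipelines.len();
            self.pipelines.push(descriptor);
            ready.push(id);
        }
        ready
    }
}

fn main() {
    let mut cache = Cache {
        pipelines: Vec::new(),
        new_pipelines: Mutex::new(Vec::new()),
    };
    let a = cache.queue("bloom_downsample");
    let b = cache.queue("bloom_upsample");
    assert_eq!((a, b), (0, 1)); // ids reserved before processing
    assert_eq!(cache.process(), vec![0, 1]);
    assert_eq!(cache.queue("tonemap"), 2); // numbering continues past processed entries
}
```

This is also why the `self.waiting_pipelines.insert(id.0)` calls move out of the queue methods in the diff: a `HashSet` behind `&self` would need its own synchronization, so the waiting-set bookkeeping happens once per frame inside `process_queue` instead.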