Exploring PGO for the Rust compiler #79442

Open
2 of 16 tasks
michaelwoerister opened this issue Nov 26, 2020 · 24 comments
Labels
A-reproducibility Area: Reproducible / Deterministic builds C-discussion Category: Discussion or questions that doesn't represent real issues. I-compiletime Issue: Problems and improvements with respect to compile times. T-bootstrap Relevant to the bootstrap subteam: Rust's build system (x.py and src/bootstrap) T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. T-infra Relevant to the infrastructure team, which will review and decide on the PR/issue. T-release Relevant to the release subteam, which will review and decide on the PR/issue. WG-compiler-performance Working group: Compiler Performance

Comments

@michaelwoerister
Member

michaelwoerister commented Nov 26, 2020

This issue is a landing place for discussion of whether and how to apply profile-guided optimization to rustc. There is some preliminary investigation of the topic in the Exploring PGO for the Rust compiler post on the Inside Rust blog. The gist of it is that the performance gains offered by PGO look very promising but we need to

  • confirm the results on different machines and platforms,
  • make sure that there are no reasons to not do PGO on the compiler, and
  • find a feasible way to implement this on CI (or find a less ambitious alternative).

Let's start with the first point.

Confirming the results

The blog post contains a step-by-step description of how to obtain a PGOed compiler -- but actually doing that is rather time-consuming. To make things easier, I could provide a branch of the compiler that has all the changes already applied and, more importantly, a pre-recorded, checked-in .profdata file for both LLVM and rustc. Alternatively, I could just put up the final toolchain for download somewhere. Even better would be to make it available via rustup somehow. Please comment below on how best to approach this.
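For anyone who wants to try the flow on something small first, here is a minimal sketch of the three-phase process on a single test crate, driven from a tiny Rust helper. Assumptions: rustc and llvm-profdata are on PATH, main.rs is a stand-in for a real workload, and the actual rustc build from the blog post wraps the same phases in x.py.

use std::fs;
use std::process::Command;

fn run(cmd: &mut Command) {
    let status = cmd.status().expect("failed to spawn process");
    assert!(status.success(), "command failed: {:?}", cmd);
}

fn main() {
    fs::create_dir_all("pgo-data").unwrap();

    // Phase 1: instrumented build; the profiling runtime writes .profraw files
    // into ./pgo-data when the binary runs.
    run(Command::new("rustc").args([
        "-O", "-Cprofile-generate=pgo-data", "main.rs", "-o", "main-instrumented",
    ]));

    // Phase 2: run a representative workload so the counters get populated.
    run(&mut Command::new("./main-instrumented"));

    // Merge the raw profiles into a single .profdata file.
    let profraws: Vec<_> = fs::read_dir("pgo-data").unwrap()
        .map(|entry| entry.unwrap().path())
        .filter(|p| p.extension().and_then(|e| e.to_str()) == Some("profraw"))
        .collect();
    run(Command::new("llvm-profdata")
        .args(["merge", "-o", "pgo-data/merged.profdata"])
        .args(&profraws));

    // Phase 3: rebuild using the merged profile.
    run(Command::new("rustc").args([
        "-O", "-Cprofile-use=pgo-data/merged.profdata", "main.rs", "-o", "main-optimized",
    ]));
}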

Reasons not to do PGO?

Concerns raised so far are:

  • This makes rustc builds non-reproducible -- something which I don't think is true. With a fixed .profdata file, both rustc and Clang should always generate the same output. That is, -Cprofile-use and -fprofile-use do not introduce any source of randomness, as far as I can tell. So if the .profdata file being used is tracked by version control, we should be fine. It would be good to get some kind of further confirmation of that, though (a small check for this is sketched after this list).

  • If we apply PGO just to stable and beta releases, we don't get enough testing for PGO-specific toolchain bugs.

  • It is too much effort to continuously monitor the effect of PGO (e.g. via perf.rlo) because we would need PGOed nightlies in addition to non-PGOed nightlies (the latter of which serve as a baseline).

  • Doing PGO might be risky in that it adds another opportunity for LLVM bugs to introduce miscompilations.

  • It makes CI more complicated.

  • It increases cycle times for the compiler.
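Regarding the reproducibility bullet above, here is the kind of small check I have in mind -- a sketch only, assuming a fixed merged.profdata is already present and that all other build inputs (toolchain, flags, sources) are held constant:

use std::fs;
use std::process::Command;

fn build(out: &str) {
    // Same sources, same flags, same fixed profile data for both builds.
    let status = Command::new("rustc")
        .args(["-O", "-Cprofile-use=merged.profdata", "main.rs", "-o", out])
        .status()
        .expect("failed to run rustc");
    assert!(status.success());
}

fn main() {
    build("build-a");
    build("build-b");
    // If -Cprofile-use introduces no randomness, the two binaries are identical.
    let identical = fs::read("build-a").unwrap() == fs::read("build-b").unwrap();
    println!("bit-identical: {identical}");
}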

The last two points can definitely be true. Finding out whether they have to be is the point of the next section:

Find a feasible way of using PGO for rustc

There are several ways we can bring PGO to rustc:

  1. Provide rustbuild support for easily building your own fully PGOed compiler.
  2. Provide PGOed builds only for stable and beta releases, where the additional cycle time is offset by the lower build frequency.
  3. Provide a kind of "best-effort" PGO which uses outdated (but regularly updated) profiling data, in the hope that it is accurate enough to still give most of the gains.

Let's go through the points in more detail:

  1. Easy DIY PGO via rustbuild - I think we should definitely do this. There is quite a bit of design space on how to structure the concrete build options (@luser has posted some relevant thoughts in a related topic). But overall it should not be too much work, and since it is completely opt-in, there's little risk involved. It is also a necessary intermediate step for the other two options.

  2. PGO for beta and stable releases only - The feasibility of option (2) depends on a few things:

  • Is it acceptable from a testing point of view to build stable and beta artifacts with different settings than regular CI builds? Arguably beta releases get quite a bit of testing because they are used for building the compiler itself. On the other hand, building the compiler is quite a sensitive task.

  • Is it technically actually possible to do the long, three-phase compilation process on CI, or would we run into time limits set by the infrastructure? We might be more flexible in this respect now than we have been in the past.

  • How do we handle cross-compiled toolchains where profile data collection and compilation cannot run on the same system? A simple answer there is: don't do PGO for these targets. A possible better answer is to use profiling data collected on another system. This is even more relevant for the "best-effort" approach as described below.

Personally I'm on the fence about whether I find this approach acceptable -- especially given that there is a third option that is potentially quite a bit better.

  3. Do PGO on a best-effort basis - After @pnkfelix asked a few questions in this direction, I've been looking into the LLVM profile data format a bit and it looks like it's actually quite robust:
  • Every function entry contains a hash value of the function's control flow graph. This gives LLVM the ability to check if a given entry is safe to use for a given function and, if not, it can just ignore the data and compile the function normally. That would be great news because it would mean that we can use profile data collected from a different version of the compiler and still get PGO for most functions. As a consequence, we could have a .profdata file in version control and always use it. An asynchronous automated task could then regularly do data collection and check it into the repository.

  • PGO works at the LLVM IR level, so everything is still rather platform-independent. My guess is that the majority of functions have the same CFG on different platforms, meaning that the profile data can be collected on one platform and then be used on all other platforms. That might massively decrease the amount of complexity for bringing PGO to CI. It would also be great news for targets like macOS where the build hardware is too weak to do the whole 3-phase build.

  • Function entries are keyed by symbol name, so if the symbol name is the same across platforms (which should be the case with the new symbol mangling scheme), LLVM should have no trouble finding the entry for a given function in a .profdata file collected on a different platform.

Overall I came to like this approach quite a bit. Once the .profdata file is just another file in the git repository, things become quite simple. If it is enough for that file to be "eventually consistent", we can just always use PGO without thinking about it twice. Profile data collection becomes nicely decoupled from the rest of the build process.
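One way to poke at the per-function entries described above is llvm-profdata show. A sketch, assuming llvm-profdata is on PATH and that show --all-functions --counts prints one Hash: line per function (treat the exact output format as an assumption):

use std::process::Command;

fn main() {
    let out = Command::new("llvm-profdata")
        .args(["show", "--all-functions", "--counts", "merged.profdata"])
        .output()
        .expect("failed to run llvm-profdata");
    let text = String::from_utf8_lossy(&out.stdout);
    // Each profiled function carries the CFG hash LLVM uses to decide whether
    // the entry still matches the code being compiled.
    let functions = text
        .lines()
        .filter(|line| line.trim_start().starts_with("Hash:"))
        .count();
    println!("functions with profile entries: {functions}");
}

Counting how many of those hashes change between two compiler versions (or between profiles collected on two platforms) would give a rough measure of how much data the best-effort approach loses.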

I think the next step is to check whether the various assumptions made above actually hold, leading to the following concrete tasks:

  • Confirm that PGO is actually worth the trouble, i.e. independently replicate the results from the Exploring PGO for the Rust compiler blog post on different systems. (Done. See Exploring PGO for the Rust compiler #79442 (comment))
  • Verify that the LLVM profdata format is as robust as described above:
    • Try to find documentation or ask LLVM folks if support for partially out-of-date profdata is well supported and an actual design goal (see Exploring PGO for the Rust compiler #79442 (comment))
    • Try to find documentation or ask LLVM folks if platform independence is well supported and an actual design goal.
    • Ask people who have experience using this in production.
    • Try it out: Compile various test programs with out-of-date data and with data collected on another platform, and see if that leads to any hard errors (a sketch of this experiment follows the task list).
  • Investigate how out-of-date profdata for rustc would typically be if it were collected only once a day (for example).
  • Investigate how big the mismatch between different platforms is. Concretely:
    • How many hash mismatches do we get on x86-64 Windows and macOS when compiling with profdata collected on x86-64 Linux?
    • How many hash mismatches do we get on Aarch64 macOS when compiling with profdata collected on x86-64 Linux?
    • What about x86 vs x86-64?
  • Investigate how much slower it is to build an instrumented compiler.
  • Investigate if using profdata leads to a significant compile time increase, that is, make sure that it is feasible to always compile with -Cprofile-use.
  • Double-check that PGO does not introduce a significant additional risk of running into LLVM miscompilation bugs. Ask production users for their experience.
  • Check if Rust symbol names with the current (legacy) symbol mangling scheme are platform-dependent, or if we would need to switch the compiler to the new scheme if we want to use profdata across platforms.
  • Confirm that -fprofile-use and -Cprofile-use do not affect binary reproducibility (if used with a fixed .profdata file).
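For the "try it out" task above, the experiment can be as small as the following sketch: point -Cprofile-use at deliberately stale or foreign profile data and check that compilation still succeeds (the expectation being that LLVM skips entries whose CFG hash no longer matches instead of erroring out -- which is exactly what the task is meant to verify). File names are illustrative.

use std::process::Command;

fn main() {
    let output = Command::new("rustc")
        .args([
            "-O",
            "-Cprofile-use=old-or-other-platform.profdata", // stale or foreign data
            "main.rs",
            "-o",
            "main-stale-pgo",
        ])
        .output()
        .expect("failed to run rustc");

    println!("compiled successfully: {}", output.status.success());
    // Any diagnostics about mismatched or unusable profile data end up on stderr.
    eprintln!("{}", String::from_utf8_lossy(&output.stderr));
}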

Once we know about all of the above we should be in a good position to decide whether to make an MCP to officially implement this.

Please post any feedback that you might have below!

@michaelwoerister michaelwoerister added I-compiletime Issue: Problems and improvements with respect to compile times. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. T-bootstrap Relevant to the bootstrap subteam: Rust's build system (x.py and src/bootstrap) T-infra Relevant to the infrastructure team, which will review and decide on the PR/issue. T-release Relevant to the release subteam, which will review and decide on the PR/issue. WG-compiler-performance Working group: Compiler Performance A-reproducibility Area: Reproducible / Deterministic builds C-discussion Category: Discussion or questions that doesn't represent real issues. labels Nov 26, 2020
@michaelwoerister
Member Author

One concern with the "best-effort" approach that I just became aware of is how it affects performance testing: Let's say you want to optimize some expensive function F in the compiler. You rewrite it to a more efficient version F', do a try-build, and a perf.rlo run to see if your optimizations had the desired effect. However, since F was compiled with PGO data available while F' is being compiled without PGO data, it will be an unfair comparison. F will have an advantage over F' and any performance improvement might not look as good as it actually is.

@jyn514
Member

jyn514 commented Nov 26, 2020

One concern with the "best-effort" approach that I just became aware of is how it affects performance testing

This might be overcomplicating things, but what if the default was not to use PGO builds, and enable them only for nightly/beta/stable? Then F wouldn't have an unfair advantage, but we would still get most of the benefits of the "best effort" method.

The disadvantage is that nightlies will now require a second full build; it will no longer be possible to use the latest build artifacts from bors.

@andjo403
Contributor

A question that I think is missing: how does storing .profdata in git affect the size of the repo? How big is the .profdata file, and how mergeable is it?

@the8472
Member

the8472 commented Nov 26, 2020

A variation on approach 3: Have stage1 gather PGO data while building stage2 for an auto-merge, then save that somewhere so it can be used during the next stage2 build of anything that has that merge as its nearest ancestor in the history.

@jhpratt
Member

jhpratt commented Nov 27, 2020

@andjo403 While by no means a solution, git LFS may be helpful regarding the size of the repo.

@michaelwoerister
Member Author

A question that I think is missing: how does storing .profdata in git affect the size of the repo? How big is the .profdata file, and how mergeable is it?

That's a good point! .profdata files are in a binary format; I don't think they would be mergeable. They are rather big, too. The .profdata files that I have for LLVM and rustc are 23 MB and 64 MB, respectively. On the other hand, they compress rather well, to 3.1 MB and 9.4 MB for a .tar.xz in this case. That's still quite hefty if you add that kind of data to the repository once a day.

Using Git LFS sounds a bit problematic to me because of its reliance on external storage. Maybe the data could be stored in a separate repository that gets pulled in as a submodule? Then one would not have to pull the entire thing.

@michaelwoerister
Member Author

There seems to be a text-based profile data format that looks pretty mergeable:

_ZNK4llvm20MemorySSAWrapperPass14verifyAnalysisEv
# Func Hash:
22759827559
# Num Counters:
2
# Counter Values:
0
0

_ZN4llvm9DIBuilder17createNullPtrTypeEv
# Func Hash:
12884901887
# Num Counters:
1
# Counter Values:
0

_ZN4llvm15SmallVectorImplINS_26AArch64GenRegisterBankInfo17PartialMappingIdxEE6appendIPKS2_vEEvT_S7_
# Func Hash:
37713126052
# Num Counters:
3
# Counter Values:
0
0
0
# Num Value Kinds:
1
# ValueKind = IPVK_MemOPSize:
1
# NumValueSites:
1
0

I don't know how well supported it is. Surprisingly it seems to be slightly more compact than the binary format (20 MB vs 23 MB and 57 MB vs 64 MB). It also compresses better than the binary format. But it would have to be stored in the repository uncompressed in order to be diffable, right? Or does git have any tricks up its sleeve that allow it to store compressed diffs?
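For reference, the text format shown above can be produced from an existing binary profile with llvm-profdata merge --text -- a small sketch, with illustrative file names:

use std::process::Command;

fn main() {
    // Convert a binary .profdata file into the textual encoding shown above.
    let status = Command::new("llvm-profdata")
        .args(["merge", "--text", "-o", "rustc.proftext", "rustc.profdata"])
        .status()
        .expect("failed to run llvm-profdata");
    assert!(status.success());
}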

@michaelwoerister
Member Author

This might be overcomplicating things, but what if the default was not to use PGO builds, and enable them only for nightly/beta/stable? Then F wouldn't have an unfair advantage, but we would still get most of the benefits of the "best effort" method.

A variation of this would be to build a non-PGOed baseline compiler just for perf.rlo runs. That could happen in parallel to building the modified compiler.

@michaelwoerister
Member Author

This slide deck from 2013 states the following design goals for LLVM's instrumentation based PGO support:

  • Degrade gracefully when code changes
  • Profile data not tied to specific compiler version
  • Minimize instrumentation overhead

The presentation probably refers to front-end based instrumentation, since the IR-level instrumentation that rustc uses was introduced two years later, but the two instrumentation approaches share the same infrastructure and I doubt that the design goals have changed in the meantime.

This blog post also talks about function CFG hashes making sure that outdated profile data is detected and ignored. Interestingly, it also mentions that the dotnet runtime uses a best-effort approach similar to the one described above.

@Mark-Simulacrum
Member

I think it would be quite reasonable to store the PGO data files on S3 or something similar and just have CI (or, optionally, local builds) point at that URL. I expect that, regardless of what we do, we'll want enabling it to be optional. I would not expect us to store them in git or similar because -- at least AFAIK -- inspecting changes to them isn't really feasible/desirable.

We basically already do this for the bootstrap compiler (i.e., it's just downloaded by hash/version) and these artifacts would be no different.

One question I have, @michaelwoerister, is the extent to which we can profile-use artifacts built on different machines -- are there absolute path dependencies here? Do we need some special handling for this? In particular, I would love for local developers to be able to use the same artifacts CI did without too much hassle (i.e., not in docker but just building directly). It sounds like based on what you've said this should not be a problem, but it would be good to be certain here. (I guess this is part of "reproducible builds" -- do I get the same profiling information across different runs on the same workload? Or does e.g. ASLR make the profiles radically different?)

If the profiles are sufficiently opaque as to not care too much about the producing rustc's origins, one approach might be to use perf.rlo hardware exclusively to generate the instrumented rustc's and profile them. We already build rustc at each commit on perf.rust-lang.org in order to record the bootstrap compile times, and building it in an instrumented fashion would not be too hard, I suspect. Once we had that we could use it to gather profiling data (likely on the perf.rlo benchmarks) and feed that back into the next master commit. This would mean we're always off by one commit's worth of changes but I expect that to be a minor loss.

I think a great next step here would be to get some idea on:

  • How much slower is building an instrumented rustc?
  • How much slower is running an instrumented rustc? (i.e., what is the overhead of collection?)
    • And, relatedly, how much does instrumentation increase noise in instruction counts or other perf.rlo tracked stats -- can we use instrumented rustc's on perf?

Presuming the answer to these questions is "not much" (5% wall time is probably the limit on current perf hardware; but I imagine that getting better or more hardware would not be too hard if we needed to), then I think a good series of next steps would be:

  • Land support in rust-lang/rust for a config.toml flag enabling PGO (both instrumentation and loading). AFAICT, this is mostly a matter of passing a couple of flags, and we'd not be enabling this on CI, so it should be relatively easy, I think?
  • Submit a PR to perf.rlo which enables PGO instrumentation on the rustc bootstrap. I expect this to not be terribly difficult.
  • Switch perf.rlo to use that instrumented rustc when building some/all of the benchmarks, and upload collected data into S3
    • In theory this should basically not affect published results on perf.rlo if the overhead is sufficiently small, or at least be a one-time regression. If this increases noise or is otherwise problematic, we'll need to explore alternatives (e.g., a dedicated build server).
  • At this point we have PGO data for every commit on master, with roughly 2 hours delay after that commit lands on master.
    • In theory it is feasible that at this point we can also switch perf.rlo to be something rustc CI gates on -- i.e., we can remove the delay, since perf wouldn't need the pre-built binaries from CI.
  • Add support to rustbuild to use this PGO data, similarly to how we use the CI-built LLVM, when building rustc.
  • Switch that support on for CI builds
    • Presumably, this is fairly cheap? We're still building the same number of compilers.

@michaelwoerister
Member Author

When it comes to hosting the profile data in version control versus somewhere external, I think the main question to clarify is how (historically) reproducible we want PGO builds to be: If we store profile data in git we can go to any commit and get the exact same build because PGO data is guaranteed to be available. If we host the data externally we have less of a guarantee that the data will still be available after a few months or years.

However, after you mentioned the bootstrap compiler also being stored externally, I now realize that we already have "critical" data stored outside of version control. So storing PGO data on S3 would not make things worse at least.

One question I have @michaelwoerister is the extent to which we can profile-use artifacts built on different machines -- are there absolute path dependencies here?

Yes, there are some absolute paths in the profile data. Some symbol names are augmented with the path of their originating source file -- this seems to be necessary for ThinLTO to work properly in all cases. I only discovered this recently. But there is good news:

  • This seems to affect only a small percentage of symbol names, and
  • there is a mechanism for stripping away path prefixes, which should allow us to make the data machine independent (but requires some fiddling in rustbuild).

Overall I think this problem is solvable.

do I get the same profiling information across different runs on the same workload? Or does e.g. ASLR make the profiles radically different?

You get the same profile data if (and only if) the workload is deterministic. If there is some source of randomness, like if pointers are being hashed or compared (even without ASLR), then profile data will change. However, if we just store the profile data somewhere, things should be deterministic -- which luckily also happens to be the better approach from a build times perspective.

How much slower is building an instrumented rustc?

Not much slower but noticeable, I think. I added that question to the TODO list in the OP.

How much slower is running an instrumented rustc?

Quite noticeable. I think a 20-30% slowdown should be expected.

And, relatedly, how much does instrumentation increase noise in instruction counts or other perf.rlo tracked stats -- can we use instrumented rustc's on perf?

I don't think instruction counts would get a lot noisier -- but maybe I am wrong. Instrumentation code has to access various runtime counters in memory all the time, which might mess with the cache. And it has to write all that data to disk, which might introduce noise too.

Overall I am skeptical about completely switching perf.rlo to using instrumented builds. On the plus side it would solve the unfairness problem mentioned above. And it would make setting this up easier. But I'm a bit worried that it might skew the performance data too much.

One thing to consider here is that the accuracy of instrumentation-based profile data collection is quite independent of the underlying hardware, since it works by simply counting how many times each branch is taken. So it can be moved to a slow machine without a problem and, more importantly, it can be executed on machines with inconsistent performance characteristics (like in a VPS). I'm also confident that the entire perf.rlo benchmark suite is way too big and that we could get the same profile data quality with something that has 10% of the runtime.

So, I currently tend to think that we would be better off running data collection separately somewhere. Although it can still be based on the perf.rlo framework (running in a special mode) if that makes things easier.

We already build rustc at each commit on perf.rust-lang.org in order to record the bootstrap compile times

Is that the same compiler that is then used to run the benchmarks? I assumed that it would be much better from a maintainability standpoint to add a couple more "regular" docker-based builds for providing the instrumented compiler (one for Unix, one for Windows). They could even do the data collection right after building (because we don't need to care about hardware performance consistency).

The fairness problem mentioned above could also be solved by always using a non-PGOed compiler for perf.rlo benchmarking. In the worst case this would mean a single additional x86-64 Linux dist build, right?

@michaelwoerister
Member Author

michaelwoerister commented Nov 30, 2020

@Mark-Simulacrum I think the first point in your list of action items (adding PGO support to rustbuild) makes sense, regardless of how we proceed exactly. I opened #79562 for discussing that in detail.

@Mark-Simulacrum
Member

OK, so it sounds like using perf.rlo to collect data is likely not a good fit: it's not really needed, since we expect the collection to be about as deterministic as running it in CI, and it would unacceptably slow down builds.

Is that the same compiler that is then used to run the benchmarks?

No, perf.rlo doesn't use the compiler it builds to run benchmarks today.

The unfairness problem indeed seems hard to tackle. I was initially thinking that it wouldn't be that big a deal, but I think the most unfortunate element is that we'd presumably begin to "expect" regressions from changing hot code (since it would lose PGO benefits) and that seems pretty bad. I think using non-PGO builds on perf for now is probably the way to go; I think we should be able to afford a single perf builder. It'll also be good to have something to compare against in case any weird bugs show up later on, to make sure it's not PGO being buggy.

That said, if we go with the off-by-one approach to data collection, the unfairness problem will spread to nightlies too: if a patch changing hot code lands and ships in nightly, then that patch will plausibly be a regression to nightly performance. On beta and stable we probably won't see that as much (we can land dummy README-changing patches or something before release). I'm not sure if we should try to mitigate that somehow. Maybe in practice the effects of PGO on even very hot code are not major, and this is all worrying over nothing.

So maybe it's worth taking a look at doing PGO within a single build cycle (i.e., we build a compiler, collect data, and then build another compiler) in CI. If that's feasible then it removes the unfairness problem and is all around better, I suspect.

I think it makes sense to wait until we have support in rustbuild for doing this and then see how much we can fit into e.g. x86_64-linux builders to start: if we can pull off a full PGO cycle, great, if not, we can start taking a look at other options (like, for example only doing "perfect" PGO on beta/stable and doing so across several CI cycles, and on nightly just using beta/stable PGO data perhaps).

@michaelwoerister
Member Author

It'll also be good to have something to compare against in case any weird bugs show up later on, to make sure it's not PGO being buggy.

👍

That said, if we go with the off-by-one approach to data collection, the unfairness problem will spread to nightlies too: if a patch changing hot code lands and ships in nightly, then that patch will plausibly be a regression to nightly performance.

Yes -- I think that would be acceptable though. The unfairness problem is more of an issue for performance measurement, where you want accurate numbers about a small change. For real-world compile times I don't think it would be noticeable. And for stable and beta you can get rid of the problem "manually" by doing an empty commit so that the PGO data can effectively catch up with the actual code.

So maybe it's worth taking a look at doing PGO within a single build cycle (i.e., we build a compiler, collect data, and then build another compiler) in CI. If that's feasible then it removes the unfairness problem and is all around better, I suspect.

My estimate is that that would be intolerably slow :)

I think it makes sense to wait until we have support in rustbuild

I think so too.

@luser
Contributor

luser commented Nov 30, 2020

You get the same profile data if (and only if) the workload is deterministic. If there is some source of randomness, like if pointers are being hashed or compared (even without ASLR), then profile data will change. However, if we just store the profile data somewhere, things should be deterministic -- which luckily also happens to be the better approach from a build times perspective.

FYI I looked into this a while back and there just isn't any straightforward way to make the workload deterministic. For Firefox builds, we settled on being comfortable with publishing the profile data and making sure that the optimized build step was deterministic given that same input. That means that anyone ought to be able to reproduce the Firefox builds we publish given the same source + profiling data we publish, which seems like a reasonable compromise.

We split the build into three separate tasks: the instrumented build, the profile collection, the optimized build. This also helped us enable PGO for cross-compiled builds like the macOS build on Linux. If you're going to have a fixed set of profile data that gets updated periodically then that simplifies things further. A lot of the Firefox build choices were made prior to switching all the builds to clang, so some of these things that are possible with LLVM PGO were not possible with MSVC/GCC PGO.

@michaelwoerister
Member Author

FYI I looked into this a while back and there just isn't any straightforward way to make the workload deterministic. For Firefox builds, we settled on being comfortable with publishing the profile data and making sure that the optimized build step was deterministic given that same input.

Yes, I think that is the most promising approach and it works well with re-using profile data generated on other machines/platforms.

@michaelwoerister
Member Author

#80262 added PGO support for the Rust part of Linux x64 dist builds, and perf.rlo shows the expected speedups for check builds and other test cases that don't invoke LLVM 🎉

I think this is confirmation enough that the results from my blog post can indeed be extrapolated to other systems too.

@FilipAndersson245

What needs to be done to allow Windows builds to benefit from PGO?

@FilipAndersson245

Is PGO on ice for non-Linux x64 builds?

@thedrow

thedrow commented Sep 26, 2021

Since we're already using PGO for rustc on Linux x64, I wonder if we could achieve greater speedups using Facebook's BOLT.

They have a guide for optimizing Clang with BOLT.

@michaelwoerister I think we can add two items to the check list:

  • Attempt to optimize rustc with BOLT (a rough sketch of the workflow follows this list)
  • Attempt to optimize LLVM with PGO and BOLT
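For the first item, the workflow from the Clang BOLT guide boils down to roughly the following -- a sketch under assumptions: perf, perf2bolt, and llvm-bolt are installed, the profiled binary was linked with --emit-relocs, and paths/workloads are illustrative. The guide adds further reordering and splitting flags on top of the minimal llvm-bolt invocation shown here.

use std::process::Command;

fn run(cmd: &mut Command) {
    assert!(cmd.status().expect("spawn failed").success(), "command failed: {:?}", cmd);
}

fn main() {
    // 1. Sample a representative rustc workload with branch-stack (LBR) sampling.
    run(Command::new("perf").args([
        "record", "-e", "cycles:u", "-j", "any,u", "-o", "perf.data", "--",
        "rustc", "--edition=2018", "--crate-type=lib", "big-crate/src/lib.rs",
    ]));

    // 2. Convert the perf samples into BOLT's .fdata format for the binary of interest.
    run(Command::new("perf2bolt").args([
        "-p", "perf.data", "-o", "rustc.fdata", "path/to/librustc_driver.so",
    ]));

    // 3. Rewrite the binary using the collected profile.
    run(Command::new("llvm-bolt").args([
        "path/to/librustc_driver.so",
        "-o", "librustc_driver.bolted.so",
        "-data=rustc.fdata",
    ]));
}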

@aminya

aminya commented Dec 31, 2021

I noticed that PGO fails in the final step when LTO is enabled for the builds. Not sure why this happens, but I get an access violation (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION).

@Kobzol
Contributor

Kobzol commented Feb 25, 2022

LLVM 14 (with in-tree support for BOLT) is nearing its release. I'll try to use BOLT to optimize LLVM (just LLVM, not rustc yet) in #94381.

@jyn514
Member

jyn514 commented Feb 3, 2023

@Kobzol what's the current status of PGO? We have it enabled on all nightly builds, right? Can we close this issue now?

@Kobzol
Contributor

Kobzol commented Feb 3, 2023

It's enabled for x64 Linux and Windows, but not yet for macOS.
