
RFC: Never allow reads from uninitialized memory in safe Rust #837

Closed
aturon wants to merge 7 commits

Conversation

aturon
Member

@aturon aturon commented Feb 13, 2015

Set an explicit policy that uninitialized memory can never be exposed
in safe Rust, even when it would not lead to undefined behavior.

See rust-lang/rust#20314.

Rendered

@aturon
Member Author

aturon commented Feb 13, 2015

@aturon aturon self-assigned this Feb 13, 2015
will serve to set an explicit policy that:

**Uninitialized memory can ever be exposed in safe Rust, even when it
would not lead to undefined behavior**.
Contributor

I assume that you mean "can never be exposed". 😀

# Summary

Set an explicit policy that uninitialized memory can ever be exposed
in safe Rust, even when it would not lead to undefined behavior.
Contributor

I noticed the one below first, but this also should say "can never".

@aturon
Member Author

aturon commented Feb 13, 2015

@quantheory Shortest "Detailed Design" ever and I still managed to insert crucial typos!

memory and typesafe, but they carry security risks.

In particular, it may be possible to exploit a bug in safe Rust code
that leads that code to reveal the contents of memory.
Contributor

Minor nit: I would say "causes" rather than "leads".

@quantheory
Contributor

I'm pretty good at making typos myself! I do want to take a crack at copyediting all the RFCs one of these days...

Regarding the proposal itself, 👍. I think that in a lot of the cases where it's "obvious" that initialization is unnecessary, a good dead code elimination pass has some chance of optimizing out the extra cost. Anyway, the cost is likely to be acceptable given that you're incurring some cost by allocating the buffer in the first place and/or doing whatever operation fills it with the data you actually wanted.

@vadimcn
Contributor

vadimcn commented Feb 13, 2015

So, what would this RFC mean in practical terms? Sounds like
a) uninitialized memory cannot leave unsafe{} blocks;
b) one cannot pass uninitialized memory to safe methods, even inside an unsafe{} block?

@aturon
Member Author

aturon commented Feb 13, 2015

@vadimcn

Yes, and (b) in particular means that we would have to zero out memory or something similar in the implementation of read_to_end and friends.
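A rough sketch of what that zeroing could look like (illustrative only — not the actual std implementation, and the helper name is invented):

```rust
use std::io::{self, Read};

// Illustrative sketch: zero the buffer space before handing it to an
// arbitrary `Read` impl, so a buggy reader can only ever observe zeros,
// never stale heap contents. `read_to_end_zeroing` is a made-up name.
fn read_to_end_zeroing<R: Read>(r: &mut R, out: &mut Vec<u8>) -> io::Result<usize> {
    let mut total = 0;
    loop {
        let start = out.len();
        out.resize(start + 4096, 0);        // zero-initialize before read()
        let n = r.read(&mut out[start..])?; // reader never sees uninit bytes
        out.truncate(start + n);            // keep only what was actually read
        if n == 0 {
            return Ok(total);
        }
        total += n;
    }
}
```

This is exactly the cost under discussion: every byte of buffer space is written twice, once with zeros and once with real data.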

@nikomatsakis
Contributor

👍

@pythonesque
Contributor

Given that there are really not too many valid uses for reading uninitialized memory even in unsafe code, this seems like a welcome change for consistency.

@alexcrichton
Member

I'm personally not quite in favor of this proposal. I think that it's a fine line in how we leverage unsafe for various behaviors. Some topics, like memory safety, are black and white when it comes to unsafe. Other topics, like "security-related code" for example, I think are a bit more gray.

Taken to an extreme, I would find it quite unfortunate if the "I/O rule of thumb" was to use unsafe_read_to_end (or whatever the equivalent is) because it is faster. Reaching to unsafe when no memory/type unsafety is necessary seems like a pretty huge hammer (as it then allows the type/memory unsafety). In other words, one could reach for unsafe_read_to_end for performance and then accidentally call an unsafe function they did not intend.

Put another way, the motivation here seems to be solely "uninitialized memory is a security risk," but I think there is a much broader issue about the role of unsafe and "secure code". For example, should I make the get_password method on a structure unsafe because it exposes the plaintext contents of the password? (It's data I don't want leaked on the wire, of course.) We haven't in the past used unsafe for "not secure" code and I would be worried about the trend this may be starting.

@quantheory
Contributor

@alexcrichton @aturon

I think that there's a problem of semantics here. Most discussions of "memory safety" that I can recall consider reads of uninitialized data to be unsafe by definition, because of issues like:

  1. Uninitialized bare pointers or indices can lead to UB, in the "nasal demons" sense.
  2. Uninitialized objects that can be corrupt usually are corrupt, meaning that use of such objects is a threat to type safety, and can lead indirectly to UB.
  3. Use of uninitialized data causes a program that is expected to be deterministic to become non-deterministic (in a similar manner to data races in non-thread-safe programs).
  4. An attacker may be able to use the read to view privileged data.
  5. An attacker may be able to write to a section of memory that they know will later be erroneously used by a routine that reads uninitialized data, thus spoofing data that's assumed to come from another source.

The usual safety guarantees in Rust protect you from problem 1. They don't protect you from problem 2; an uninitialized Vec is obviously not memory safe, since internally the pointer, length, and capacity could be set to anything at all. You could draw a line in the sand and say that only some uninitialized data is allowed in safe code (e.g. an uninitialized u8 is OK but an uninitialized Vec<T> is not). But you still have problems 3-5 to contend with (and probably others I haven't thought of).

If you define memory safety as meaning only that you avoid undefined behavior in the C/C++ sense, then you can get away with some reads of uninitialized data in safe code. But it's tricky business, and a lot of people would still not consider that to be consistent with what they expect "memory safe" code to allow.

@alexcrichton
Member

I think that there's a problem of semantics here. Most discussions of "memory safety" that I can recall consider reads of uninitialized data to be unsafe by definition.

Note that the motivation for this RFC is clear in that this is not a question of memory safety. The only cases being considered are uninitialized arrays of scalars which cannot result in memory safety violations if read.

In this sense points 1/2 are somewhat moot as the "objects" are scalars which basically can't be corrupt. Points 3/4/5 do not actually affect memory safety but play into unsafety when related to security.

@aturon
Member Author

aturon commented Feb 17, 2015

@alexcrichton

I'm personally not quite in favor of this proposal. I think that it's a fine line in how we leverage unsafe for various behaviors. Some topics, like memory safety, are black and white when it comes to unsafe. Other topics, like "security-related code" for example, I think are a bit more gray.

Absolutely. Which is exactly why I felt this topic merited an RFC, since this is a gray area for which there's not complete agreement, and we need to make a clear policy decision.

Taken to an extreme, I would find it quite unfortunate if the "I/O rule of thumb" was to use unsafe_read_to_end (or whatever the equivalent is) because it is faster. Reaching to unsafe when no memory/type unsafety is necessary seems like a pretty huge hammer (as it then allows the type/memory unsafety). In other words, one could reach for unsafe_read_to_end for performance and then accidentally call an unsafe function they did not intend.

That does sound bad! But I'll note that if this policy turns out to be too expensive in practice -- that is, if people end up reaching for unsafe a lot because of it -- it would be easy to revert later. Unsafe APIs might simply transition to safe, and APIs that internally zeroed (like read_to_end likely would) can simply stop zeroing. I think it's harder to transition in the other direction.

Put another way, the motivation here seems to be solely "uninitialized memory is a security risk," but I think there is a much broader issue about the role of unsafe and "secure code". For example, should I make the get_password method on a structure unsafe because it exposes the plaintext contents of the password? (It's data I don't want leaked on the wire, of course.) We haven't in the past used unsafe for "not secure" code and I would be worried about the trend this may be starting.

I understand where you're coming from here, but as a rule, I generally prefer to avoid arguments from extremes/slippery slope arguments. We as a community get to decide exactly what safety covers, and thanks to the RFC process we can ensure that changes to this policy are deliberate and relatively clear.

I feel like this is a way of framing the issue in much more extreme terms than the RFC is actually talking about. Yes, there are many other security issues, but this RFC is about a specific policy decision with concrete, quantifiable tradeoffs. In the case of IO -- which seems to be the main culprit at the moment -- we can and should measure the performance impact here.

But as we're seeing with hashing, it's possible to provide some localized mitigation against security/DoS risks based on a specific cost/benefit analysis. If the local downsides are not too great, why not take steps where we can to be even more ambitious with the problems that Rust helps you catch?

I know that in general we've been moving away from opinionation (green threading, high-level IO abstractions, etc), but we shouldn't go into full retreat! Especially in cases like this, and hashing, where there are clear and relatively simple ways to opt out of the guarantee for performance-critical scenarios, without paying any extra performance costs.

@quantheory
Contributor

@alexcrichton

The only cases being considered are uninitialized arrays of scalars which cannot result in memory safety violations if read.

That is what I was getting at. If I wasn't clear, I think that we are in perfect agreement about all of the concrete technical details and what the RFC is intended to do. (Though I'd add a nitpick that "scalar" is not entirely the right word; references/pointers, and in some languages strings are considered scalars, after all.)

Points 3/4/5 do not actually affect memory safety but play into unsafety when related to security.

I think we've miscommunicated. My point is that this is true for some definitions of "memory safety", and trivially false for others, and there is (to my knowledge) no authoritative source for what the "true" definition is. A lot of people would consider a program to be memory-unsafe any time it has a read of uninitialized data which causes non-deterministic behavior, even if undefined behavior is not possible.

(See for instance this blog post bemoaning the fact that the definition of memory safety really is not universal and black-and-white. Some of its sources consider reads of uninitialized data to be memory errors, while others don't, though unfortunately I think some of the academic stuff is paywalled. Note also that many developer groups de facto consider anything flagged by valgrind's Memcheck or similar tools to be memory-unsafe, except for memory leaks and edge cases not covered by the tool. Typically this includes use of uninitialized data.)

@kmcallister
Contributor

I don't think this RFC is about the definition of "memory safety". I think it's about what developers expect from a "safe" language. This is a fuzzy notion, driven by humans rather than computer science, but it's a vital one.

In C nobody would bat an eye at this design. But Rust does the safe thing 99.8% of the time.

@mahkoh
Contributor

mahkoh commented Feb 17, 2015

@aturon:

APIs that internally zeroed (like read_to_end likely would) can simply stop zeroing.

If that is the case then you might as well not zero to begin with. People who want it zeroed for security reasons cannot use it because it might change at any time unless it is documented.

@aturon
Member Author

aturon commented Feb 17, 2015

@mahkoh

APIs that internally zeroed (like read_to_end likely would) can simply stop zeroing.

If that is the case then you might as well not zero to begin with. People who want it zeroed for security reasons cannot use it because it might change at any time unless it is documented.

I wasn't talking about leaving it undocumented; the RFC is (I think) quite clearly setting an explicit policy. I'm just saying that, at some later point, it may be possible to re-evaluate and change this policy -- but we would presumably only do so if there was a strong consensus that the security benefits were minimal and performance drawbacks onerous.

Regardless, I don't think this particular point has much bearing on the debate; it was responding to a hypothetical doomsday scenario. The question is, what is the best policy decision we can make given what we know today?

@mahkoh
Contributor

mahkoh commented Feb 17, 2015

I'm just saying that, at some later point, it may be possible to re-evaluate and change this policy -- but we would presumably only do so if there was a strong consensus that the security benefits were minimal and performance drawbacks onerous.

If read_to_end is documented to zero then this cannot be changed post 1.0.

@aturon
Member Author

aturon commented Feb 17, 2015

@mahkoh

If read_to_end is documented to zero then this cannot be changed post 1.0.

I see what you're getting at, but I don't think we should document it as literally zeroing. Rather, this RFC would establish a global policy about uninitialized memory and safe code.

That is, no code would be allowed to rely (in the spec sense) on the values being literally zeros.

@aturon
Member Author

aturon commented Feb 17, 2015

@alexcrichton BTW, I seem to remember you taking some measurements about the perf hit for IO. Any chance you could dig those up?

@mahkoh
Contributor

mahkoh commented Feb 17, 2015

@aturon: The same applies if it is documented as overwriting the allocated memory before it is passed to read. This implies that you can freely pass this memory to an attacker without worrying about leaking data. Changing this post 1.0 is not possible.

@huonw
Member

huonw commented Feb 18, 2015

(I think the RFC could probably clarify its use of terminology to disambiguate.)

@mahkoh
Contributor

mahkoh commented Feb 18, 2015

I don't believe there is a need for such a distinction.

@huonw
Member

huonw commented Feb 18, 2015

It doesn't make sense for undefined behaviour in Rust to be directly tied to the back-end the main implementation happens to be using now.

@mahkoh
Contributor

mahkoh commented Feb 18, 2015

That may be so but the current set of undefined behaviors seems to be derived from things that cause UB in LLVM or are expected to cause UB in LLVM once better optimizations become available. Is there any justification at all for making this UB except to stop people from doing it?

@pythonesque
Contributor

I retract my vote in favor of this RFC if we're actually treating reads of uninitialized memory as UB, not just "don't do this in safe code." There are legitimate reasons to do this in unsafe code.

@codyps

codyps commented Feb 18, 2015

@huonw : reading from undef or uninitialized memory is already undefined behavior. This rule goes further than limiting reading from undef: it eliminates our ability to pass undef outside of unsafe even if that undef is never read.

@aturon
Member Author

aturon commented Feb 18, 2015

META: I'm going to need to step away from this debate for the next couple of days to help with the alpha2 push. I do think that several good questions/concerns have been raised and that the text needs to be more clear.

@nikomatsakis
Contributor

@pythonesque I tend to think that calling such reads undefined behavior may be too strong, but it's worth pointing out that some such cases are undefined behavior in C (and I have no doubt that LLVM will exploit that). See e.g. https://www.securecoding.cert.org/confluence/display/seccode/EXP33-C.+Do+not+read+uninitialized+memory. In any case, I'm curious to know what legit reason you have in mind for reading uninitialized memory?

I think that at minimum it makes sense to say that the standard library will not expose uninitialized memory to its clients without an unsafe keyword being required somewhere (and moreover that all libraries are encouraged to follow a similar rule). The exact rules that should apply to third party code -- and the minimum rules required for unsafe code to be considered stable and well-defined -- seem like a somewhat separate (but entangled) topic.

At least that's my current feeling.

@codyps

codyps commented Feb 18, 2015

If we consider accepting this RFC, we need a mechanism to maintain our ability to create efficient apis in a manner transparent to the user of the API. For the specific case of Read and read_to_end, we could probably add a MaybeUndef type that wraps the out argument & provides an unsafe fn unwrap(self) -> &[u8] method to obtain the (potentially uninitialized) raw vector or a get that zeros it if needed.

Frankly, we might need a more general way to indicate that arguments are intended to be used as out args (so the compiler can check against improper usage).
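A sketch of what such a wrapper might look like — the type and method names are hypothetical, mirroring the comment above rather than any real API:

```rust
// Hypothetical `MaybeUndef` wrapper (illustrative; no such std type).
// It gates access to possibly-uninitialized bytes: either the caller
// takes responsibility via `unsafe`, or pays for zeroing via `get`.
pub struct MaybeUndef<'a> {
    buf: &'a mut [u8],
}

impl<'a> MaybeUndef<'a> {
    pub fn new(buf: &'a mut [u8]) -> Self {
        MaybeUndef { buf }
    }

    /// Caller promises every byte is written before it is read.
    pub unsafe fn unwrap(self) -> &'a mut [u8] {
        self.buf
    }

    /// Safe accessor: zeroes the bytes so no stale data can be observed.
    pub fn get(self) -> &'a mut [u8] {
        for b in self.buf.iter_mut() {
            *b = 0;
        }
        self.buf
    }
}
```

The point of the design is that the zeroing cost is only paid on the safe path; code that can prove it never reads before writing opts out explicitly with `unsafe`.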

@codyps

codyps commented Feb 18, 2015

@nikomatsakis the problem is that this RFC isn't restricting "C/LLVM undef behavior", it's making it so that bugs in other code don't trigger "C/LLVM undef behavior".

The RFC goes much further than forbidding the reading of uninitialized memory (which is already forbidden/undefined).

@nikomatsakis
Contributor

Ah, I just remembered something we used to do in a past life that was a (perhaps?) legitimate use of uninitialized memory. The idea was you wanted to construct a set of integers of fixed-size domain but not pay O(n) initialization costs. (We were storing sets of nodes from a potentially very large control-flow-graph.)

Because you do not want to pay O(n) cost, you can't even take the time to zero out any memory. The way we did this was to have two arrays, both of the same length, neither initialized, and a counter with the number of entries. One array contains, for each possible value, the length of the set at the time it was added (its "index"). And the other contains, for each possible index, the value that was added at that time. This way you can "check-and-double-check" to find out if a value was indeed added without ever initializing the full arrays. This does require reading uninitialized memory, though, because you must read from the indices array. Here is a rough sketch of the idea in Rust code:

struct IntSet {
    len: usize,
    capacity: usize,
    values: Box<[usize]>,   // deliberately never initialized up front
    indices: Box<[usize]>,  // deliberately never initialized up front
}

impl IntSet {
    fn insert(&mut self, value: usize) {
        assert!(value < self.capacity);
        if self.contains(value) { return; }
        let index = self.len;
        self.len += 1;
        self.indices[value] = index;
        self.values[index] = value;
    }

    fn contains(&self, value: usize) -> bool {
        // May read an uninitialized slot of `indices` -- that is the point.
        let index = self.indices[value];
        if index >= self.len { return false; }
        self.values[index] == value
    }
}

Now, for all I know this violates some C rules. I haven't bothered to check, of course.
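For reference, the same check-and-double-check lookup logic can be exercised in safe code by zero-initializing the arrays — which of course forfeits the O(1) construction the trick is after. Names below are invented for the demo:

```rust
// Safe demo of the sparse-set ("check-and-double-check") technique.
// Zero-initializing `values`/`indices` gives up the O(1) construction,
// but the insert/lookup logic is identical to the sketch above.
struct SparseSet {
    len: usize,
    values: Vec<usize>,
    indices: Vec<usize>,
}

impl SparseSet {
    fn with_capacity(capacity: usize) -> Self {
        SparseSet { len: 0, values: vec![0; capacity], indices: vec![0; capacity] }
    }

    fn insert(&mut self, value: usize) {
        assert!(value < self.values.len());
        if self.contains(value) { return; }
        self.indices[value] = self.len;
        self.values[self.len] = value;
        self.len += 1;
    }

    fn contains(&self, value: usize) -> bool {
        // A stale `indices[value]` is caught by the bounds check against
        // `len` and the round-trip through `values`.
        let index = self.indices[value];
        index < self.len && self.values[index] == value
    }
}
```
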

@pythonesque
Contributor

@nikomatsakis Yes, that is the example I was thinking of.

@alexcrichton
Member

Sorry I have been a little slow to respond to this thread everyone!

@aturon

@alexcrichton BTW, I seem to remember you taking some measurements about the perf hit for IO. Any chance you could dig those up?

It looks like a TCP stream is slowed down by about 25%. Here I'm just writing tons of data on one thread to another over a TCP socket and printing out the amount of data received each second. The numbers at the bottom show the measurements for me at least. I wanted to get at least one "real-ish world" example (hence the TCP stream).

For a more "raw benchmark" I wrote some benchmarks for just reading some repeated data with zeroing both before and afterwards. It looks like the data is roughly the same in that the performance hit is about 20%.

I understand where you're coming from here, but as a rule, I generally prefer to avoid arguments from extremes/slippery slope arguments. We as a community get to decide exactly what safety covers, and thanks to the RFC process we can ensure that changes to this policy are deliberate and relatively clear.

I think this is a good point, and I definitely agree. I think I just wanted to make sure that we were aware of the situation it may put us in. When it comes down to measurements, though, 20% isn't all that much...


@quantheory

I think we've miscommunicated. My point is that this is true for some definitions of "memory safety", and trivially false for others, and there is (to my knowledge) no authoritative source for what the "true" definition is.

Ah yes good point, I think I see where you're coming from now!

@eternaleye

When it comes down to measurements, though, 20% isn't all that much...

For a systems language where the standard (IO?) library is focused on "zero-cost abstractions" I rather disagree...

In addition, I think @mahkoh's point about "relying on" this behavior is subtler than the point that was addressed in responses.

Specifically, this proposal would incur costs without the benefits for security, aside from avoiding the true nasal demons of deeply undefined behavior on reading uninitialized memory.

  1. Because this might be rolled back in the future, code that really cares will need to roll its own.
  2. Because in order to allow rolling it back in the future the value it's set to is unspecified, the potential still exists for non-zero values, bringing back the security issues of unpredictable data being read from uninitialized memory.
  3. Because it is being zeroed, code which (at the consumer-of-std-apis level) is guaranteed to never read uninitialized memory pays the cost anyway, and is thus incentivized to use unsafe {} (and at 20% cost, that's not going to be a short list)
  4. As a result of all of the above, we get:
    1. The code that cares about zeroing sees no benefit along any dimension
    2. The code that should care but doesn't handle the possibility might see a benefit, but that rug can be pulled out from under it any time (and the real benefit it needs isn't even part of the contract).
    3. Consumers will write more unsafe APIs, because std will impose a harsh performance penalty on anyone who doesn't do so. This gives up the aid of the compiler needlessly.

Basically, I see this as well-intentioned but misguided - in the pursuit of stronger security guarantees in specification, it'll result in weaker security guarantees in practice.

I'd suggest that straight unsafe {} should retain its current contract, and for more nuanced meanings of unsafe it might be worth introducing a sort of "tagged unsafe" construct - unsafe<values> for this kind of thing, unsafe<crypto> for finicky primitives, etc. (Syntax entirely open to bikeshedding)

@mahkoh
Contributor

mahkoh commented Feb 24, 2015

I think @mahkoh's point about "relying on" this behavior is subtler than the point that was addressed in responses

I don't think there is anything subtle about it. It has all been spelled out in this issue but people keep ignoring or evading it.

Consumers will write more unsafe APIs, because std will impose a harsh performance penalty on anyone who doesn't do so.

People using unsafe will still pay this penalty. I've already said this above: There is no way to use the Read trait without paying this penalty (unless a new unsafe method is added.)

@eternaleye

@mahkoh, I'm not saying the point is particularly subtle - just that it's subtler than the one that the answers addressed.

And yes, one of the places where it'd incentivize adding unsafe APIs is in the Read trait itself.

@l0kod

l0kod commented Feb 24, 2015

This seems quite close to the RAII property.

Uninitialized memory can never be exposed in safe Rust, even when it would not lead to undefined behavior.

👍

This RFC should open the door for more general memory sanitization. cc rust-lang/rust#17046

@mahkoh
Contributor

mahkoh commented Feb 24, 2015

This formulation is weak anyway as @aturon has later explained that it will be treated as UB. The real formulation should be

If uninitialized memory is passed out of unsafe blocks, the behavior is undefined.

@mahkoh
Contributor

mahkoh commented Feb 24, 2015

Just realized another funny thing: Since unsafe blocks end at safe function calls but not at unsafe function calls, if this proposal is accepted, you can never change unsafe functions to safe because some users might pass uninitialized memory to them. If you change them to safe, the behavior becomes undefined.

@aturon
Member Author

aturon commented Feb 24, 2015

Thanks everyone who has participated in this discussion so far. I'm reasonably convinced that the wording of the original RFC is problematic, and would like to take a step back and have a broader discussion about these issues as a community -- and hopefully hash out a policy that we can all agree on.

To that end, I'm closing this RFC in favor of a new internals thread. I hope to hear from strong proponents on both sides.

@aturon aturon closed this Feb 24, 2015
@carllerche
Member

I have a few points to make

First, I am somewhat on the fence about this RFC, but leaning towards a 👍. To be explicit, I think that I am OK with:

Set an explicit policy that uninitialized memory can never be exposed in safe Rust

Maintaining this invariant is possible in std::io without zeroing out any memory. The implementation of Read::read_to_end() never allows access to uninitialized memory.

However, this RFC doesn't actually address rust-lang/rust#20314. The original issue says that Read::read_to_end() needs to be marked unsafe because implementations could be buggy. It's basically saying that any code that, given a bug in the implementation, could possibly expose uninitialized memory to the user needs to be marked as unsafe.

So.... If I misunderstood the original issue, please forgive me, but this request seems very unreasonable to me. Virtually every single type in the rust ecosystem can expose uninitialized memory to the user given a bug in the implementation. And yes, this means that bugs in the implementation could be exploited by an attacker to access secrets in memory like passwords etc... but this is true for all code everywhere.

So, to recap, this RFC does not require any zeroing out of memory in std::io. The original issue this RFC links to is not reasonable (as I understand it).

EDIT: I did in fact misunderstand the original issue. I have posted an update in the forum thread.

@reem

reem commented Feb 25, 2015

The io::Read trait attempts to use an out-pointer for efficiency and to avoid double buffering - but this pattern is almost explicitly not supported in current Rust. The crux of the issue seems to be how you deal with incorrect implementations which read this out pointer instead of writing to it.

The flaw in this design is obvious if you try to generalize Reader and Writer to general types instead of just u8. It's obviously incorrect to pass an uninitialized &mut [T] to a function, but we sort of paper over this issue for the special case of T = u8, leading to this problem.

I think this special casing is actually incorrect, and just like with the placement new issues, we actually need more advanced language features to properly deal with this, particularly some form of &uninit or &out pointer. Without it, I don't see how this pattern can be made safe and generalized without implying the same O(n) overhead as with double-buffering.

An alternative fix/workaround for this specific case is for read to take a WriteOnly<[u8]> or similar wrapper which emulates &out, which allows only writing to the underlying type, with reading only allowed through an unsafe API or just forbidden outright.
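As a sketch, such a wrapper could look roughly like this (type and method names invented for illustration):

```rust
// Illustrative `WriteOnly` wrapper over a byte buffer: the holder can
// append bytes but has no safe way to read back what the buffer held,
// so a buggy `Read` impl cannot observe uninitialized contents.
pub struct WriteOnly<'a> {
    buf: &'a mut [u8],
    written: usize,
}

impl<'a> WriteOnly<'a> {
    pub fn new(buf: &'a mut [u8]) -> Self {
        WriteOnly { buf, written: 0 }
    }

    /// Append one byte; returns false when the buffer is full.
    pub fn push(&mut self, byte: u8) -> bool {
        if self.written == self.buf.len() {
            return false;
        }
        self.buf[self.written] = byte;
        self.written += 1;
        true
    }

    /// Number of bytes written so far; only this prefix is
    /// guaranteed initialized once the wrapper is dropped.
    pub fn written(&self) -> usize {
        self.written
    }
}
```

The caller then treats only the first `written()` bytes of the underlying slice as meaningful, which is the `&out`-pointer discipline enforced by a library type instead of a language feature.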

@eternaleye

@reem: Well, a "safe" option with a more forgiving contract might be to pass in an &mut Vec.

In order to be efficient, the callee trusts that there's at least some capacity reserved, and the caller trusts that read_to_end() will restrict itself to Vec::capacity().

But if the contract is violated, the cost is "Oh no, we had an annoyingly short outbuffer" or "Oh no, we allocated" rather than "Oh no, we read uninitialized memory."

Of course, that doesn't permit windowing a larger buffer for multiple reads like &mut [u8], so it's not a full solution (because you can't prevent a malicious Read impl from clobbering earlier data in the buffer, and can't "shrink" the buffer from the left either) - but I think that Vec::with_capacity() is probably a useful model for "How to do this right"

Heck, if someone makes a VecSuffix<'a, T> type, that'd possibly work out pretty darn well.
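One safe rendering of that capacity contract (helper name invented; note that safe code still has to zero the window, since it cannot hand out the spare capacity raw):

```rust
use std::io::{self, Read};

// Sketch of the `&mut Vec<u8>` contract described above: the caller
// reserves capacity, the callee stays inside it. Violating the contract
// costs an allocation or a short read -- never a read of uninit memory.
fn read_some<R: Read>(r: &mut R, out: &mut Vec<u8>) -> io::Result<usize> {
    let want = (out.capacity() - out.len()).max(64);
    let start = out.len();
    out.resize(start + want, 0);        // within capacity: no reallocation
    let n = r.read(&mut out[start..])?; // reader sees only zeroed bytes
    out.truncate(start + n);            // drop the unused, zeroed tail
    Ok(n)
}
```

When the caller honors the contract (reserves enough capacity), `resize` stays inside the existing allocation, so the only extra cost relative to an uninitialized buffer is the zeroing pass itself.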

@codyps

codyps commented Feb 25, 2015

@eternaleye yes, type wrapping (in some yet-to-be-determined way) is probably a sane solution. That said: I don't think we need to mix Vec in here, as read() shouldn't be triggering allocations. Knowing that read() is sometimes called by read_to_end(), which has a Vec, should be looked at as an impl detail. A less Vec-centric solution for the particular need of read() that we could re-use elsewhere would be ideal.

The key problem with type wrapping is that there doesn't seem to be a good way for the caller to say "if x, then the data I passed in will certainly now be initialized", and have the type system check it.
