
proposal: make it possible to catch failed memory allocations #14162

Closed
rgooch opened this issue Jan 30, 2016 · 27 comments

rgooch commented Jan 30, 2016

There's currently no way to allocate memory safely in Go: every allocation runs the risk of panicking the process. This makes it very difficult to write an application that automatically scales to fit the machine (or container) resources. Further, determining how much memory the OS has available for allocation is non-portable and unreliable, so an application can't simply wrap allocations in a function that checks whether they will fit. Any guess it makes about available memory can be under or over the real value, leading to wasted resources or OOM panics.

I realise that running deferred code when a memory allocation has failed could in turn fail, since the recovery code will also need to allocate memory. However, the common case is probably that a large allocation failed and there is still room for the small allocations needed for cleanup, so just allowing applications to catch OOM panics would probably help most of the time. If there is another memory allocation failure during recovery, kill the application.

A more complete solution would be to reserve a chunk of memory that is freed/made available during OOM recovery, which is re-reserved once recovery is complete. That would also allow effective recovery if a small memory allocation failed. As long as the recovery code uses less memory than the reserved size, this approach should be robust. The memory would be allocated with the normal internal mechanisms, so it wouldn't be "special". The OOM handling code would be the only place where a reference was kept.

This reservation could be done implicitly, but it's probably cleaner to require the application to enable this feature. That would ensure that applications that don't need OOM avoidance don't have to pay the price of the reserved memory. I suggest the following API:
func runtime.ReserveOOMBuffer(size uint64)

The OOM handler would drop the reference to the reserved memory, call runtime.GC() and then generate a normal panic() rather than panic+kill. The recovery code in the application would be expected to call runtime.ReserveOOMBuffer() again (probably after calling runtime.GC()).
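A minimal sketch of how an application might use this proposal, assuming the hypothetical runtime.ReserveOOMBuffer existed and the OOM surfaced as a recoverable panic (errOutOfMemory and the 64 MiB reserve size are illustrative):

```go
package transaction

import (
	"errors"
	"runtime"
)

var errOutOfMemory = errors.New("transaction aborted: out of memory")

const reserveSize = 64 << 20 // 64 MiB set aside for OOM recovery (illustrative)

func init() {
	runtime.ReserveOOMBuffer(reserveSize) // hypothetical API; does not exist today
}

func doTransaction() (err error) {
	defer func() {
		if r := recover(); r != nil { // the failed allocation surfaced as a normal panic
			runtime.GC()                          // reclaim whatever the aborted work allocated
			runtime.ReserveOOMBuffer(reserveSize) // re-arm the reserve once recovery is done
			err = errOutOfMemory
		}
	}()
	// ... allocation-heavy work ...
	return nil
}
```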

A less attractive option is to add trymake() and tryappend() built-in functions, which return a (value, error) pair, and a tryinsert() built-in which inserts into a map and returns an error value. I like this less because there are many other ways in which memory is allocated, so one cannot catch them all. It also requires changing a large number of call sites. Nevertheless, it would be better than the current situation.
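For comparison, call sites under these hypothetical built-ins might look like the following (none of trymake, tryappend, or tryinsert exist; the signatures are assumed from the description above):

```go
// Hypothetical built-ins; sketch only.
buf, err := trymake([]byte, 0, n) // like make, but reports allocation failure
if err != nil {
	return err // back off instead of crashing
}
buf, err = tryappend(buf, payload...)
if err != nil {
	return err
}
if err := tryinsert(cache, key, buf); err != nil { // map insert that reports failure
	return err
}
```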

rgooch (Author) commented Jan 30, 2016

Previous discussion on mailing list: https://groups.google.com/d/topic/golang-dev/wUEWhk2jtHM/

minux (Member) commented Jan 30, 2016 via email

rgooch (Author) commented Jan 30, 2016

Regarding the appearance of going backwards: it's not really the same as old-school memory management. Go would still be a fully garbage collected language, with all the conveniences that entails. Further, most people could just ignore the whole issue and keep playing the "I'm feeling lucky" game. Nothing changes for them.

Regarding making the runtime more complex: are you sure about that? What I'm suggesting is pretty narrow in scope. No "special memory" regions.

Regarding the emergency GC run: I haven't seen the proposal. If the suggestion is coupled with a recoverable panic(), so that the programme has the necessary feedback that memory is running out, this approach may help in many cases.

Regarding the panic handler not being able to do much to remedy an OOM situation: that is not correct. There are many classes of problems where there is a fairly high-level point at which an entire transaction can be aborted. For example, if I get a transaction request via an RPC handler, I would put the work which allocates lots of objects into a function and place a panic handler in there. The handler would drop the references to the new objects and call runtime.GC(). If the function returns a status indicating an OOM was caught, a failure status would be sent back in the RPC reply, and I could even apply some back-pressure on requests.

It's essential to have an effective feedback mechanism when allocating memory. Without that, an application which dynamically scales its resource usage has no robust way to throttle.
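A sketch of the RPC-handler pattern described above, assuming OOM panics were made recoverable (today they are not); Request, Response, buildResponse, and errOutOfMemory are placeholders:

```go
// Sketch only: assumes a failed allocation would surface as a recoverable panic.
func handleTransaction(req *Request) (resp *Response, err error) {
	defer func() {
		if r := recover(); r != nil {
			resp = nil           // drop references to the partially built objects
			runtime.GC()         // hand the memory back before continuing
			err = errOutOfMemory // the RPC layer turns this into a failure reply
		}
	}()
	return buildResponse(req), nil // the allocation-heavy work
}
```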

aclements changed the title from "Catching failed memory allocations" to "proposal: make it possible to catch failed memory allocations" on Jan 31, 2016
aclements (Member):

(This may morph into a runtime issue, but for now I think it's more of a proposal.)

aclements (Member):

I understand the desire for a memory backpressure mechanism, but I see some issues with the proposed solution.

For one, modern Linux systems (and probably other platforms) tend to make this difficult at the OS level itself. In practice, mmap operations generally let you overcommit memory (in fact, we depend on this on amd64) and things don't fail until you attempt to fault too much memory in. But at that point, the OS doesn't have much recourse but to kill your process. It is possible to configure Linux to disallow overcommit via /proc/sys/vm/overcommit_memory, though that tends to interfere with unexpected things (it may interfere with our own overcommit, though that may be fixable). But maybe you have to disable overcommit for any system that depends on memory backpressure, in any language.

Second, it's unclear what should happen if the failing allocation happens to be in a system goroutine. System goroutines don't allocate much, but they do allocate. Even the garbage collector has to allocate a bit for its work lists. Performing an immediate STW GC when allocation fails handles this situation much better, since it's a global solution to the global problem of an application being out of memory. A similar implementation-related problem is that if an allocation fails on a user goroutine but in the runtime, it's often on the "system stack" and making it possible to unwind a panic from the system stack would be a huge effort.

Finally, even if your defer releases its references to the memory used by a transaction, that memory won't be freed until the next GC, so allocations will continue to fail. Do you perform a full GC both before and after handling the panic? Do you require the recovery handler to perform a full GC?

rgooch (Author) commented Jan 31, 2016

1: I often run systems with overcommit off, because I would rather have the (C) application "run out" of memory a little earlier than have it killed when it starts using memory it was promised.

2: If emergency stop-the-world garbage collection were to provide feedback to the application, this might be viable. If the feedback is in the form of a function the application can call to detect that STW GC was performed, it would have to be evaluated to determine how effective it is in practice. If there is not enough memory that can be garbage collected to fulfill the memory allocations before the next check, there will still be a panic. Some obvious examples would be a large make/append of a slice or a call to one of the encoders. On the other hand, if the feedback mechanism is to generate a recoverable panic(), it gives the application the ability to stop offending allocations even inside library code.

3: If you read my opening email, I stated that the application recovery code would probably need to call runtime.GC().

minux (Member) commented Jan 31, 2016 via email

rgooch (Author) commented Jan 31, 2016

I think you're making this more complicated than it needs to be. It's up to the application programmer to pick the size of the reserved memory. Well-written recovery code is going to take very little memory. All it should do is drop references to data structures, call runtime.GC() and then re-reserve memory.

Control over every single function call isn't necessary. It's a common pattern that there are a small number of goroutines doing "housekeeping", and one or more goroutines doing the heavy transactions, and those are the ones that need to throttle. The reserved memory allows the housekeeping code to continue. All we need is a way for goroutines (or functions) to register that they want to handle an OOM panic. The heavy allocation functions register this interest and no-one else does. When the initial OOM occurs, the functions that registered interest will get the notification and stop what they're doing (just like with panic()), and all the others will continue, eating a little from the reserved memory.

I think that we can do this with a "soft panic", which only affects those functions that have registered. For a registered function, the notification has the same behaviour as a panic(). One possible API is a defer-like statement (perhaps onnotification) that a function can use to register interest. Every function up the call stack that registered would get the notification. Another possible API is a failifnotified() function which would return a bool indicating whether a notification was received while running the provided function.
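A sketch of the failifnotified() variant; everything here is hypothetical (no such mechanism exists, and processRequest, dropReferences, and errOutOfMemory are placeholders):

```go
// Hypothetical: failifnotified runs the closure and reports whether an OOM
// notification arrived while it was running.
if notified := failifnotified(func() {
	processRequest(req) // the heavy-allocating transaction work
}); notified {
	dropReferences() // let go of the partially built objects
	runtime.GC()     // hand memory back before taking on new work
	return errOutOfMemory
}
```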

aclements (Member):

> 1: I often run systems with overcommit off, because I would rather have the (C) application "run out" of memory a little earlier than have it killed when it starts using memory it was promised.

Interesting. Does Go have any problems running with overcommit disabled? It may be that large PROT_NONE mappings don't count against committed memory (which would make sense, but they do count against ulimit -v, which does not make sense).

> 2: If emergency stop-the-world garbage collection were to provide feedback to the application, this might be viable. ...

I don't think a recoverable panic is a technically viable solution for the reasons I mentioned in my comment (namely, allocations on system goroutines and system stacks), unless you're only interested in catching large failed allocations. So, what about the emergency GC approach? What sort of specific feedback would be useful to an application so it could scale to fit the resources in response to triggering emergency GCs? Would the latency of emergency GCs be a problem, or is taking a latency hit better than having the application die? Alternatively, could the runtime provide feedback or mechanisms that would be useful for staying under the memory limit, rather than trying to deal with the consequences once you've already run out (and is this only useful in a strictly single tenant system)?

> 3: If you read my opening email, I stated that the application recovery code would probably need to call runtime.GC().

I did read your opening email (how else would I have replied?). I have now re-read it a few times and I think your penultimate paragraph is proposing "yes" to both of my questions: that GC would happen twice during the recovery (both at the beginning and at the end), and that the second time would be triggered by the application recovery code after it had unwound and dropped references. Am I interpreting your proposal correctly?

aclements (Member):

Looks like our replies crossed.

> All we need is a way for goroutines (or functions) to register that they want to handle an OOM panic. The heavy allocation functions register this interest and no-one else does.

I see. I think this is how you're proposing to get around failed allocations on system goroutines as well (though it's still a problem for failed allocations on system stacks). Still, if the failed allocation happens on a goroutine that isn't registered for OOM panics, what happens? Do you pick a random registered goroutine to take the panic? Do you trigger the panic no matter what the registered goroutines are doing, even if they aren't allocating at the time?

I'll have to think about this a bit. I'm still interested in exploring alternative mechanisms.

minux (Member) commented Jan 31, 2016 via email

randall77 (Contributor):

Perhaps we're thinking about this the wrong way. Instead of asking "what do we do at OOM", how about "how do we let the application figure out that it is getting close to OOM"?

I could see runtime functions along the lines of currentMemoryInUse and maxMemoryAvailable. Heavy-allocating goroutines could check these two functions and back off or fail the work instead of allocating.

Another possible API would be a "reserveMem(bytes int64) bool" call that tells the runtime "expect me to allocate this much". If the runtime thinks it will be too much, it can return false, meaning "probably not going to be able to allocate that much, please back off".

I'm less sure how one might implement those calls, especially the "when is the OS going to refuse my next allocation" question. It seems difficult in general, but we may be able to come up with something. Maybe when reserveMem(...) is called, we ensure that we can successfully map & commit the heap space required for that much memory?
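A sketch of how a heavy-allocating goroutine might use such functions (none of them exist; the names are capitalized here as if exported, and estimatedBytes and errBackOff are placeholders):

```go
// Hypothetical runtime API; sketch only.
if !runtime.ReserveMem(estimatedBytes) {
	return errBackOff // the runtime doesn't expect the allocation to fit; shed the work
}
// ...or, with the polling variant:
if runtime.CurrentMemoryInUse()+estimatedBytes > runtime.MaxMemoryAvailable() {
	return errBackOff
}
```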

ianlancetaylor added this to the Proposal milestone on Jan 31, 2016
ianlancetaylor (Contributor):

CC @RLH @aclements

rgooch (Author) commented Jan 31, 2016

Oh, wow. So many points to reply to. Well, a healthy discussion :-)
I'll put all my replies into this one message.

1: I have not observed problems running Go code with overcommit disabled.

2: I'm interested in catching failures in large allocations as well as many small allocations buried in a library I don't control.

3: Emergency GC would probably work if I still got a notification that stopped processing in the goroutines I've registered and called my recovery code.

4: Latency of emergency GC is unfortunate but is better than the alternative (dying).

5: I don't think providing feedback when we're "close" to running out of memory is viable. It won't work on a multi-tenant system, and even on a single tenant system knowing how much is available is difficult and non-portable (even on Linux: think nested containers versus no containers). It's also a problem when allocations are done deep inside a library that one does not control. I think it's unrealistic to expect that lots of library code will be refactored to perform periodic checks.

6: Yes, my proposal is to GC at the beginning and the end of recovery.

7: Failed allocations in system goroutines are handled just like those in application goroutines that have not registered interest in recovering from OOM. The heavy allocators are the ones where there is a benefit to doing recovery (lots of allocations, OOM, recover, drop references and abort transaction). Everyone else should be able to get by with the memory that the emergency GC frees up. We rely on their cleanup to bring the system back from the brink of disaster.

8: A TryAndReserveMem(bytes int64) bool function has similar issues to (5): between when you ask and when you try to do a real allocation, the situation can change. There is the same problem with refactoring libraries. Further, it's difficult to know how much memory I'm about to use. Go allocates a lot of memory under the covers for housekeeping. That's really significant with large numbers of small allocations.

RLH (Contributor) commented Jan 31, 2016

The GC will do whatever it can to avoid an OOM, and as the GC matures it will get better and better at it. As this happens, the application's chances of recovering on its own become smaller and smaller.

Even today, attempts to recover seem doomed to failure. For example, the immediate cause of an OOM may be in a Goroutine that has little to do with the root cause of the OOM. This means that the innocent Goroutine that allocated the final object that caused the OOM would have to do global reasoning about the cause of the OOM and then correct the problem locally. Cascading OOMs across Goroutines only make reasoning about global properties and maintaining coherency harder. Algorithms with the ability to reason globally and react locally are complex and hard. The most obvious example of such an algorithm is the GC itself, which has to reason about the global property of reachability. Getting into an arms race with the GC about how to manage memory will waste a lot of energy that could be spent articulating and understanding real-world use cases that can then be used to improve the GC for everyone.


rgooch (Author) commented Jan 31, 2016

Let's posit that the GC is perfect: every time an object is no longer referenced, it is freed immediately and thus available for reuse. In this case, my original proposal to reserve a buffer for use during OOM recovery makes recovery viable. Functions which expect to do large allocations register an interest in being notified that we had to eat into the emergency buffer. If any allocation fails while the reserved buffer is sitting there, the reference to the buffer is dropped so it can be freed, notifications of "we are running on reserve power" are sent, and normal processing continues. The application can decide how big the reserve buffer is.

My proposal deals with the scenarios you describe.

Note that my proposal is not an arms race with the GC. The underlying issue is not with the behaviour of the GC. The underlying issue is the lack of a backpressure mechanism. The core problem is that the application has no way to find out that memory is running low and no way to effectively react (stop all allocations which are likely to be large).

beoran commented Jun 30, 2016

Fundamentally, the whole point of a garbage collector is that the programmer doesn't have to worry about allocating or freeing memory. The problem is that a garbage collector can't do anything sensible when it runs out of memory except try a sweep. If that fails, there's no way to decide what to do, since the allocation that failed could have come from anywhere.

The only way to avoid this is to allocate and free the memory manually. I could imagine the unsafe package gaining a few extra functions to do this, such as unsafe.Allocate(), unsafe.Free().
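A sketch of how such functions might be used if they existed (they do not, and the signature returning a byte slice and an error is an assumption):

```go
// Hypothetical API; sketch only.
buf, err := unsafe.Allocate(64 << 20) // ask for 64 MiB outside the GC heap
if err != nil {
	return errOutOfMemory // allocation refused: shed load instead of dying
}
defer unsafe.Free(buf)
// ... use buf ...
```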

davecheney (Contributor):

Fundamentally, the role of a garbage collector is to present to the programmer the illusion of an infinite free store.

If that illusion is broken, it seems reasonable to abort the program.


beoran commented Jun 30, 2016

Agreed there. That is exactly why I would propose non-GC memory management as a workaround for situations where this illusion is undesirable or even dangerous.

rgooch (Author) commented Jul 9, 2016

I don't see how one can write robust applications which scale to fit their resource constraints if one insists on maintaining the illusion of an infinite free store. Aborting the programme when an unknowable memory limit is reached is a terrible experience. Is no-one else here writing code which tries to make the most out of available resources (particularly memory)?

Right now one strategy we have for avoiding an OOM panic is to over-provision the VM/container. If we have a workload that has a steady-state requirement of 30 GiB, we need to provision at least 50 GiB so that we can handle demand spikes. We need significant headroom since we call other libraries we don't control which could allocate a lot of memory. We periodically check our memory consumption and call runtime.GC() to help out and if that doesn't work out we start shedding load ("we're full, come back later"). Determining the required headroom is basically measurement-based guesswork. We pick a number for the headroom, run the load for a few hours or a day or so and if there are no OOM panics, declare "victory", otherwise we try again with a higher number.

This approach is not one we're happy with. We're spending a lot of extra money on this "safety headroom". For many of our workloads we need to be prepared to shed load anyway, and that's just fine. Since we have to be able to shed load in any case, we'd be much better off being able to catch OOM panics so that we can abort a request rather than abort the whole programme.

If someone has a better idea for dynamic workload management which doesn't require massive over-provisioning of resources, I'd be quite interested.

The suggestion for unsafe.Allocate() and unsafe.Free() doesn't look like it would help, since we simply don't know with certainty how much memory a particular request will require. It depends on the request data and is not easily computable.

aclements (Member):

/cc @matloob, who is investigating handling memory-constrained situations and load spikes by giving the application more visibility into and control over GC pacing (not by catching OOMs, which I still think is not a technically viable option). The rough idea is that if we can give the application tight-loop control over the GC's heap size target, it can observe its own memory usage (and anything else that might affect this, like other processes in the same container), shed load if it gets too tight, and dynamically trade off more CPU for lower memory overhead by making GC more aggressive.

rgooch (Author) commented Jul 9, 2016

If I could get a recoverable panic when memory use hits a pre-defined limit, that would help. This would work OK for applications which are intended to consume most of the memory of a VM/container. It would best be coupled with a way to safely allocate (and dirty) memory until full, then free that memory and report back. That would allow an application to know how much memory is available in the VM/container and then set the limit based on that (e.g. 95%). Suggested API:

runtime.GetAllocateableMemory() uint64
runtime.SetSoftMemoryLimit(maxBytes uint64)

GetAllocateableMemory() should come with a warning that it will keep allocating memory until failure, can cause massive swapping (then again, only the foolish run with swap :-) and will throw out your page cache.
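Hypothetical usage of the suggested API (neither function exists in the runtime package):

```go
// Sketch only: probe the VM/container once at startup, then ask for a
// recoverable panic when usage crosses 95% of what was measured.
avail := runtime.GetAllocateableMemory()
runtime.SetSoftMemoryLimit(avail * 95 / 100)
```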

rgooch (Author) commented Jul 9, 2016

Note that I keep harping on recovering from a panic because I can't just sprinkle TellMeWhenImRunningOut() calls throughout all the code: I don't control all the code that I call. I need an out-of-band way to abort the current call chain and give up the transaction at a high level.

RLH (Contributor) commented Jul 9, 2016

OOM is a global property. The application could have a goroutine that wakes up periodically, checks some status, and if all is well goes back to sleep for an application-appropriate time. If there is memory pressure, then back pressure, such as not accepting any new requests, could be applied.
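Such a watchdog is already expressible with today's runtime, for example via runtime.ReadMemStats; a minimal sketch (the 30 GiB threshold and the check interval are illustrative):

```go
package main

import (
	"runtime"
	"sync/atomic"
	"time"
)

// shedLoad is read by the request-accepting code; while it is set, new
// requests are rejected with a "come back later" response.
var shedLoad atomic.Bool

// memoryWatchdog wakes up periodically, checks heap usage, and applies back
// pressure when usage crosses the given threshold.
func memoryWatchdog(limitBytes uint64, every time.Duration) {
	for range time.Tick(every) {
		var ms runtime.MemStats
		runtime.ReadMemStats(&ms)
		shedLoad.Store(ms.HeapAlloc > limitBytes)
	}
}

func main() {
	go memoryWatchdog(30<<30, 10*time.Second) // e.g. start shedding load above 30 GiB
	// ... start accepting requests, checking shedLoad.Load() per request ...
	select {}
}
```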

dr2chase (Contributor) commented Jul 9, 2016

How would you feel about, say, registering a channel to receive notifications about memory use at some well-defined time with respect to garbage collection? (This is my own bright idea, not necessarily endorsed by Richard or Austin.) The intent is to give you a relatively accurate record of memory use over time, rather than letting you know too late that you're in a bad way. You would use this interface to control how much work you allowed into your server, and/or to modify the amount of memory overhead (and, indirectly, the GC rate).
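No such registration API exists today, but something close can be approximated by using a finalizer as a GC-completion hook; a sketch (the non-blocking send deliberately drops snapshots if the consumer is slow):

```go
package gcwatch

import "runtime"

// sentinel contains a pointer so it is not handled by the tiny allocator,
// which could otherwise delay its finalizer.
type sentinel struct{ _ *int }

// Notify delivers a MemStats snapshot on ch roughly once per GC cycle by
// re-arming a finalizer on a fresh sentinel after each collection. This is
// an approximation, not an official notification mechanism.
func Notify(ch chan<- runtime.MemStats) {
	var hook func(*sentinel)
	hook = func(*sentinel) {
		var ms runtime.MemStats
		runtime.ReadMemStats(&ms)
		select {
		case ch <- ms:
		default: // nobody listening right now; drop this snapshot
		}
		runtime.SetFinalizer(&sentinel{}, hook) // re-arm for the next cycle
	}
	runtime.SetFinalizer(&sentinel{}, hook)
}
```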


rgooch (Author) commented Jul 9, 2016

Having a goroutine which periodically checks memory use, or which listens on a channel for memory warnings, doesn't do me a lot of good, because I need to kill off the code that's doing the problematic memory allocations.

While it's true that OOM is a global property, it's usually the case that transaction-processing code is where the bulk of allocations are being done, so it's that code that needs to be aborted/limited. This is why the panic-on-soft-memory-limit approach can help where other notification mechanisms would not. It's not a real OOM, it's just an early warning that you will soon be in trouble. runtime.SetSoftMemoryLimit() would affect only the calling goroutine. If you architect your application so that transaction processing is limited to specific goroutines, and those goroutines have asked for the early-warning panic, they can recover from those panics by aborting the transaction and hence freeing up memory. Everyone else runs as normal, since there is still memory available.

In a way, this is similar to my original suggestion of a memory buffer that's set aside and only used when there is an OOM panic, but it's probably simpler and cleaner to implement in the runtime. A key difference from the user perspective is that with the soft limit panic, you need to set a limit, rather than keep allocating until you actually run out of memory. It's not as automatic, but you can make do with this approach. It would be vastly better than the current situation.

bradfitz (Contributor):

I think we're going to close this specific proposal of a solution in favor of #16843 which is tracking the more general problem.
