
1 February, 2023 Meeting Notes


Remote attendees:

| Name                 | Abbreviation   | Organization       |
| -------------------- | -------------- | ------------------ |
| Andreu Botella       | ABO            | Igalia             |
| Ujjwal Sharma        | USA            | Igalia             |
| Ben Allen            | BAN            | Igalia             |
| Nicolò Ribaudo       | NRO            | Igalia             |
| Philip Chimento      | PFC            | Igalia             |
| Aditi Singh          | ADT            | Igalia             |
| Romulo Cintra        | RCA            | Igalia             |
| Waldemar Horwat      | WH             | Google             |
| Shane Carr           | SFC            | Google             |
| Santiago Diaz        | SDZ            | Google             |
| Frank Yung-Fong Tang | FYT            | Google             |
| Ashley Claymore      | ACE            | Bloomberg          |
| Daniel Ehrenberg     | DE             | Bloomberg          |
| Rob Palmer           | RPR            | Bloomberg          |
| Peter Klecha         | PKA            | Bloomberg          |
| Michael Saboff       | MLS            | Apple              |
| Dave Poole           | DMP            | Apple              |
| Josh Blaney          | JPB            | Apple              |
| Kevin Gibbons        | KG             | F5                 |
| Michael Ficarra      | MF             | F5                 |
| Richard Gibson       | RGN            | Agoric             |
| Chip Morningstar     | CM             | Agoric             |
| Yulia Startsev       | YSV            | Mozilla            |
| Eemeli Aro           | EAO            | Mozilla            |
| Daniel Minor         | DLM            | Mozilla            |
| Jordan Harband       | JHD            | Invited Expert     |
| Kristen Hewell Garrett        | KHG            | Invited Expert     |
| Sergey Rubanov       | SRV            | Invited Expert     |
| Duncan MacGregor     | DMM            | ServiceNow         |
| Chengzhong Wu        | CZW            | Alibaba            |
| Tom Kopp             | TKP            | Zalari             |
| Linus Groh           | LGH            | SerenityOS         |
| Istvan Sebestyen     | IS             | Ecma International |
| Luca Casonato        | LCA            | Deno               |
| Ben Newman           | BN             | Apollo Graph, Inc  |
| Ron Buckton          | RBN            | Microsoft          |
| Chris de Almeida     | CDA            | IBM                |
| Daniel Rosenwasser   | DRR            | Microsoft          |
| Justin Ridgewell     | JRL            | Vercel             |
| Willian Martins      | WMS            | Netflix            |
| Pieter Ouwerkerk     | POK            | RunKit             |

Async Context for Stage 1

Presenter: Justin Ridgewell (JRL)

JRL: AsyncContext for Stage 1. Remember this was last presented in 2020; there have not been a whole lot of changes in the core functionality. The same kind of API surface area that we're trying to go for is still here, though we have updated the API and the actual methods we're implementing. We have updated our explainer; hopefully this attempt will be a little bit better at explaining the use cases and the things that we cannot successfully implement without a feature like this. We've collected use cases from clients, from servers, and from browsers themselves for features they want to implement in DevTools and in web APIs, and we have been chatting with Node. We have been able to figure out how to make Node's implementation of AsyncLocalStorage even faster, and we are hoping we can land that in Node at some point in the future. We have been chatting with the SES folks to figure out what security concerns are introduced by the API and whether we could mitigate those in some way; we have done several meetings with SES. And we have an implementation, which we'll get to hopefully a little bit later in the bonus slides: a working implementation in Node that doesn't use async_hooks or AsyncLocalStorage. A considerable amount of legwork has gone on since the last time this was presented.

JRL: Let's start with the use case. I work for a platform company. We allow you to run your code on our servers, and we sit in between the engine and the developer. We are not the engine: we are not implementing C++ features and can't do anything super special, but we do have the ability to run JavaScript first; that's what the platform does. We're also not the developer: we can't control every little thing the developer is going to do. All we do is sit in between the two, provide integrations into our product, and provide additional APIs to make the development experience more pleasurable. One of the things we want to do is augment the console.log function. The use case here is simplified; what we actually want to do is more complicated than this, but I distilled it down to something that is easy enough to demonstrate in slides and that I can talk about without getting too in the weeds about what we actually want to do. The good thing is it hits the same bugs: we have to solve the same problem in order to do this. Essentially, we augment console.log so that when you call console.log we will print the elapsed milliseconds since the request started. That gives us a little bit of chronological timing of the console.log output, so we can figure out that this console.log happened 5 seconds after the request start, something is probably wrong, and we want to address that. We act as essentially the engine's listener: we register a WebSocket listener, or register a listener for ServiceWorkers, or something like that, and call into developer code, and the developer code can do whatever it wants in response to the request. That's the setup here. In order to augment console.log, one of the first things we could do is patch console.log so that the first parameter is the start time.

JRL: This is one of the things suggested the last time this proposal was presented: that you should change your function to accept the parameter that you need. This doesn't work for us. console.log is a standardized API and we can't modify it. Anyone using console.log as a standard isn't going to be calling it properly now; they will be unexpectedly broken. In this case, they're probably not going to update their console.log calls to pass in the start time: either they can't propagate the start time throughout the entire call stack, or they just forget about it, or maybe they're using third party code that is calling console.log as a standard and it hasn't been updated. They don't control the third party code even though they depend on it. This introduces an extreme burden on the developer. So we can't just change console.log. Maybe we could introduce our own log function: instead of doing console.log you have to call into a globally exported log function or something like that, and the first parameter to the log function would be the start time. Except we haven't really solved the burden. The developer still has to update all of their code in order to propagate the start time throughout the code base; that is difficult at times and impossible at other times, and they still have to remember to call our log function instead of the standard console.log function. And if they're using any third party code using the standard console.log, what do they do here?

JRL: Essentially, adding our own API with the parameter that we expect also doesn't really work. The only solution that we can really come up with is to patch console.log but not require a parameter. Instead of having a parameter, we hoist a variable out of the console.log function and manipulate that variable. In the request handler, at the start of the integration, essentially line 3, we set the start timestamp. Then we just call into the developer code. The developer code doesn't have to worry about it: they don't have to update and propagate the start time throughout the code base, they call the standard console.log, and it works. Whenever they call console.log, we figure out the start time that was initialized when the request happened and print it with the console.log output.
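
A minimal sketch of the hoisted-variable pattern described above, since the slide code isn't reproduced in these notes; the `onRequest` registration and the exact log format are illustrative, not the platform's real API:

```js
// Hoisted out of the patched console.log; set once per request by the platform.
let requestStart;

const originalLog = console.log;
console.log = (...args) => {
  // Prefix every log line with the elapsed milliseconds since the request started.
  originalLog(`[+${Date.now() - requestStart}ms]`, ...args);
};

// Platform integration point ("line 3" in the slides): set the start timestamp,
// then call into developer code, which keeps using console.log unchanged.
function onRequest(request, handler) {
  requestStart = Date.now();
  return handler(request);
}
```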

JRL: We can formalize this; why we do this will become apparent later. Instead of having the start timestamp variable and manipulating it directly, we can have a class that abstracts the ability to manipulate what is essentially a global variable. SyncContext (notice I'm stressing "sync", for synchronous) operates as kind of a stack. We are able to call context.run, which initializes and sets a new value on the current SyncContext instance, and at any point during the run – during the callback that I pass to run – I'm able to call the SyncContext's get and access the value that I set during the run. And you can have multiple SyncContext instances; you can run the same SyncContext instance multiple times, in a nested fashion, however you want. You will be able to access whatever value was last set for this SyncContext by calling get. This API is a little bit convoluted, and there are more straightforward ways to implement this for a purely synchronous context, but I will draw a parallel later on; it's important that we have something like this structure. If you have any questions about how this works, hopefully this will make it clearer. All that's changed since the previous attempt to augment console.log is essentially line 3 and line 6. Instead of having a start variable that was hoisted out, manipulating the start variable and accessing the start variable, we have a state which is a SyncContext. On line 3 I'm calling state.run, setting it to the current timestamp value, then passing in the handler function and invoking it. While the handler is being invoked, that timestamp value is held inside the SyncContext. If I call state.get on line 6, I will be able to access the value that I set on line 3. Again, the developer code does not have to change. They don't have to propagate the start time, they don't have to call a special function, they don't have to worry about anything. They do normal JavaScript stuff and I'm able to seamlessly augment the way that console.log works so it works better on the platform.
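
A sketch of the SyncContext described here (the slide code isn't in the notes, so the exact shape is an approximation): run pushes a value for the duration of a synchronous callback, and get reads the innermost value:

```js
class SyncContext {
  #value;

  run(value, callback, ...args) {
    const previous = this.#value;
    this.#value = value;          // push the new value for the duration of the callback
    try {
      return callback(...args);
    } finally {
      this.#value = previous;     // restore the outer value, stack-style
    }
  }

  get() {
    return this.#value;
  }
}

// The patched console.log can now read the start time via state.get().
const state = new SyncContext();
function onRequest(request, handler) {
  return state.run(Date.now(), handler, request);
}
```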

JRL: But there's a problem. This only works if code is synchronous, and if the server is only synchronous it's not much of a server anymore, not really useful. If any asynchronicity happens in the call stack, in the developer's function or some other function they call into, we will lose the sync context. Because of the way that the run function is structured, as soon as the synchronous execution of the callback is complete – which will happen as soon as the await happens on line 12, before the awaited promise has settled and before we progress with the rest of the function – the SyncContext run will have restored the old state into the world. So if you were to run through this code, state.get on line 6 is going to be undefined. The synchronous execution where we set the run value is complete, we pop it, and then at some later point this promise settles and execution continues for this doStuff function. When it later resumes and tries to do anything, it's in an asynchronous continuation that happens after that run's synchronous context, and so it's just completely broken for this use case. So this is almost a solution, but it's only a solution if you're completely synchronous, which we can't depend on.

JRL: So we need a slight modification: something that can preserve a context beyond an async/await boundary or a promise.then boundary. In this case, when we get to line 2 here and do context.run for this first invocation of the fn function, I'm setting a value of one into the context and also passing a parameter of one into the function. I can only do this because I own the function and can modify it the way that I want to. If we execute this context.run and jump into the function, then based on the explanation I have just given for the synchronous context, this first line 6 will work: the context will hold a value of one and the expected parameter is one, so the first assertion will work out. But then we hit line 12. We hit an asynchronous boundary. We pause the execution of this function, and so now we're done: the sync context goes back up to line 2, and at that point it restores the prior state, so if you were to read the sync context's get it would be undefined. Now we go back and continue to line 3. We want to re-run the same function, this time setting a two value into our context and passing a two value as the expected parameter into the function. If we execute the function now, the first assertion on line 6, asserting that the context currently holds a two value and our expected value is also two, will work out. But we hit another promise – maybe it's the same promise, maybe it's a different promise, but it's just a promise – so the synchronous execution of this function again becomes paused. We are done, we go back up to line 3, and continue on with the rest of the program. Nothing else executes. At some point in the future, those promises are going to settle, and so we jump back into line 12. For the first execution, after line 12 we need the prior state to be restored. In this case we need the state of one to be restored into our context variable, so that after this await happens, on line 14, context.get returns that one value and this assertion will correctly work. Obviously expected is still one because it's a closure-wrapped parameter, but we need the context to be integrated with this resumption of our synchronous execution. During the first invocation, when the promise resumes, we need to restore the 1 value; during the second execution, whenever that promise resumes, we need to restore the 2 value into the context. That's the only way this sort of API will work out for us.
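
A sketch of the behavior the slide walks through, assuming the run/get shape sketched above plus engine-level propagation across await (`console.assert` stands in for the slide's assertions):

```js
const context = new AsyncContext();

async function fn(expected) {
  // Inside context.run(), so the context holds the expected value.
  console.assert(context.get() === expected);
  await Promise.resolve();                      // synchronous execution pauses here
  // On resumption, the value that was current at the await must be restored:
  console.assert(context.get() === expected);   // still 1 for the first call, 2 for the second
}

context.run(1, () => fn(1)); // first invocation binds 1
context.run(2, () => fn(2)); // second invocation binds 2
```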

JRL: So we can formalize this again. This is almost the same code that I presented to you earlier; the only change on this slide is on line 2, where I changed the binding, the name of the class. That's it. This code works if we have language integration, and we actually already have the hooks to do this language integration. We have two hooks, one called HostMakeJobCallback and one called HostCallJobCallback. HostMakeJobCallback is hit before you hit await: it's the thing that preserves the current state of the world so that you can pause the execution of the asynchronous function and put it on the job queue somewhere. And when you're ready to resume that execution, you call HostCallJobCallback, and that restores the state of the world and allows you to continue execution of the asynchronous function. If we just integrate AsyncContext into those two host hooks, we have the ability to snapshot the current state of the world for our AsyncContext. We're able to snapshot the global storage variable here, which is the thing that holds the values of all AsyncContexts, and whenever HostCallJobCallback is called, when we're ready to resume after the promise settles, we can restore the state. We can put whatever was snapshotted in HostMakeJobCallback back into the global storage value, and now any AsyncContext will refer to the snapshotted values and it appears as if everything just worked.
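
A pseudocode sketch of the host-hook integration described above; `__storage__` stands in for the spec-level storage mapping AsyncContext instances to values, and the hook bodies are illustrative rather than spec text:

```js
// Called when a job (e.g. an await continuation) is created.
function HostMakeJobCallback(callback) {
  const snapshot = __storage__;            // capture the state of the world now
  return { callback, snapshot };
}

// Called when the job later runs.
function HostCallJobCallback(jobCallback, thisArg, args) {
  const previous = __storage__;
  __storage__ = jobCallback.snapshot;      // restore the captured state before resuming
  try {
    return Reflect.apply(jobCallback.callback, thisArg, args);
  } finally {
    __storage__ = previous;
  }
}
```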

JRL: So with a slight modification here, the only modification being AsyncContext on line 1, we're able to fully support any developer code, whether it's synchronous or asynchronous, and I'm able to completely augment console.log without forcing the developer to make any changes in their code base. It just works seamlessly for them: they call the correct standard console.log function, and because the asynchronous context was preserved through both synchronous and asynchronous flows of code, nothing has changed and nothing special needs to happen. Line 6 will just work based on whatever value I put in there on line 3. So I'm able to augment it completely without any work by the developer.

JRL: You might ask, couldn't you do this yourself? It's a little difficult. We can almost do it. As a platform, we can patch Promise.prototype.then. As part of the then function we can do that snapshot and restore: I can wrap, on lines 14 and 15, the callbacks that will be called at some later point, and on lines 6 through 10 snapshot the current global storage value, all in userland. Whenever the callback is finally invoked later on by the engine, I can restore the global storage value. Except this breaks very quickly. It only works if you use promise.then; if you use native async/await it's broken, because it's impossible to hook into the resume step of an async function. As soon as the await happens you're in a brand new tick and there's no way for you to hook into it: at least one tick will have happened since the last time user code was able to run and you were able to take control. So we would get into a broken state. It's just impossible to support native async/await if we try to force this to happen in userland. That's not a great experience.
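
A sketch of the userland Promise.prototype.then patch described above, using the same `__storage__` stand-in; this is exactly the approach that breaks once native async/await is involved, since an await never goes through the patched then:

```js
const originalThen = Promise.prototype.then;
Promise.prototype.then = function (onFulfilled, onRejected) {
  const snapshot = __storage__;              // capture the world at registration time
  const wrap = (cb) =>
    typeof cb !== "function" ? cb : (value) => {
      const previous = __storage__;
      __storage__ = snapshot;                // restore the world when the callback runs
      try {
        return cb(value);
      } finally {
        __storage__ = previous;
      }
    };
  return originalThen.call(this, wrap(onFulfilled), wrap(onRejected));
};
```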

JRL: But it does introduce us to the final kind of API that we need to support. Maybe you have a WebSocket, maybe you have a ServiceWorker, maybe you have an event listener or something like that, and you want to treat it as if it were an asynchronous continuation like a promise, even though it doesn't actually hook into the host machinery the same way a promise does. For a WebSocket, maybe you have a request and response flow to some server, and you want to snapshot the current state of the world when you make the request to the server and restore that state whenever the server responds. There's no way for the host – the web browser or whatever – to do it for you; you have to manually implement this kind of wrapping so that the context is propagated correctly. So we also have a wrap static method that allows you to do that. It takes a callback and snapshots the current state of the global storage value, and whenever you call that wrapped function, it restores the global state of the AsyncContext storage, so that the invocation of the callback will be able to see whatever the state of the AsyncContext was before that request happened, before the WebSocket ping happened, whatever it is.
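
A sketch of the wrap pattern described above; the WebSocket wiring and the `requestId` variable are illustrative:

```js
const requestId = new AsyncContext();
const socket = new WebSocket("wss://example.test");

requestId.run("req-42", () => {
  // Wrap the listener so it later sees the context from registration time,
  // not whatever context the message dispatch happens to run in.
  socket.addEventListener("message", AsyncContext.wrap((event) => {
    console.log(requestId.get(), event.data); // "req-42"
  }));
});
```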

JRL: wrap is essentially the same thing as run, with the modification that it's deferred. Instead of invoking the callback immediately as run does, it wraps it and allows you to defer the evaluation to later on. We're reusing, essentially on line 7 and line 14, the same run function, which is a private internal function of the AsyncContext that allows it to actually do the manipulation of the storage variable and the restoring of the previous storage value after the callback is completed. But it's not just server platforms that can make use of this. We also have use cases for browsers – for clients that are running in browsers and for client frameworks – for telemetry, where we're trying to collect data about how the application is performing. For instance, maybe you have a framework and it wants to provide you with the ability to calculate your INP value, Interaction to Next Paint, a brand new core metric that is being introduced. All it does is calculate the amount of time from when you perform an action until some browser paint happens, like you change the HTML on the page and restyle something; something happens after some interaction. Well, the framework could allow you to do this using AsyncContext: the framework code that sets up the event listener could, on line 3, set up a timer context and initialize it with the current time whenever this click event happens, and then call into user code. The user code is whatever the event listener is. At some point in the future maybe they finally call the patch function provided by the framework and give it new data they want rendered into the DOM tree, and when the framework decides it wants to paint the value into the DOM tree, it could get the current value of this timer. This example is similar to the server use case but is also applicable to code running in a browser, in a client, on some framework.
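
A sketch of the framework timing pattern described above; `userClickHandler` and `frameworkPatch` are illustrative names for the user's listener and the framework's render entry point:

```js
const timer = new AsyncContext();
const button = document.querySelector("button");

// Framework-owned listener: bind the interaction start time for everything downstream.
button.addEventListener("click", (event) => {
  timer.run(performance.now(), () => userClickHandler(event));
});

// User code: may await, fetch, etc., then ask the framework to re-render.
async function userClickHandler(event) {
  const data = await fetch("/data").then((r) => r.json());
  frameworkPatch(data);
}

// Framework code: when it gets around to painting, it can still read the start time.
function frameworkPatch(newData) {
  const start = timer.get();
  console.log(`interaction-to-paint: ${performance.now() - start}ms`, newData);
}
```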

JRL: And there are libraries that are trying to do this today. There's a library called OpenTelemetry: it tracks the elapsed time from a start event to an end event and gives you more capability, like the span ID and the trace ID and the parent span, and all of the other functionality that comes with it. But OpenTelemetry, which is used all over the place, has to implement its own Zone.js-style context implementation. It doesn't know what the user code is going to do. It needs to preserve this state through asynchronous boundaries the same way that I needed to preserve it across the boundary in my user code. Even libraries that are out there today need some sort of implementation like this in order to fix some of the long-standing bugs they have and to support telemetry and performance and things like that.

JRL: There are other use cases, including use cases for browsers themselves, for features that they want to implement. Chrome just came out with a new createTask API on the console object. It allows you to capture the current state at the moment of the task's creation: line 2 captures a snapshot of how you got to the task creation point and allows you to run the task at some later point, invoking the callback on line 5, and it stitches together this creation context with this run context in your DevTools. This is essentially AsyncContext: I captured the stack traces at some point, and at some later point I want to get that value back. We actually tried to implement an AsyncContext using createTask, but it doesn't expose its API to the user – this only happens in developer tools. You can't get the creation task through any JavaScript API, and it's funny how they just introduced an API that essentially operates in the same manner. There's also a new web scheduling API – or the task attribution, sorry, that's another part. There's a scheduling API being introduced that allows you to schedule work to happen as either high priority or as a background thing. So maybe we want to create a task that will do some background work: it will continuously ping the server to deliver telemetry, or update the ServiceWorker cache so content is available offline, or something. We want it to happen in the background and don't want it to affect the user's ability to interact with the page in any way; we don't want to hog the network if they have an image download or something like that. With the task prioritization APIs we could do that: we could post a new task with a background priority and our task could do what it wanted. It could do synchronous actions immediately and asynchronous actions in a task. The current task priority of background would be preserved throughout this entire task function; regardless of whether it is synchronous execution or asynchronous execution, it will be able to propagate the current priority forward. So this is a browser feature that actually needs AsyncContext.
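
A sketch of how a background-priority task's priority could be propagated with an AsyncContext, per the description above; `currentPriority` is an illustrative variable, while scheduler.postTask is the web's prioritized task scheduling API:

```js
const currentPriority = new AsyncContext();

scheduler.postTask(() => {
  return currentPriority.run("background", async () => {
    // Synchronous work, then asynchronous work; the priority survives the await.
    await fetch("/telemetry", { method: "POST", body: "ping" });
    console.log(currentPriority.get()); // still "background" after resuming
  });
}, { priority: "background" });
```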

JRL: That is it. I have bonus slides. If we have questions, I want to take those now.

KG: This I guess is not something which necessarily needs to be resolved before Stage 1, but I think it is helpful for clarifying the mental model here. My understanding is that what you want is something like: if I register an event listener within the async context – I did run, and then within the run, you know, I set my context and then I registered an event listener and then I exit the body of run – so presumably if the listener is for click, and someone clicks on the page at some subsequent point and the event listener dispatch happens that way, the context that I should see in the body of the listener code is the one in which I registered it; is that right to start with?

JRL: Yes and no. That's actually the expectation that I had, but Node's implementation does not work that way. With their AsyncLocalStorage you have to manually preserve the context if you want it. That is one of the reasons we exposed the AsyncContext.wrap static function: so when you're registering the callback as a framework or something, you could automatically snapshot the current AsyncContext state, and when you invoke the callback, when the click actually happens, that context would be restored for you by the framework automatically. So this is a point that we will definitely need to discuss once we get to Stage 1 or 2.

KG: I see. So I guess I have two follow ups. The first is – so for node, it does at least capture automatically if you are using await presumably.

JRL: Yes: await, setInterval, setTimeout, queueMicrotask – the things that are obviously continuations of the same task, but not event listeners.

KG: I see. So my second question was towards just getting at your mental model: if you want event listeners to automatically inherit the current context, or the context in which they were registered, would that still be true with synchronous dispatch? That is, dispatchEvent, which doesn't cause the event to be run on a subsequent microtask tick or turn but executes right there, like a function call – which context would that be, the one in which the listener was registered or the one in which the dispatch –

JRL: It depends on how you set up the registration. By default, if you were to not wrap your handler, you would get the context of wherever the dispatch happened. If you wrapped your handler before you called addEventListener – or maybe a framework would do it for you automatically – you would get what the context was at the point of registration.

KG: Okay. Those are coherent answers. All right. That’s all I had.

DE: Yeah, although those answers are coherent, I'm not sure if that's the way this will end up. I think when this was previously discussed, sometimes there were ideas about automatically wrapping some things; I believe that the prototype that's in Chrome already does some of this automatic wrapping, but I'm not sure of all of the details. Ultimately every single API that takes a callback, whether an event or not, has a choice between two main options: whether they're going to wrap the callback, or whether they're going to invoke it in the context it inherits. So I think different events might want to use different semantics. There's one particular event which will use very special semantics, and that event is unhandledrejection. We think that event should probably trigger in the context of where the rejection was made. But that's not when the function is called, because we have to wait until the microtask queue is flushed before triggering those handlers, at which point there won't be any real context left on the stack. So there has to be a third type of saving and restoring context. Ultimately there's a hope among – like, when I talked to Justin about this proposal, he was saying, you know, it's always a pain to figure out what the right context is to put in. This is kind of a source of churn in other platforms that try to achieve something similar, and I don't want to pretend this is obvious or simple, but these are inherent semantic differences that just have to be thought through on a case by case basis for things that take callbacks.

JRL: This is definitely one of the things we will discuss during Stage 1: event listeners, how to preserve context or not preserve context, and the options there. I'm not sure what the final API will be for this or for unhandledrejection. We have an issue for it – it's number 16 on the issue tracker – so that we can discuss how this should be implemented correctly in the specification.

DE: In particular, that issue is not just about the way the specification is written, but about a possible change in semantics versus how Node's AsyncLocalStorage works – one which we, as well as the Node community, are kind of reasoning would be a positive change. So expect to hear more about that in plenary, for better or worse.

JRL: Let’s go on to SYG.

SYG: Thanks for all the legwork you did in finding the other kinds of async propagation APIs that the web platform and Chrome are doing. I'm convinced of the utility here. You've done a good job of showing the use case, and everyone I've talked to is pretty convinced that something that is mechanically similar to RPC tracing is very important and has been asked for for a long time. Originally, we couldn't really do it: folks in V8 and Chrome thought it would cost too much, whether that's runtime performance or memory overhead. Because of the work that folks have been doing for APIs like task attribution (https://docs.google.com/document/d/1_m-h9_KgDMddTS2OFP0CShr4zjU-C-up64DwCrCfBo4/edit) and stuff like that, I think those concerns are mostly addressed. We're not entirely happy with the implementation in Chrome right now – I believe there's more room for improvement – but it's not categorical, "we can't do this", like it was before. I think chances are good we can implement something like this.

JRL: This is actually my bonus slides.

SYG: Okay, cool. But that said, the V8 team discussed some of this proposal and the implementation and performance ramifications, and we did still have some concerns. The most relevant one is probably: are you allowed to have an unbounded number of async contexts to be propagated?

JRL: Yes. So you could create a million instances of async context and run them all at the same time in a nested call back or something. One calls the other that calls the other that calls the other. You could have a lot, yes.

SYG: We have concerns there about the unboundedness.

JRL: Fair enough.

SYG: Maybe for performance, but mainly for lifetime management. We think that to support an unbounded number, you would end up with some kind of giant weak list, so as to not have it live forever while some task holds on to it and never needs it. Like, we would need some kind of global weak list, and garbage collecting giant weak things tends to add significant pauses because they have special handling. So that's something that maybe we can talk about in the design: do we need the expressivity of an unbounded thing, or could we have a bounded thing?

JRL: So it sounds like for the map storage we might need to limit the total number. There are some interesting options here: because I'm using essentially what is an immutable map, there are some automatic garbage collection APIs that could be implemented on top of it to hopefully alleviate some of the concern. This is something we can definitely explore as part of implementation in Stage 1 or 2.

SYG: Yeah, for sure.

DE: SYG, to be clear, you’re asking about the number of async context variables that are in parallel or like the depth of the run stack?

SYG: I'm asking about, when we do this propagation when the task is registered, what needs to be propagated. Because right now the things that need to be propagated on a task are scalar things, like the task attribution ID, or something like the backup incumbent, which is just a pointer. But if it becomes user programmable and we need to send a linked list of a bunch of async contexts, that's the thing where we're afraid of needing custom weak iteration or something like that.

DE: I hope it would be propagated in a single variable.

JRL: It is just a map and hopefully a WeakMap. I can’t clone a WeakMap in user land.

DE: I'm wondering why this has to be weak. Of course, we can make a cloneable WeakMap in userland with finalization registries; I guess the point is that has its own costs. Do we expect there to be cases where someone creates an AsyncContext and does a run inside of it and then doesn't ever access the variable? That would be the case where the collection occurs. But –

SYG: If you need the async context in some of the callbacks and not all of them – I mean, is there a simple implementation for that? It seems somewhat analogous to the context splitting problem of not entraining a variable in all of your closures, all of your inner functions, in the same function that creates all the closures. You don't want to propagate this thing on all the callbacks just because you have an async context but only use it in some of them. So we were thinking that to actually cut the lifetime down, in case a lot of stuff gets entrained in the async context, we would need to have some kind of weak logic built in.

DE: I think this comes down to use cases, whether or not that pattern will occur. Use a smaller number of async contexts: generally, if you have an application that needs to track async things, you should be using one, I think, with more complicated structures within that.

SYG: So I agree with that take completely, and that's why we're also wondering: if that's the expectation, and users can do their own muxing in userland, can we just limit the number?

DE: It's hard to understand what kind of API we would use for that. In general we should work towards APIs that are compositional, and having a global maximum is kind of weird. The variables are independent from each other – at least, I feel pretty strongly that there should be able to be more than one variable; they can just operate independently, for example those two that you already discussed that are in the browser already.

SYG: Right. But the core difference here is that this is user programmable, and we need to think through that. This is obviously not a block; I support it, I think this is good, the ecosystem obviously needs this. But this is just a performance concern to throw out now, that we want to kind of get ahead of instead of running into some big performance cliffs later. That's all.

DE: Thanks for explaining. That seems like a good thing to keep investigating.

RPR: We got through both of your items, DE?

DE: Yep.

MM: Just a note for SYG to follow up on offline, and for everyone interested in implementing this or trying an implementation: I just posted into the Jitsi chat a link to one particular file in an exploration that we have been doing with JRL on ways to think about the semantics of this and to explore it through different kinds of implementation. This particular file shows a transposed weak representation in which there's a WeakMap per AsyncContext instance, indexed by essentially symbols representing the context in which to look at the AsyncContext instance – the temporal context. I'm not going to try to explain it here. But the main thing is that it's not one big WeakMap; it's many, many small WeakMaps. As an AsyncContext disappears, it takes its bindings with it, without having to do the –

JRL: Ephemeron collection.

MM: Thank you. I was trying to remember the word. By doing the transpose thing, the case that needs to be cheap becomes cheap.

JRL: So there are a couple of different implementation strategies. They trade off the big-O cost of run, get, and wrap. MM's trades a very cheap run for a slightly more complicated get, but it does allow automatic collection very easily, which is a pretty nice perk. The implementation that I have shown prioritizes wrap and get and makes run a little bit more expensive. There are more advanced data structures that could be implemented on top of the actual map, or different structures for the map, that would have other trade-offs between wrap, get, and run, and they have slightly different behaviors for garbage collection. Maybe one of those would be a better approach.
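
A minimal sketch of one way to read the transposed representation MM describes (this is an approximation, not the file MM posted): each AsyncContext instance owns its own WeakMap keyed by per-run snapshot tokens, so run is cheap, get walks outward through the token chain, and dropping the instance drops its bindings without any global weak iteration:

```js
let currentToken = { parent: null };          // identifies the current temporal context

class TransposedAsyncContext {
  #bindings = new WeakMap();                  // token -> value bound by run() in that token

  run(value, callback) {
    const outer = currentToken;
    currentToken = { parent: outer };         // cheap: one fresh token per run
    this.#bindings.set(currentToken, value);
    try {
      return callback();
    } finally {
      currentToken = outer;                   // restore the previous temporal context
    }
  }

  get() {
    // Walk outward until this instance has a binding for some enclosing token.
    for (let t = currentToken; t !== null; t = t.parent) {
      if (this.#bindings.has(t)) return this.#bindings.get(t);
    }
    return undefined;
  }
}
```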

SYG: Real quick reply: ephemeron fixpoint iteration is the main thing we don't want, but custom weak GC iteration of any kind would be ideal to avoid. Maybe that's not possible. I'm very glad to hear that folks are exploring different implementation techniques, and there will be some experience to build upon when it comes time to prototype.

DE: I want to express support for this feature. I'm not in the champion group, but I've been chatting with the champions about this, and for one we see this as very useful within Bloomberg, both for the OpenTelemetry case on the server and on the client, where we need to track span IDs across async flows, as well as in a maybe more Bloomberg-specific case about hosting multiple applications in the same thread and being able to attribute the current application. Anyway, this will be great for us if we can have it for those sorts of use cases; this comes up both for our web interfaces and also our Bloomberg Terminal interfaces. So I strongly support this feature. I think the application to both the server and the client side is kind of clear and widespread. I hope we can work out the implementation problems – I still currently hope the solution will be that we find we don't need any of this weak stuff and that it works well enough.

BN: Hello. I also just wanted to express my support for this proposal, especially for Stage 1, and especially even in its current fairly limited API, because I believe that with just this there's a lot that we can build in userland, including systems that would allow the total number of actual AsyncContexts in use to be minimized (one could be turned into many) in a userland implementation. But I'll digress on that. I really wanted to let people know that, in order to understand the proposal better myself, I made a custom build of Deno (a V8-based JavaScript runtime) with native support for AsyncContext, using the underlying V8 context methods for continuation preservation, if you know about those. It's not perfect. I would like nothing more than for people to try it out (find the Docker image in the pull request that I opened on the proposal repository) and just go to town and send me any bug reports or problematic cases you can find – demonstrations of the problem/options with event handlers, for example. Anyway, I'm really excited about that, and this proposal, and I support Stage 1.

DE: I just want to make a public service announcement: if you maintain a JavaScript environment, please do not prematurely ship AsyncContext. Things about it may change, and are likely to change. I know that for a lot of server environments it's quickly becoming essential to support something that provides this functionality. In WinterCG, James Snell has drafted an AsyncLocalStorage subset (linked below) that would map to AsyncContext. You know, please get in touch, or join the WinterCG Matrix, or just look on the API repo for more information.

BN: I just want to say I absolutely agree with that. I’m not an implementer and not shipping this. Please don’t use it in production. Please do use it to test your intuitions and have a conversation about the proposal.

DE: I’m really impressed by your work and excited about it.

RPR: "Please don't prematurely ship" seems to apply to most stage zero proposals, yes, especially this one.

DE: This is risky given the way people are talking about it. There are questions like: should we implement the AsyncLocalStorage subset and ship AsyncContext? I've been kind of in those forums saying no, please don't do that. I'm repeating it here.

RPR: URL from the queue: https://github.com/wintercg/proposal-common-minimum-api/blob/main/asynclocalstorage.md

JRL: That's WinterCG's minimal subset of AsyncLocalStorage. And as DE said, everything that is implemented in this minimal subset can be represented with AsyncContext, or vice versa; they're essentially similar APIs with slightly different names for the methods.

SYG: Just to add on to what DE was saying about please don't prematurely ship: yes, please don't do that. And yes, it is true that V8 currently has a field in the promise reaction called ContinuationPreservedEmbedderData that is exactly made for storing this kind of data. But it is kind of un-multiplexed right now: it just preserves whatever you put in there. This is okay because all the current kinds of data that are preserved and propagated this way are not user programmable – things like the priority that we want to do, and things like the task attribution ID. When it becomes user programmable with AsyncContext, this might need to change. So please prototype away, but the API may shift under your feet as we get to actually implementing this.

JRL: I would love to have you as part of our discussions with the Node folks. We're actually looking to reimplement AsyncLocalStorage using the continuation preserved embedder data to solve the performance concerns that AsyncLocalStorage currently has.

SYG: What is your timeline for that? I feel like, yes, I should be in the loop for that. Given it is Stage 1, what were you thinking?

DE: I think this is checked into top of tree in workerd, not in Node – but in workerd. I don't know. As long as we upgrade to the proper – as long as Node upgrades to the proper API when it rebases, is there harm done?

SYG: It's more work on the embedder side. If they're willing to do that, it seems fine. I'm just putting the PSA out there: don't expect API stability in this space, because we have never had this be user programmable.

DE: How about we note both of these warnings in the conclusion so they’re amplified.

JRL: Okay.

CP: +1 for the feature and similar use cases as Bloomberg. Thank you.

MM: So, a strong endorsement for this proposal. Justin and the SES group, and a separate group of capability experts called Friam, have had meetings; I posted to the Jitsi chat a YouTube playlist of the various meetings in which we explored this. There are some really deep computer science issues here in understanding what the semantics of this are and in what sense it does or does not introduce communications channels, and it took us a long while to understand why this is not a threat to the security properties. We actually wrote an attack, and the attack is extremely narrow: it is an attack in the sense of violating what seem to be object capability security properties, but the consequence of doing that doesn't violate anything we care about, and we have a model for thinking about what it does violate. That model – the clearest form of the model – also leads us to the conclusion that the overall temporal context being preserved by wrap should be cross-realm, per-agent, and that there shouldn't be a separate temporal capture by wrap that's per-realm. Doing it cross-realm has better modularity properties and no worse security properties.

JRL: Thank you. We did several meetings, in fact, trying to analyze this. There's a YouTube playlist from Agoric that goes over it. Essentially we found it's only an information leak; it's not a capability leak. You can't actually leak an object reference or anything like that. And it's easily defended against: you just have to be aware of the fact that you're running in a system that has an API like AsyncContext. https://www.youtube.com/watch?v=vECr5IDJzpg&list=PLzDw4TTug5O36HZTvv3OXaysvN__lky61

MM: Also, the sense in which it's an information leak is much narrower than what we originally expected. It can only leak information between entities that were able to communicate information anyway. It does not introduce the ability to communicate where previously there had been isolation; it's only an issue of the degree to which the communication is observable.

RPR: We are at the end of the queue.

JRL: If possible I would love to go for Stage 1 now. If not, I want to answer whatever questions. So please is there any support for Stage 1?

RPR: Yes. In the Jitsi chat people are choosing to thumbs up: Ben Newman, CZW, Dan, Mathieu Hofman, HAX, ABO. I think we had about 7 messages of support during the discussion. I would say, Justin, you have struck gold here. This is very enthusiastic support for Stage 1. Congratulations.

JRL: Thank you so much.

DE: Are there any concerns that anybody has beyond what’s been expressed? Any suggestions of what to look into?

MM: One concern I have is not with regard to the semantics of the proposal, but with regard to how the semantics are expressed in the spec. There's an editorial mistake, if you will – a non-normative mistake – we made in the way we wrote down the semantics of registered symbols, which is that we wrote the semantics down as if there's shared global mutable state, and it's a very subtle theorem that that global mutable state cannot be used as a communication channel. It would have been easier for us if we had written down the semantics in a way that made it obvious there's no global communication channel. I think we should take some of the exploration that we have done about different rewrites for modeling the semantics here, and also do some of that exploration with regard to how to write down the semantics in the internal spec language, so as to avoid the appearance of more shared mutable state than is actually implied by the semantics.

JRL: Okay. Happy to work on that. We haven’t written any spec text yet. But that would come as part of the Stage 1, Stage 2 process.

SYG: MM, what are you talking about? The ECMAScript spec, where you consider there's an editorial mistake?

MM: The editorial mistake with registered symbols is that the symbol registry is written as if it's global shared mutable state, and then it's a subtle theorem to derive that there's no communication channel.

SYG: Help me understand the relevance to async context.

MM: Yes. The models that JRL presented of the semantics, using code in the presentation, are in terms of manipulation of the __storage__ variable, and furthermore the maps held on to by the __storage__ variable lead to the bound values. Starting from there, the notion that this should be cross-realm – and thereby also cross-ShadowRealm – shared mutability is extremely scary; starting from that model, I would be very scared to make this a cross-realm thing. But it's only that the model makes us think in terms of shared access to mutable state, which is not needed to account for the semantics of what's actually proposed here.

JRL: So the simplified model that I’m presenting in the slides makes it seem like there’s an actual shared mutable state when in fact it can be written in terms of a way that it’s not shared mutable state.

MM: Correct.

SYG: Okay. I’ll look for it in the spec draft then. I don’t think I quite got it from just being explained.

JRL: So we have a repo; please open any issues you may have on the repo, and we will transfer it to the TC39 org when this presentation is done. We have a Matrix chat that I would love to make public so that other people can join and discuss whatever they want. And I think that's it. Thank you so much.

Conclusion

  • AsyncContext is promoted to Stage 1, with explicit support from several delegates.
  • Future work will be needed to: investigate the optimizability of this proposal as it scales to more variables, in particular to avoid memory bloat, possibly through limitations on the creation of AsyncContext variables; develop a definition of the semantics of how this proposal interacts with various environments, e.g., how AsyncContext is propagated across events on the web; and consider editorial improvements to the ECMAScript spec to ensure coherence given that AsyncContext is inherently cross-realm and per-agent.
  • PSA: Don’t ship AsyncContext in your environment yet, as the API shape may change. If you need something for this capability right now in your environment, consider https://github.com/wintercg/proposal-common-minimum-api/blob/main/asynclocalstorage.md
  • PSA: V8 advises against shipping usages of ContinuationPreservedEmbedderData, as it does not handle multiplexing multiple usages and is likely to change in the future as we consider AsyncContext.
  • You are welcome to join our public Matrix chat for this topic.

ArrayBuffer transfer for Stage 3

Presenter: Shu-yu Guo (SYG)

SYG: This is ArrayBuffer transfer and some friends: some related getters and functions. So, the recap. This is a proposal that was broken out of the resizable buffers proposal into its own proposal, and the reason was that the original semantics, as part of the resizable buffers proposal, did not preserve resizability when transferring an ArrayBuffer. We found that confusing and surprising. So this was broken out to give some more time to change the behavior, and in the process it was demoted from Stage 3 to 2 at the plenary last year. It's fairly small – just adding 2 methods and one getter. I think it has had enough time to bake since then, and I am re-presenting it for advancement back to Stage 3 with the new semantics.

SYG: The new stuff to add to ArrayBuffer is: transfer, which makes a copy of this buffer – meaning literally this, the receiver – then detaches the receiver and returns the copy; and transferToFixedLength. If you were someone who read one of the earlier drafts of the proposal, I had called this fix, and everyone else thought that was a terrible name, so it was changed to transferToFixedLength, which is what it does: it behaves like transfer, but returns a non-resizable buffer. I will go into the cases of when resizability is preserved and what that means in future slides. The getter is detached. This is the authoritative way to find out if an ArrayBuffer is in fact detached; we do not have this in the language currently. All right.

SYG: So, for motivation, why would you want transfer? In this example, consider you have this validateAndWrite function, where the validation is expensive. You await for the validation to finish and then write the ArrayBuffer data to some disk file, or you persist it to some storage. The way you would use it is like below. The problem is that there's a bug in this code: the validation is asynchronous, so depending on how things are timed, that setTimeout could overwrite the data to be written to disk in the ArrayBuffer after the validation. Between the two awaits in the validateAndWrite function, the timeout could mess you up, basically. After the two awaits, that timeout would have overwritten the data; it's in fact not safe. What can you do today to get safety? You can copy. You can copy with slice, but this is slow because you have to do a copy. With transfer, what you can do is take ownership. This is a very limited notion – I am using a lay definition of ownership; now that Rust is on the scene there's a more sophisticated notion of ownership – but we have a simple notion for ArrayBuffers, which is detaching. With transfer you can make this faster by transferring, which detaches the original, then validating the transferred thing and writing that. Because of lexical scoping, because there are no closures over the transferred ArrayBuffer, you are assured that after the asynchronous validation and firing off the asynchronous write, the data is exactly as you expect it.
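
A sketch of the example described above; `validate` and `writeToDisk` are illustrative stand-ins for the expensive validation and the persistence step:

```js
const validate = async (buf) => { /* expensive asynchronous validation */ };
const writeToDisk = async (buf) => { /* persist the bytes somewhere */ };

async function validateAndWrite(buffer) {
  await validate(buffer);     // while this is pending, someone else can still write into buffer
  await writeToDisk(buffer);  // so the bytes written here may not be the bytes validated
}

const buffer = new ArrayBuffer(1024);
validateAndWrite(buffer);
setTimeout(() => new Uint8Array(buffer).fill(0xff), 0); // can race with the validation above

// Safe today, but slow (full copy):     validateAndWrite(buffer.slice(0));
// With this proposal, safe and cheap:   validateAndWrite(buffer.transfer());
// transfer() detaches `buffer`, so the setTimeout above would throw instead of racing.
```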

SYG: So why is this faster, if transfer copies? It's specified as a copy. But if you call transfer without changing the length of the ArrayBuffer, it can be implemented much more efficiently under the hood as a zero-copy move. And even some calls that change the length can be implemented more efficiently than a full copy: if things are aligned in a certain way, you can grow the buffer without having to allocate new physical pages, and so on. So even though the spec reads as a copy, it can be implemented more efficiently. Going into what the actual semantics are: it takes an optional new length argument. If you don't pass the new length, it's set to the current length, so it transfers to a new ArrayBuffer of the exact same length. If you pass a negative length it will throw. We preserve resizability: if the receiver buffer is not resizable, the returned buffer is not resizable, and if the receiver buffer is resizable, the returned buffer is also resizable and has the same max byte length. Resizable buffers are resizable up to some maximum, and this maximum is preserved by transfer; if the new length you pass in is greater than the max byte length, you get a RangeError. Any new memory in the new buffer is zeroed.
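
A small illustration of the semantics just described (the sizes are illustrative):

```js
const rab = new ArrayBuffer(8, { maxByteLength: 64 }); // resizable, max 64 bytes

const moved = rab.transfer();      // same length (8), still resizable up to 64
console.log(rab.detached);         // true: the original is now detached

const grown = moved.transfer(16);  // length 16; bytes 8..15 are zero-filled
// moved is now detached; grown.transfer(128) would throw a RangeError (128 > maxByteLength 64)

const fixed = grown.transferToFixedLength(); // same bytes, but no longer resizable
```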

SYG: transferToFixedLength behaves exactly like transfer except that it returns non-resizable buffers. Here I have a cheat sheet for the range conditions on the new length for these 4 cases. If you transfer a resizable buffer, the new length must be greater than or equal to zero and less than or equal to the maximum byte length. In every other case – transferring a fixed-length buffer, or transferToFixedLength from either a resizable or a fixed-length buffer – all you have to remember is that the new length must be greater than or equal to 0; aside from implementation-defined limits on buffer sizes, there's no max for those other three cases.

SYG: The other friend is the detached getter. It's just a getter, without a setter, so you can tell if the buffer is in fact detached. Because of the history of how ArrayBuffers were specified, detaching – and figuring out whether something is detached – was handled confusingly; it was omitted from the original spec drafts for buffers to be able to detect if they have been detached. It was confusing because, like, how do you observe detachment? Some methods throw, others return sentinel values. When TC39 took over the spec, the intent was that it should throw, but at that point implementations didn't update. A few years ago, RKG from PlayStation PR'd and got consensus for normative changes to reflect reality, where on detached buffers we codified the sentinel values for getting indexed elements. That's all to say that the current state of detecting whether something is detached is complicated, but it's useful to know if something is detached, so we are adding a detached getter. It might be good to mention that engines all have this in their engine APIs anyway, and Node maybe exposes something to user space, but I am not sure. Overall, it's a good small thing to have.
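
A small illustration of the detached getter:

```js
const buf = new ArrayBuffer(16);
console.log(buf.detached);   // false

const moved = buf.transfer();
console.log(buf.detached);   // true: transfer() detached the original
console.log(moved.detached); // false
```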

SYG: One question I want to address here is a discussion item that came up last time from MAH: why not copy-on-write buffers? This is in the service of performance: if we ignore the detach part (there is no other native API to detach something), this is performance that could be transparently had if you implemented copy-on-write ArrayBuffers. Instead of transferring, if you just kept making copies, but underlyingly each copy is copy-on-write, you don't incur the copy until you mutate the buffer. Wouldn't that solve the problem? On paper, yes it would. But the problem is: why has V8 not implemented copy-on-write buffers? We consider it an important security mitigation to have the data pointer – this is like the C++ pointer to the ArrayBuffer's data – be fixed. After you allocate an ArrayBuffer object, you don't want the pointer to move; it's an important optimization that we bake that pointer into JIT code for performance. If there are mistakes in the JITs, and we have baked that pointer in, and we then move the pointer due to copy-on-write, that opens up a whole class of letting-you-access-arbitrary-memory bugs. So we consider it an important security mitigation to have the data pointers of ArrayBuffers be fixed. This is the same reason why resizable buffers have a max length: we want the data pointer to be fixed after allocation. If you want to implement copy-on-write ArrayBuffers, the portable way to do that is to move the data pointer: you originally have it point to the original backing store, and upon first mutation you do a copy and then repoint everything to the new copy's data store. That kind of move would destroy the security mitigation, and for this reason we have never implemented copy-on-write buffers and we don't plan to. If there were ways to do this without moving the data pointer, in theory we would be open to it after assessing the complexity, but the only way to do it without moving the data pointer requires deep integration into each OS. It doesn't seem like a complexity we want to take on. We do have copy-on-write arrays, because the same security mitigation concerns don't really apply to arrays.

SYG: So before I move on to the open question for API design alternatives, I will turn to the queue, any questions about what is presented so far?

MAH: So I have nothing against adding the transfer API – I want to get that out of the way. However, I would really love it if we could actually check whether it's possible to do copy-on-write, because it's a valuable optimization for any code. And I am not the only one to think that; I have seen conversations on Twitter of people asking for this and why it is not there. One of the things none of us understand is: if there is a detached check today for ArrayBuffers, how is there not a "this array was copied" flag? Is that not the equivalent of the detached check? Obviously, if the ArrayBuffer becomes detached you can't blindly follow the pointer to the data. So I don't understand the security argument here. And yeah.

SYG: I don’t think detached buffers get freed immediately, do they?

MAH: I mean, if you detach the buffer, you cannot use the ArrayBuffer to access the backing data immediately.

SYG: In the language, that’s true. I am talking about – this is not a question of impossibility. This is a question of, among all the things that could go wrong in a complicated VM like V8, if the JITs embed the wrong pointer, we don’t want it to move after detachment. If we don’t free it and reuse the virtual memory pages right away, we still have this guarantee. If the JITs cannot be trusted, we want more mitigations on top that build confidence for security.

MAH: If the buffer is detached, code is not allowed to access the ArrayBuffer’s data anymore.

SYG: In a bug free implementation. What if the JavaScript implementation is buggy?

MAH: Yes. I mean, you already have an invalid execution here.

SYG: So what? We still don’t want a renderer escape –

MAH: Are you saying that an invalid JavaScript execution is okay, but if the pointer got moved somehow, that would be worse than invalid JavaScript?

SYG: Yeah. That’s what the security team is saying. This is a blast-radius kind of thing. There is no binary point where, okay, the JavaScript execution is buggy, therefore anything goes. In light of a buggy execution, how do we limit the kinds of exploits that are possible?

MAH: I don’t understand, either, because for copy-on-write, by definition, you are making a copy, so the original pointer is pointing to data that was held there. You are not going to go anywhere wild; you’re still pointing at data – instead of the copy, you may end up reading something that was mutated. So what I don’t understand is that you still end up in an invalid JavaScript execution, but you don’t end up in uninitialized memory.

SYG: If you transfer to a different length – if the API were limited so that you could only transfer to the same exact byte length –

MAH: I am not talking about transfer but ArrayBuffer slice. You make a copy of your ArrayBuffer. But you don’t want to incur the costs of allocating new memory. So both ArrayBuffers actually point to the same memory behind it, but they have a guard on any write operation to make a copy at that point.

SYG: That’s correct. Yes.

MAH: So at worse, if you have a bad implementation here and in the JIT, it will point to the original buffer until you get to point to the..

SYG: And the copy and the original ArrayBuffer have independent lifetimes.

MAH: Yes. But . . . that would be the same for the detached – like, at that point, you’re in the same problem of the detached ArrayBuffer that got detached in that – and now the –

SYG: But because the detached – sorry. Okay. This might be better taken offline.

MAH: Yeah.

SYG: I am not sure how much value there is in hashing this out for the plenary to hear. We are not arguing impossibility. They decided this was not worth their time, and you are saying that maybe, if you look at it a certain way, it’s fine – and that could very well be the case. I haven’t dug into it deeply. I don’t think anyone has done a really deep dive and said: here is how you would implement it, it’s not too complicated to maintain, and it won’t cause any more bugs. If you can convincingly make that case, we could implement it. That hasn’t been the case thus far, and I don’t really know what to say other than patches welcome. If you write a design document and the security folks are convinced, we could have this optimization.

MAH: Okay. I would like to hear that the options are still open at least. Thanks.

DE: There are some higher-level ideas for how to deal with the patterns that transfer is currently intended to solve. In particular, there is this kind of terrible choice where an API either grabs ownership of the buffer or copies it; those are bad trade-offs. So although I will support this going back to Stage 3 as the change is proposed, if we do want to work on a better solution – one that deals more generally with handing off objects – Domenic has an interesting gist that people could look at. It doesn’t involve any of this copy-on-write stuff and is based on ideas from Matthew.

SYG: A generic taker thing would be pretty cool.

DE: Yeah. If anybody wants to champion a proposal, I would be happy to work with you on that.

SYG: Yeah. I guess we don’t have too generalized a need. The mechanism would be great to have in the language; I’m trying to convince myself it has broad utility. We have ArrayBuffers. If we add more things like that, then yes, let’s do it.

DE: Yeah. I can’t think of anything else that makes sense to deal with that way. This is basically like, it’s basically like a run time borrow checker. But very, very simple.

SYG: Yeah. Yeah. Do check out that gist, folks. It’s nice and simple. Thanks.

SYG: Before moving on to asking for Stage 3, there is an open issue, number 6, about an API alternative. I am proposing here these two methods with basically identical functionality, except one returns a fixed-length buffer and one preserves the resizability of the receiver buffer. What if you had one single method? If you had one single method, how would you communicate to the API that you want to preserve resizability or that you want a fixed length? You would probably want an options bag. The pro of having one method is the increase in flexibility. The con is that for the fixed-length behaviour you have to pass undefined to get the current byte length of the receiver, so it’s maybe slightly less ergonomic for the common use case. I am on the side of keeping the current design of two separate methods: transfer, being the majority use case, having just one single optional argument with no options bag, and the longer transferToFixedLength with the same signature but the fixed-length behaviour. I feel that’s a little bit better, but I don’t feel super strongly. I don’t think this kind of method needs the options bag. But those are just API design opinions; I’d be happy to hear if folks have thoughts here.
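
An illustrative sketch of the two shapes being weighed; the options-bag variant is hypothetical and not part of the proposal:

```js
const rab = new ArrayBuffer(8, { maxByteLength: 64 }); // a resizable buffer

// Current design: two separate methods.
const a = rab.transfer();             // result keeps the receiver's resizability
const b = a.transferToFixedLength();  // result is a fixed-length ArrayBuffer

// Hypothetical single-method alternative with an options bag:
// rab.transfer(undefined, { fixedLength: true });
```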

JHD: Yeah. SYG and I talked about this with the name change, the previous name was less clear. I don’t think it matters which choice we pick. I thought it was worth getting a temperature check of the room as to whether one more complex method or two simpler methods is preferred - and either one is fine.

SYG: Okay. I will give it more time for folks if they have opinions. If there are no strong opinions, I would prefer to keep the current configuration of transfer and transferToFixedLength. Taking that as nobody having opinions here, I will move to asking for Stage 3. Thank you for the last-minute review call, PHE and RBN – thank you very much for doing the review and suggesting editorial improvements, and thanks to MAH, who also had good editorial improvements. The reviewers have signed off on this. I would like this proposal to get back to Stage 3.

MM: +1. Yeah. I like it. I am glad that you’re open to the issue that Matthew is raising; I would like to see that investigated. And with regard to the method question, I am also on the fence, but I am perfectly happy with the two simple functions. I think I slightly prefer that anyway.

SYG: Cool. Thanks, MM.

SYG: All right. I will take that as Stage 3. Thank you very much. Yes, I got one positive from MM. Do we have any other explicit support?

JHD: I am a champion and I support it, if that counts.

ACE: +1 as well.

DE: I don’t know. Yeah. It’s good to get explicit support, but for fix-ups like this, maybe it’s not quite the same bar as needing extremely broad support. Never mind.

RPR: Okay. Any observations? Or any – weak nonblocking – weak concerns? Okay. All right. Congratulations, you have Stage 3.

DE: Sorry, on the last topic, it seems like MF in the chat has some concerns about the name. Can we jump to that? Because . . . you are saying we should have spent more time?

MF: DE, I don’t have concerns about the name. I think it was an improvement and there’s no ideal perfect name to choose. Just that I think we may have been able to come up with some better names if we had more time. That’s all. We should try to avoid last-minute changes. I don’t have a problem with the current name.

DE: So, I mean, I am a little confused by this. Let’s follow up off-line in the chat about what kind of process you want to exist. We could require some kind of waiting period or something. It sounds like you have a concern.

MF: Sure. We can talk in the chat.

  • Note: Discussion in Matrix showed nothing further to follow up on re: process. If MF had bigger concerns about the name, he says he would have blocked.

Conclusion/Resolution

  • Stage 3 with the names transfer and transferToFixedLength.

Intl era and monthCode for Stage 2

Presenter: Frank Yung-Fong Tang (FYT)

FYT: Hi, everybody. I am Frank Yung-Fong Tang. I work at Google on the internationalization side of things. Today I am bringing you a proposal for Intl era and monthCode and asking to move it from Stage 1 to Stage 2. First, the motivation. What we really want to do is specify the minimum necessary details of era, eraYear, and monthCode usage for Temporal, for calendars other than ISO 8601. What that means is: so far, if we only look at the official editions of ECMA-402, the mention of era is already there, but in a different setting. In the DateTimeFormat options bag we can control whether the era is displayed or not – say, displaying “2000 CE” or just “2000”. So displaying the era or not is controlled, but the current spec does not itself have a need to pass in an era code or return an era code. There is formatToParts in DateTimeFormat, which can indicate that a particular part of the formatted result is an era, but when it says that part is an era, it means the textual form in the localized context. There is no code representation passed as a parameter or used as a programmatic identifier, for either input or output. The first time we really have to deal with era codes is actually in Temporal. The main part of the Temporal proposal does not actually define era and eraYear, but in Chapter 15 or 16, the part that amends ECMA-402, it adds getters to several objects to return era and eraYear as properties of those objects. For ISO 8601 calendars those are defined to be undefined. For other calendars there will be era codes, but there is no definition of what they are. So we don’t really have a set of eras which could be passed in or returned – not even for the Gregorian calendar. The syntax for monthCode is defined, like M01 to M12, but the semantics of monthCode, which are obvious for ISO, are not defined for some other calendars and can be very confusing. For example, for the Hebrew or Chinese calendar, is M05 or M06 the sixth month of the year? A particular year could consider it the sixth month, but it may not be the sixth month, because there is a leap month before it – it could be the seventh or the fifth month. It’s confusing. So I think we need to have a minimum level of detail in the interface about what it means. We should probably leave the calculations for a later proposal, leave them unspecified, or defer to another standard, but in ECMA-402 we have to at least define what codes can be passed to the interface and returned by the methods. That’s the motivation.
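
An illustrative sketch of the gap being described: Temporal (Stage 3) exposes `era`, `eraYear`, and `monthCode`, but for non-ISO calendars the acceptable values are exactly what this proposal would pin down.

```js
// Hypothetical usage; how 'M05' should resolve here is currently unspecified.
const d = Temporal.PlainDate.from(
  { calendar: 'hebrew', year: 5784, monthCode: 'M05', day: 1 });
// In a Hebrew leap year, is 'M05' the fifth named month, or the fifth month counting
// the leap month? Without a normative definition, engines could disagree.
```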

FYT: So, the scope. We tried to narrow the scope. The Temporal proposal is already in Stage 3, so we try not to touch it, because at least in the development phase we try not to bring too much complication into the development of Temporal. But the part that Temporal amends in ECMA-402 we can develop separately and in parallel in this proposal, so it could be implemented in time by any engine, and an engine could say “we only implement the ECMA-262 version and ship it”, or “we want to implement the non-ISO 8601 calendars, so we need a clear definition of what the codes are and have to combine it with this proposal to address that”. So the focus is really on calendars other than ISO 8601. We really feel there is a need for this additional material for implementation purposes, and that we need to define some detail – not too much detail, but enough detail that it could be merged into the 402 spec for use with Temporal for these other calendars.

FYT: So let’s look into this. Let’s talk a little bit about the history, which is this: in May 2022, in the TG2 meeting, we had a discussion that there was such a need and we should form a proposal. After discussing that, on June 16 I opened the repo for Stage 0 and put it on the Stage 0 page. Then in October – originally I named it Intl Temporal, but the scope of this proposal is actually pretty limited and that name could be a little bit misleading, since it only touches a very small part of the Temporal proposal, so they suggested we change the name, and we changed it accordingly. In November, we came here and it was agreed to advance to Stage 1. During that discussion, a couple of questions were raised which we take very seriously, and I will tell you more later on. One of the questions highlighted was whether this is the correct standards body to define this: should this instead be discussed in CLDR, with our spec referring to it? That is very interesting and insightful feedback, and we take it seriously. At a high level – I will tell you the details later – what we did is that on December 7 I filed some bugs, went to the CLDR TC, discussed it with them, and shared it in the CLDR TC, and basically they agreed to take it up. The CLDR chair basically said, well, CLDR releases once every six months, so the next CLDR will be released about three months from now, and they are close to closing up the changes in the review process now. The agreement in CLDR at that time was that we should form a working group to define the era codes, targeting CLDR 43, and then this particular proposal will refer to that. We still need to point out that if we follow a standard that doesn’t yet define this, we have to at least define the thing we are referring to. And they nicely agreed to chair the working group. And early this year, just two weeks ago, the working group [...] and has a draft.

FYT: So, in this particular proposal, we try to limit the definitions to the set of calendars which are already defined in CLDR. We are talking about which era codes to define for each calendar. We try to limit it to what is already defined in CLDR, because there are other calendars we know of in the world which are not well documented and do not yet have an identifier. For example, [...] – we all know they exist, but they have not yet been listed in the CLDR data with a calendar ID. Those we try not to include yet; later on we may amend that. The second thing is, for each of the calendars which are already defined in CLDR with an ID, we try to define the set of valid era and month codes for that particular calendar. For example, for the Gregorian calendar, if someone passes M13, we should throw an exception: it’s an invalid month code for Gregorian. However, for the Coptic calendar, every Coptic year has 13 months: twelve big months of roughly 29 or 30 days, and every year a small 13th month with about 5 or 6 days, depending on whether it is a leap year. In that case we should accept M13 as the 13th month, but not M14, right? Also, the Chinese calendar has leap months: say you have the third month, and after the third month of that year there is a leap month, which many Chinese speakers refer to as the leap third month – M03L. So basically the Chinese calendar should be able to accept M01 to M12 and M01L to M12L, but not Gregorian; that should throw an exception. So the acceptable set of month codes should also be defined for each calendar, and similarly for era: which era codes are acceptable. That is something we try to define. We probably also should define the semantics, at a high level, of what era, eraYear, and monthCode mean, as I mentioned before. Maybe where something is very simple, we can define a conversion, but that is something we can still discuss in Stage 2 – probably not for all the calendars, because a lot of calendars are very difficult to tackle. For example, for the Buddhist calendar, the difference from the Gregorian calendar is – how do you say it – the starting point, the zero. So today, I forget which year it is in that calendar; it is shifted by some number of years, it’s not exactly the same. Similarly, the ROC calendar in Taiwan shifts the Gregorian era, starting from 1912. Those are very simple shifts of era; the calculation might not need to be specified. We think the semantics, not the algorithm, should be defined, because that is the API surface: what kind of thing can be passed in and what kind of thing can be returned should be defined. Specifying the algorithms for this kind of logic is complicated; we may not be able to do it, and we don’t necessarily need to. Of course, there is a desire to do that, but that is yet another topic.
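
A hedged sketch of the validity rules being described; exactly which codes each calendar accepts is what this proposal would specify:

```js
// Gregorian: M01–M12 only, so this should be rejected.
Temporal.PlainDate.from({ calendar: 'gregory', year: 2023, monthCode: 'M13', day: 1 });

// Coptic: every year has a small 13th month, so M13 should be accepted.
Temporal.PlainDate.from({ calendar: 'coptic', year: 1739, monthCode: 'M13', day: 1 });

// Chinese: M01–M12 plus leap-month codes M01L–M12L; a code like 'M03L' would only
// resolve in years that actually contain a leap third month.
```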

FYT: Since I am bringing this up for Stage 2, TG2 has an additional requirement. This is not needed for 262 proposals, but in TG2, two years ago, we agreed that anything brought up for Stage 2 should pass three tests. One is prior art: has anything like this been done before? We try not to invent new things; we try to use something proven to work. The second is whether the proposal is difficult to implement in userland. The third is whether there is broad appeal: is there really wide usage for this specific thing? Here is the justification that this is needed: without a clear set of era codes and clear semantics for the monthCode of each calendar system, JavaScript engines cannot implement this without ambiguity. Until we define that clearly, my fear is that different browser engines, when they support Temporal with different calendars, may accept different sets of eras, or treat the set of month codes differently. That is the minimum problem we are trying to avoid. And here is one example. The top part I copied from a preexisting Temporal test; it is not actually a good test, because the blue parts you can see are actually undefined. There is no place in 262 or 402 or in the Temporal proposal that currently defines the token “ce”. You might think that’s common sense, right? But nothing says “ce” is acceptable, or what it means. So, in a way, this current test is not a valid test for Temporal at this point, even though it probably should be. Therefore we need to define the acceptable codes and what we mean by them, to make this legal. The other two examples: one shows using the BCE era with years like -0001 and -0002, because there is no year zero in the Gregorian calendar. And for Japanese, there are different era codes – the current era started in 2019, so February 2023 is era year 5, and the monthCode is M02; this is the first day of that month this year, under the current Japanese emperor’s era.
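
A hedged reconstruction of the kind of usage these examples describe; the precise era codes accepted per calendar ('ce', 'bce', 'reiwa', …) are exactly what this proposal would define:

```js
Temporal.PlainDate.from(
  { calendar: 'gregory', era: 'bce', eraYear: 2, monthCode: 'M01', day: 1 });
// No year zero in the era scheme: era years count 1 BCE, 2 BCE, and so on.

Temporal.PlainDate.from(
  { calendar: 'japanese', era: 'reiwa', eraYear: 5, monthCode: 'M02', day: 1 });
// The era that began in 2019, so era year 5 corresponds to February 2023.
```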

FYT: Prior art for era: most systems use a numeric int. Microsoft .NET represents calendar eras as ints. Java uses an int in the older API; the newer java.time API has an Era type, defined as a particular enum, and that newer Era also has a method that returns a string value. ICU4X has several classes that use string era codes. Those are the prior arts. Most of the prior art actually uses ints, but we don’t really think an int is a good way to pass this. Maybe it would be acceptable, but we really think this should be a string value – not just a string holding a number, but a string of text.

FYT: So we brought this for Stage 1 advancement in December and got two very important pieces of feedback. One was from API, who asked: what about CLDR – the people working on this, have they been asked? Are they against creating identifiers for these, or are they okay with us choosing the solution? This is quoted from the notes with some elision, but I think I captured the main point: is it okay for us, TC39, to define this? The second piece of feedback was that most of the things we define are not specific to an individual language or calendar system like this. I think both show a preference that we not do this work ourselves, but let CLDR do it. That’s good feedback, and we took the following path to address it. First, I filed an issue to track it, and as I mentioned, I talked to the CLDR TC. The working group during that period produced PR 6225 in CLDR for this. The proposal has also been shown to the ICU TC – there is a lot of overlap between it and CLDR – and they had some minor comments but basically agreed. That PR is currently under review by the CLDR TC. What I did is change the spec text to refer to it, assuming it will go in; it is not in yet, but will probably be published in April 2023. So I have updated the proposal so far: I changed the spec to refer to that, and I also define only a subset of the eras defined there. The reason for the subset is that we try not to bring the pre-Meiji Japanese eras into the JavaScript standard. What happens is, CLDR has definitions for 264 or so Japanese eras, but only 5 or 6 are reliable and meaningfully useful. The pre-Meiji eras – before about 1868 or so – are really not meaningful for the Japanese calendar: although the eras exist, historians argue about when each one started, and Japan at that time was not using the Gregorian calendar but the Japanese lunar calendar. But they are too difficult to remove from CLDR, so our proposal is not to include them in the definition. Here is the draft spec text – one of the Stage 2 requirements is to have an initial draft. As you can see, I put in a table, derived from the proposed CLDR change, listing for each calendar which era codes are acceptable, whether there are aliases, and how those aliases map. For example, we will have gregory as a calendar and gregory as an era, with ce and ad listed as aliases for it, both acceptable; but whenever we return an era code, we only return gregory. We also list what range of era years is acceptable. For a lot of calendars it is from negative infinity to infinity, but for gregory, for example, 0 shouldn’t be accepted – the minimum era year is 1; there shouldn’t be a zero BCE. Details we can discuss, but we list them here, and so on and so forth. And there is a process to connect the code to the era in the calendar and to decide which era years are valid for that calendar – I haven’t yet figured out how to plug that in; it should probably reject era years that don’t make sense – and also which month codes are valid for the calendar. For example, most calendars will have M01 to M12, but Chinese may have another 12 possibilities. And so on and so forth.

RPR: FYT, there’s 5 minutes left.

FYT: Yes. So those are the initial drafts. That was the entrance for Stage 1, which passed, and now I am here asking for entrance to Stage 2. Just as a reminder, the entrance criteria for Stage 2 are to have initial spec text, which I showed you, and it means the committee expects the feature to be developed and included in the standard. So I am asking for two things: consensus for approval, and, if that happens, 2 or 3 Stage 3 reviewers at this stage. Any questions?

RPR: Any questions on the proposal?

RPR: Okay. No questions.

RPR: Any positive support for Stage 2?

USA: +1 for Stage 2.

USA: Thank you for being very receptive and I support Stage 2.

MF: This might be my misunderstanding of how aliases are intended to be used, but I see like we have a single letter alias for the Japanese eras and it doesn’t include any of the era names themselves. Like the actual Japanese characters assigned to that era name. Is that intentionally omitted, or is that accidentally omitted?

FYT: That’s a good question. What I tried to do is reflect what is proposed in CLDR, and my understanding is that the omission is intentional. This brings up an interesting point, right? Do we try to define it here, or do we just copy whatever gets defined in CLDR? If we try to define it here, then this is a good place to discuss it. If the only thing we do is copy from them, then the discussion should happen in CLDR instead of here. That’s what I tried to discuss here, but that hasn’t been the feedback.

MF: I agree with that decision to defer those things to CLDR.

FYT: For clarification, the answer is that it is in PR 6225 and you can look at it; that defines the eras. Whether that’s a good or bad thing is a different question.

MF: I guess rewording of my question would be like, you said this was a subset of what CLDR has defined. Have you chosen a subset that doesn’t include those or are they not originally included in the data?

FYT: First of all, these are not included in CLDR. They are not in that particular PR but in a different PR, because there is a complication there – in a sense, those two lines are not in that particular PR. But the idea is that if the two PRs get accepted by CLDR, we will mirror them here. The things we chose not to include here are mostly additional eras, not the aliases and such. But if we think that is not a good idea and we need to exclude something, I think that will be acceptable to discuss during Stage 2. So if you have concerns about that, I can open a ticket and we can discuss it more.

MF: I agree. We can discuss more during Stage 2.

RPR: We are at time. SFC has quick reply.

SFC: Thanks, Michael, for the feedback. I will say that there are about 3 or 4 changes that I need to submit to the CLDR pull request; allowing the kanji as aliases for the Japanese eras is a very good one. I think I will try to include that.

FYT: Wait a second. I don’t think that’s his suggestion. Is that what he’s asking?

MF: It is. My suggestion was, I guess, more of a question . . . it was about additional aliases.

FYT: Oh, additional. Okay.

MF: Yeah.

FYT: I thought you were talking about H and N here. I see. Okay.

RPR: We have heard one expression of support, from USA. Is there a second? Are there any other positive messages for advancing this?

RPR: JHX has +1 for Stage 2 (EOM). Thank you. All right. And no concerns.

RPR: Congratulations, you have consensus for Stage 2.

FYT: I also need to ask for 2 to 3 people to stand up as Stage 3 reviewers at this point.

RPR: Who wants to be a Stage 3 reviewer?

RPR: EAO volunteers in the chat.

FYT: Thank you.

SFC: I will be another.

RPR: And SFC. Thank you.

RPR: All right. Good. You have two reviewers.

RPR: We will be back at the top of the hour. And just a reminder, we added the extra agenda item for decorator/export ordering; it will be at the end of today. All right. See you soon. Thank you.

Conclusion/Resolution

  • Got explicit support and consensus to advance to Stage 2
  • Stage 2. EAO and SFC volunteered as Stage 3 reviewers

Temporal, naming of .calendarId and .timeZoneId

Presenter: Jordan Harband (JHD)

JHD: This is about the Temporal item from yesterday. I would have loved the opportunity to discuss my concerns in a call with the champions prior to plenary, and thankfully I was given that opportunity today during the break. I now have a better understanding of the constraints and pressures on Temporal and of what criteria changes need to meet to be acceptable, and I agreed to the Id spelling so we can move forward. I want to confirm that everyone is still okay with the consensus on that item, which I was the lone objector on yesterday. I will assume they are; I just want to make it official.

RPR: [Surveyed the committee for objectors] Still good.

JHD: Then hopefully that can have consensus, and one less item we have to talk about.

Conclusion/Resolution

  • Consensus on Id spelling in properties as presented yesterday

Symbol predicates

Presenter: Jordan Harband (JHD)

JHD: This proposal was originally championed by RRD, ACE, and myself. RRD is no longer employed by an Ecma member and is not participating at this time, but hopefully ACE and I can continue to move this forward. As a reminder, this proposal adds ways to help differentiate the different kinds of symbols. The current specification text includes Symbol.isRegistered and Symbol.isWellKnown. Since there are three kinds of symbols, we need two predicates, and it’s a bit trickier to define terminology for the sort of “unique symbols” or “unforgeable symbols” (or we could come up with something), but it’s not as straightforward as the other two categories, “registered” and “well-known”. With “Symbols as WeakMap keys” hitting Stage 4 yesterday, it becomes very helpful for people to be able to use these predicates to figure out what types of symbols they can put in a WeakMap or another weak position. I’ll drop the spec text in the Matrix channel and if someone would like to present it, feel free: https://tc39.es/proposal-symbol-predicates/ Personally, I think it’s already complete and would qualify even for Stage 3, but I’m not asking for that today. Hopefully it illustrates how relatively straightforward this spec text is. My hope is that I can get Stage 2 for this proposal today, and get reviewers and so on, with the intention to move to Stage 3 in the next plenary.
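
A brief sketch using the names in the current spec text (Symbol.isRegistered and Symbol.isWellKnown); whether these end up as static or prototype methods is the open question discussed below:

```js
const registered = Symbol.for('app.cacheKey'); // registered in the global symbol registry
const wellKnown  = Symbol.iterator;            // a well-known symbol
const unique     = Symbol('local');            // a "unique" symbol

Symbol.isRegistered(registered); // true
Symbol.isRegistered(unique);     // false
Symbol.isWellKnown(wellKnown);   // true
Symbol.isWellKnown(unique);      // false
// Registered symbols cannot be used as WeakMap keys, so predicates like these help
// callers check before using a symbol in a weak position.
```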

JHD: The only open question on this proposal is an issue about whether these should be static methods on the Symbol constructor or prototype methods on symbols. Or alternatively, if they’re prototype things, they could be accessors or something. There’s a potential design question there. If the committee believes that is a major semantic question, then resolving it needs to come before Stage 2, but I don’t think it is: the semantics are the same – returning a boolean based on whether the symbol is in one of the two categories. My hope is we can resolve that question within Stage 2. Of course, I defer to the room on that one.

JHD: The first question is, are we comfortable with Stage 2, and addressing that question within Stage 2? Is there consensus for that? I would love to go to the queue.

USA: On the queue we have Mark.

MM: I’ll go ahead and talk. So I’m a +1. I like this; I’m looking forward to it. Of the options there, I prefer the static methods. I think, so to speak, instance methods on primitives are a weird thing anyway, since the methods are on the prototype. But I do think that the static-methods-versus-instance-methods question could be resolved in Stage 2; I don’t think it needs to gate Stage 2. But I do prefer the statics.

JHD: Thank you Mark.

USA: Next up we have Shu.

SYG: I’m completely aligned with MM there. I will just support Stage 2 and also prefer static.

JHX: +1 for Stage 2.

JHD: We have consensus for Stage 2. I heard preferences for static methods. If there’s anyone who would not be content with Stage 3 with static methods, I would love to hear your feedback in advance of the next plenary. If you have not commented on GitHub please do so or reach out to me privately. Thank you.

USA: Congratulations JHD.

Conclusion/Resolution

  • Stage 2 for static methods

Decorator/export ordering

Presenter: Daniel Rosenwasser (DRR)

DRR: So hi, everyone. I’m Dan. I’m presenting with RBN; we’re both from Microsoft and work on the TypeScript team, and we want to talk about an issue with decorator ordering today. For some background: decorators are a proposal that goes back to 2015 at this point. In 2015, TypeScript implemented and shipped an early version of decorators behind a flag called experimentalDecorators. Now, decorators reached Stage 3 recently, and the TypeScript 5.0 beta implements decorators as per the Stage 3 spec. The plan is to make that the default; however, old-style decorators – the experimental decorators that we shipped – are still available under that flag. The plan is to ship the full, stable version of TypeScript 5.0 in March; however, that will come after the March plenary. Now, TypeScript’s early version of the decorator syntax differs slightly from the current proposal, and we would like to help users transition to the standard. The style that TypeScript originally shipped is what I call decorators-first: the decorators come before the export keyword if you’re exporting a class. However, the current standard specifies that the export keyword comes first, then the decorators, then the class that you’re exporting. Why the change between when TypeScript originally implemented it, when it was originally suggested, and now? It was discussed extensively and there were many differing opinions. One major justification was: well, if you ensure that the export keyword comes before the decorators, then there’s this theoretical distinction you can make where, semantically, there’s a difference between decorating what is exported versus decorating the local, and in the future you could have decorators that come before the export. But in the five years that that issue has been open, no one has really suggested a practical use case for doing that at all. On top of that, we just think that’s a pretty big foot gun. For example, take this example right here: we have a class called Foo, and imagine that the decorator does something to the exported version of Foo, but locally every reference to Foo in this module refers to the local. That seems like it would be a pretty big foot gun, where every `new Foo` actually creates an instance of the local version, but whatever people refer to outside of this module refers to the decorated thing. We think this is not something we would even want to pursue. We think it would be a big usability problem: people would commonly mix this up, maybe just because they’re tempted aesthetically, and end up with bad behavior as a result. Beyond that, we have also not seen a lot of positive feedback on the change since then. In discussing the matter with many existing framework authors using TypeScript decorators, most framework authors actually preferred the style where decorators came before the export keyword, which is what we shipped. But it seems like we just have not wanted to revisit this, because no one wants to deadlock the proposal; people just want to ship something. And there is something to be said about the fact that there is an existing community of people using a JavaScript variant called TypeScript, which has shipped its own flavour of decorators for eight years, and the style that the proposal changed to, which was brought up five years ago, never really got any demand from our side.
No one ever said, “well, the current direction seems to be export before decorator; we would greatly prefer that.” That has never really happened on the issue tracker, except for maybe one person who was trying to use the Babel transform and just wanted their tools to work together. So where are we? Our team would not support any sort of future where there’s a semantic distinction between export followed by decorators and decorators followed by export, for the foot-gun reasons we mentioned. However, if we want to allow both, that’s something we’re open to. But we’re coming here today to ask whether we can just make a change overall from where we currently are. So: we don’t believe there’s a future for semantics based on order, which is what I just mentioned. We already have a large community of users in TypeScript who have been using this feature for about eight years now. We would prefer not to force them to do a syntactic upgrade, because you can write decorators that permit being called by both the old and new styles. And anecdotally, what we heard from library authors and users is that they prefer the old ordering. So we have two options we would like to suggest to the committee today. The first option is that we revert the ordering, so that decorators are placed before the export keyword. That’s our preference; it would be ideal, and it would mean that existing decorator users in TypeScript would not have to do this sort of syntactic upgrade and update all use sites. The second option – something that accommodates both styles, if you believe this is purely a stylistic choice – is that decorators can be placed either before or after the export keyword, maybe with some sort of restriction about exclusive ordering or something like that. That is something we’re open to, but we would greatly prefer option 1. So I’m wondering: what are the thoughts here today?
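
For reference, the two orderings in question (illustrative; `dec` is just a stand-in decorator):

```js
function dec(value, context) { return value; } // a no-op class decorator

// Current Stage 3 grammar: the export keyword first, then decorators.
export @dec class A {}

// TypeScript's long-shipped ordering, which option 1 would standardize
// (a syntax error under the current Stage 3 grammar):
// @dec export class B {}
```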

RBN: I think it would help - there’s a clarifying question on the queue.

SYG: I think Ron answered for me in the matrix and I will say this for the benefit of other delegates, so let me see if I get your reasoning correctly. The reason that the Stage 3 decorators today have export then decorator ordering is for the possibility of distinguishing decorating just the export or decorating just the local?

RBN: Yes, that’s correct. That was the explicit reason this issue – a very long issue on the decorators proposal issue tracker – was closed: to investigate the possibility of export decorators being something independent.

SYG: Let me finish. The second part of the reasoning chain is that, given that was the motivation for this speculative use case, since then not only have you not seen any demand for the speculative use case, you consider the speculative use case to be a foot gun, for the reason you explained. I want to clarify that there is no foot gun in the Stage 3 semantics today, because there is no “just decorate the export” versus “just decorate the local”; is that right?

DRR: That’s correct. Any future expansion of the syntax where we attach distinct semantics to the ordering would be the foot gun. What we’re saying is that our team would be opposed to any sort of growth in the language like that, because you would create this foot gun, and given that, it seems like we are intentionally avoiding what we consider a better syntax for reasons that are no longer really valid.

RBN: I would also mention that I did some investigation into what kinds of things you might potentially want to do with a decorator that decorated an export, and pretty much every outcome was either something that was feasible without that distinction – and clearer because you removed it – or something we would most likely never support within ECMAScript at all: such as renaming an export binding, which would affect the static semantics of import and export; having the decorated thing diverge from the local, which we again mentioned is a foot gun; or deleting an export, which again would break static semantics and is something we intentionally chose not to allow for any other kind of decorator – you can’t delete members. There were a couple of other cases we discussed, some of which are on the queue, so wait till those come up. We found that none of those semantics made any sense, and it just didn’t seem like a viable option.

JHD: So it’s possible that the reason the issue was closed was to explore decorators on exports, but we had a long discussion in plenary. We broke into breakout groups; one of them was about decorator ordering. There were a number of folks, obviously including myself, who believe that the decorator should go next to the thing it’s decorating - for example, if you decorate a class declaration and then you suddenly decide to export it, or if you decorate an exported class declaration and suddenly decide not to export it, it feels really weird and like a conceptual mismatch to be adding or removing the export keyword in the middle of the expression that describes the thing that you’re exporting. We don’t have to debate that subjective belief right now, but there were multiple reasons beyond just export decorators.

RBN: And one of the things I found is that we already break that model when you look at something like static. Static is not part of the declaration; it’s part of placement. Export is essentially the same thing: it’s an accessibility or visibility modifier that determines whether something is visible outside the module, and whether that one keyword goes after the decorators and before the class, or before the decorators, is a relatively minor decision. What we have found is that keeping these contextually related keywords close together is consistent with the language as it stands: you have `static async` before a method, and we very much wouldn’t want static, then decorators, then async, then the method name. So again, I don’t find that option workable – either we are expressly broken in one direction, or we don’t support it at all.

JRL: Can you go back to the slide where you were talking about the undecorated export being used? You had a decorator, then export, then the class. I don’t understand this example. How would you get access to the undecorated Foo here?

DRR: Because the whole idea here is as you decorate the export, you are decorating something that is accessible from the outside but locally you’re getting the local. I mean, either within the module scope it refers to the local or within the class itself you’re referring to the local as well, right? I don’t know what it means to decorate the export versus the local. But if you had this sort of weird distinction, I agree with you. I don’t understand this either. It would be bad.

JRL: So this exists only because you’re decorating before the export? Like, if we were to reverse this and it’s export @decorator class foo the foot gun doesn’t exist. If we do it the way we currently specify, you can’t have the foot gun?

DRR: You can also make the specification so that this works the same as the current semantics and also has no foot gun. Basically, yes: if you write `export @decorator class Foo`, there’s no foot gun today. If you expanded the language such that `@decorator export class Foo`, as written here, meant that on the outside you get the decorated Foo and local to the module you get the local Foo – almost as if the decorator were ignored locally – then that would be super confusing, and you would not get desirable behavior. That was one of the things that was floated, and we really didn’t understand why you would want that.

RBN: Perhaps it would be clearer if we showed that – put a decorator before export and after to show the two separate locations as being distinct. The idea is if we decorated exports only, this would add a strange distinction.
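
A sketch of the hypothetical “decorate only the export” semantics being criticized here; this is not part of any current proposal, and the decorator name is made up for illustration:

```js
@replaceWithLoggingProxy   // imagine this applied only to the exported value
export class Foo {}

new Foo(); // inside the module: the plain, local Foo
           // importers of this module: a different, decorated value – the foot gun
```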

JHD: Replacing the value is not something I would expect export decorators to do. I would expect them only to be able to change things about the act of exporting – making the export const or let, rewriting the export name, or something – and not changing the actual value. I agree with you that those semantics would be very confusing; that’s not what I would expect from an export decorator.

RBN: If a decorator could do that, again, that would break a number of static semantics defined within the spec for how imports and exports are bound during module instantiation, before evaluation occurs. And if you put an export decorator – one that only decorates exports – on anything in your module, then maybe you could only import that module using `import * as`, or it just becomes this – I think we have some spec text we’ve been looking at for dynamic module records that would allow these kinds of things, but it breaks a lot of the static semantics that we currently rely on.

JRL: As for use cases here, the one that I actually care about is the ability to import a namespace and modify that imported namespace. What I mean by that: currently, if you export foo and set it to some value 1 with a let binding, I can modify the let binding at some later point in time inside the module, and the value is reflected on the outside to whoever imports it. There’s a large contingent of people who don’t understand these semantics; they think of it as closer to CJS’s exports object, where I can directly modify the exports object and have that reflected to anyone who imports it. It’s constantly used for things like mocking out an import so that the entire graph sees the new mock, either through Jest mocking, sinon mocking, or other mocking frameworks. People continually open issues on Babel, or on SWC, which is the project I now work on, saying that our ESM transform is incorrect because it’s not compatible with the mocking library they’re used to. So one of the things that I wanted from a decorated export at some future point is the ability to mark an export as writable from the outside by whoever imports that module – purely for testing. It’s a horrible practice in production, but for testing it’s a requirement, because people expect it.
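
A sketch of the existing live-binding behaviour being described, plus the speculative capability being asked for (file names are hypothetical):

```js
// counter.mjs
export let count = 1;
export function increment() { count += 1; } // importers observe the new value

// main.mjs
import { count, increment } from './counter.mjs';
increment();
console.log(count); // 2 – the binding is live, but only the exporting module may assign it

// The speculative use case: some way (decorator or keyword) to let *importers*
// assign to the binding, e.g. for test mocking. No such feature exists today.
```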

RBN: My biggest concern with the idea of something like writable exports is that I think it’s better served by keywords that can be syntactically analyzed, as opposed to a decorator, where the value has to be looked up and evaluated at runtime – and that could result in circularities. We discussed these complexities when we looked at function decorators, which would make functions non-hoistable; all of that applies to a decorator that could somehow mutate whether an export is settable. And that doesn’t even get into the possible conflicts over what that would even mean for re-exported things and how that information would flow – something that today requires workarounds in CJS and only really seems to work when you are importing the module directly; dealing with re-exports is more problematic. I would be more likely to support something like an “export set” form that allowed you to do a set like this. That still requires the semantics to flow through, but it would keep the semantics statically analyzable, so runtimes could optimize, perform early validation, and reject invalid programs for imports and exports that don’t exist, in statically analyzable JavaScript.

JRL: Yes. I wanted to show one of the use cases that I had in mind for decorator exports.

MM: Very strong affirmation of JHD’s main point: among the original reasons why we rejected decorator-before-export and accepted export-before-decorator, there were strong reasons beyond the ones stated in the presentation. One that JHD raised (inaudible) is that syntax reflects semantics: the thing that you’re exporting is the decorated class – the decorator is decorating the class. As for whether we were trying to keep open the possibility of decorating the export itself or not – sure, I also had in mind that we might eventually have a use for that. But even if we never do, it’s still the case that placing the decorator as if it decorated the export does not reflect the semantics of what the decorator means when it’s interpreted as decorating the class. Now, with regard to the conclusion being asked for in this proposal: I am okay with option 2, and I would have a strong objection to option 1. That is despite the fact that, in general, I’m against the “there’s more than one way to do it” language philosophy; I’m very much a “there’s only one way to do it” fan. But in this case, the decorator after the export keyword reflects the semantics and is friendlier to things that automatically generate code. And I want to bring up the other issue that I remember having raised originally, which is that Function.prototype.toString applied to a function gives back either something that is unparsable or exactly the original source text. And if I understand correctly – correct me if I’m wrong – for a decorated class, the source text begins with the decorator, and with the current proposal the result is still valid, evaluable text, because a class declaration’s text can be evaluated as an expression. Whereas if the export keyword were included in the source text, it would not be evaluable, either as a statement or as an expression. Therefore code that uses Function.prototype.toString in order to create evaluable text would be broken by the style advocated here. I think that creates a requirement that we continue to support the decorator after the export keyword.

RBN: I want to respond to this. This discussion came up four or almost five years ago, I think, and at the time the decision was that decorators are not included in the source text for this. I would have to check the current spec text to make sure it still aligns with that position. But decorators wouldn’t be included for static methods either: today the static keyword is not included when you do Function.prototype.toString on the method itself. So we made this decision, I’m pretty sure for consistency, that we would not include decorators in the toString output. I would have to go check the notes; that is my recollection.

MM: I don’t have a reliable memory on this either. If you are correct, then that certainly changes my perspective on this issue.

RBN: I recall the discussion at one point was around being able to toString a class and eval it to recreate it. I said there was still no reliable way to do that even if you included the decorators, but if you really wanted this, your best option was to wrap the decorated class in a function, toString the function, and evaluate that – that provides the same semantics and creates source text you could potentially transfer. I recall describing those options.

MM: Other than the static issue, which I agree is fatal, what other hazards do you know of with regard to evaluating the toString of a class?

RBN: Other than making sure you have the correct context being important.

MM: That’s always important for – the whole thing about the evaluable function and source text is always qualified by if it’s evaluated in a sufficiently similar lexical scope. That’s always the case.

RBN: Other than if export and default were included, no.

LEO: I just want to make sure: no one was asking for different semantics? That kind of side-tracked my attention. So we are not discussing – no one is actually asking for – different semantics of what is going to be exported? I hope the committee can agree on not changing that.

RBN: We are not currently asking for different semantics, but that is the reason why this issue was closed during Stage 2: to investigate the option of supporting different semantics.

LEO: And from that, I’m also supportive of option 2.

DE: I wanted to recap the logic that Yehuda was using for why decorators come before export. I think this is consistent with what Daniel and Ron were arguing, but basically: when you have a class declaration, it’s not quite the same as a class expression, because it binds a variable. So when you have a decorator, you’re decorating the declaration – obviously the decorator comes before the bound variable. Decorators kind of wrap the whole declaration, and hence, for example, the decorator comes before the keyword class. So the idea was that, in principle, it would come before export as well. I found this kind of first-principles reasoning somewhat persuasive, and it might not have been quite represented in some of our discussions in plenary over the years. But at the same time, I’m very sympathetic to the ecosystem-transition concerns that Daniel and Ron raise, so I’m in favor of option 2. But at the same time, I’m okay with any outcome here.

RBN: My memory is fuzzy; this has been a long time in process. When I first presented the proposal – having worked with Yehuda and the Angular team for a long while on the version of the proposal that came out of that – we had, I thought, settled on decorators before export. I captured that information in the issue this references; that was the expectation when I provided the proposal, and when I provided a pull request to fix it back in 2018, it was to amend what was a typo – it was just missed in the syntax we presented, but it was part of the discussions I had. I don’t recall a conversation that went the opposite way. It’s been many years since we first started this.

DE: This matches my understanding from my conversations with Yehuda as well: historically it was simply a drafting error, when writing up the spec text, that reversed the order. So the history seems a little unfortunate. I hope we can have a smooth ecosystem transition towards standard decorators, and a lower-cost one would be option 2 compared to what we have in the decorators proposal today. That said, I hope that whatever answer we settle on is properly adopted and we don’t get too much variation in language extensions – at least not in a permanent way – so that whatever language extensions exist are opted into with a flag, and then hopefully by default people are using the standard language, at least eventually.

??: There are two items remaining on the queue, and there are 9 minutes left. Next up we have Waldemar.

WH: An export is something you do to a class. When we were originally talking about this, we also brought up the analogy of when you return a class expression, the decorator should go between return and class, not before return.

RBN: We had this conversation back in – I can’t recall if it was 2018 or 2019 – but I also don’t agree that export is something that you do to a class. Export is the adjective; the class is the export, meaning export modifies it. And regardless of the English-language terminology for the placement of that kind of word, it really is still an accessibility or visibility keyword, like you might see in just about any other language that has these, and it differs entirely from return, which is a statement. Putting a decorator before return doesn’t make sense, because you’re returning an expression, and the expression or class expression would include its own decorators. Exporting, again, I don’t agree is a thing that you do; it has more effects than just the point of execution, because it creates a binding that is recognized during module instantiation, not only during module evaluation.

WH: Different people have different models of it. Some people think that export is part of the declaration. Some do not. And I think we should just agree to disagree on that.

RBN: For example, we put the decorator before the static keyword on static methods, and we include the static placement in the decorator context. I could see it being valuable to also put the fact that the thing is exported into the class decorator context, because that might be a valuable input when deciding, say, whether to replace the thing being decorated with a proxy so that I can monitor its inputs and outputs – I wouldn’t want to do that arbitrarily. Being able to make that decision might be useful, in which case it would still be useful to have the decorators before the export keyword, because then that information would be included.

DE: I think the difference between the return case and the export case is that export deals with the binding – that’s why it’s part of the declaration – whereas return takes an expression as a parameter. Export very much does not take an expression as a parameter; it takes a declaration. That’s why I think the decorators are logically mixed into the thing more. We have a difference of opinion here. Yehuda told me about the original discussions with Ron, and I’ve just been thinking around this proposal in a way I think is completely reasonable. But the higher-level thing I would want is for us to settle on a common answer.

MM: So I just want to make sure that we’re all on the same page regarding export default. Export default is followed by an expression and simply exports whatever the value of the expression is. If the expression happens to be a class, it might look to some people like it’s a form of exported class declaration. But semantically it’s not. I apologize for speaking out of turn. Please continue.

RBN: I would say that’s incorrect. If you say export default class and provide an identifier, that class has a local binding. It is a declaration.

DE: It has a local binding as well as the export binding. It doesn’t have the – sorry, if you do an export default function, for example, these are all declaration forms, both syntactically and in certain ways you can observe this.

MM: So in that case I don’t know the answer to the issue that I’m raising. I don’t necessarily know what my opinion is when you raise the issue. I would hope that we agree that if you want to decorate a class that was exported through an export default class syntax, the decorator would go after the default. Are you also considering, as part of your proposal, to allow the decorator to happen before the export in that case?

RBN: Yes. Default modifies the export and the class. It says that the export is given the name default, which you can actually use as an identifier, and it also sets the name of the class as well. So, in our position, export modifies the class, and default essentially modifies both. Therefore the decorator should come before export and default.
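
For illustration, a minimal sketch of the two placements being debated here (the `@reactive` decorator and class name are hypothetical; which form the proposal ultimately permits is exactly the open question):

```js
// TypeScript-style ordering argued for above: decorator before the keywords
@reactive
export default class Counter {}

// Alternative ordering: decorator between the keywords and the class
export default @reactive class Counter {}
```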

MM: Is it not the case that you can also say export default and then have an arbitrary expression, and it then simply exports the value of the expression?

RBN: That is true. But it would not create a local binding for the class in that case. I posted in the matrix the difference between the two.
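
A minimal sketch of that difference (two separate modules; the class names are hypothetical):

```js
// module A: export default with a class *declaration* also creates a local
// binding `C` that the rest of the module can use.
export default class C {}
new C(); // OK: local binding exists

// module B: export default with an arbitrary expression only creates the
// exported "default" binding; there is no local name to refer to.
export default (class {});
```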

MM: Okay. So I genuinely don't know where I land on this. But I’m certainly still much less comfortable with option 1 than option 2.

USA: Before we move on with the queue, Ron, would you mind eating ten minutes from the next item and extending this one, or are there any objections against that?

RBN: Hopefully I can get through that quickly. If not, hopefully there’s enough time at the end of the last day to talk about it as well. Yes, that’s fine.

USA: So assuming there’s no objections to that, let’s move on with the queue. Next up we have a reply from JHD.

JHD: This was about Dan’s explanation of the export is grabbing the binding. It is true that the default keyword potentially modifies the value being exported - if it lacks a name, naming inference will apply and it creates a local binding, but it’s not a live binding. Effectively even though it creates some convenient stuff inside the module it’s just exporting a value, not a binding.

JHD: And then I have a related question that I’ve been asking in matrix, which is what happens – if you export default, will it create the local binding called default? What happens if you export default a decorated anonymous class? Does that create the local binding with the value or – I mean, I think there’s enough unfortunate inconsistency here that it’s difficult to make a strong argument in either direction without resorting to subjective preference.

RBN: Yeah, I’m not sure I’m clear what you’re saying here. When you export default an anonymous class declaration, there is an export binding named default and no local binding. If you have a decorated class that is also an export default anonymous class, it also would have no local binding and only an exported binding named default. I’m not sure I’m clear on this.

JHD: It makes a local variable called default with that value?

RBN: No, it does not. It creates an exported binding named default. It only creates an export binding. The exported name is part of the list of bound names of the module’s exports. Or the imported names.

JHD: I’m saying that the export will be named "default" if you import * it, but there is no relationship between the local changes it makes and the exported changes it makes. I know that if you export let something, that’s not the case, or, you know, export var something, because it’s a live binding. If you export const something, you can’t observe that there is any connection between the local binding stuff and the consumer stuff.

RBN: Yeah, default is primarily observable in the -- in that it also sets the name of the class.

JHD: Right. But only when it’s directly there. A decorated anonymous class, naming inference wouldn’t apply.

RBN: Even decorated, it should still get the name. The name should still be assigned. The decorator is still in effect. Assuming no class decorator in between replaces the constructor with something else, the name should still come from the assigned name or the default that’s provided.

DE: Yeah, it’s fine if we want to draw higher level analogies, like, that export kind of looks like it’s taking an expression. But the spec is very straightforward about what’s a declaration versus what’s an expression, and what we choose is, like, what kind of higher level mental model we want to attach to that. And we could decide either way.

USA: All right. Next up we have Shu.

SYG: We do? Oh, yes, yes. The thing about -- yeah, can you speak to why existing migration techniques like code mods are insufficient to help TypeScript?

DRR: Yeah, basically there’s always a level of how easy it is to migrate, right? Like, the easiest migration is you upgrade and everything works magically, then there’s some level of, okay, I have to switch a flag. In this case, that’s part of the migration. Then having to have users also say, like, I’m also going to run this tool for my code base and whatnot, it’s okay, except there’s always a risk of the code mod not being correct, and it might lose trivia, like comments, white space, things like that. And it’s a pain that doesn’t actually upgrade the knowledge that’s been built up over the years as well around, like, you know, existing documentation, things like that. That, you know, has the opportunity to still be valid, right? So, yeah, I mean, you can just say run a code mod, done deal, right? But not everyone knows what code mod exists or how to do it or whatever. So there’s a degree to how easy we want to make this, and I would like to make this as easy as possible.

SYG: So let me respond to that real quick. So Kevin wrote a code mod just now in the past 15 minutes, super coder there. But I guess what I’m trying to figure out is what I heard are some pretty -- fully generic arguments about the pain of any upgrade path, and I totally agree with them. It’s a question, it’s an exercise in line drawing on how easy we want to make it. I was looking for some color on why does this decorator ordering thing make you feel like the right thing is to request a change here instead of pursuing something slightly more painful like code mods.

DRR: I mean, I don’t know. Like, it’s not just one thing. It’s not just the transition cost. It’s also -- I mean, everything. Right? Why are -- why is the spec even different, right? We haven’t gotten any feedback saying I want this different from our side, and we had this feature for years. But I mean, I agree, right? Everything is possible. Right? Maybe the code mod is perfect, doesn’t lose anything. I haven’t tried it, right? But, you know, I think there is something to be said about just trying to do the right thing in this case for users.

RBN: Daniel, can I also speak to this.

DRR: Yeah, go ahead.

RBN: I also want to point out a couple of things. One, as Daniel said, whether or not you could have a code mod you could run on existing code, we don’t have a code mod that we can run on the wealth of existing documentation around decorators that, yes, may be focused on TypeScript, but could equally apply to JavaScript decorators, where we could then take the benefit of that documentation. So that kind of change then means that any of the documentation examples that exist on Stack Overflow would show an ordering that is inconsistent with the spec ordering. And then also, one of the things that I tried as hard as I could to do, and in some cases we weren’t able to succeed, was find ways of making it so that decorator authors, the people that write these decorators that are most likely being consumed through packages and libraries, would be able to have some type of forward migration strategy, and, yes, that doesn’t work in some cases, based on how legacy decorators work, and based on the fact that instance decorators don’t give you any way to access the prototype because the initializer only runs around construction. But class decorators are the one thing -- if you only have class decorators, you have a 1-for-1 translation, because a class decorator can take a constructor in and return a constructor out, which means that class decorators are the one thing that exists today that is easy to migrate. However, with the differing syntax, and again, if this syntax is purely aesthetic, if there is no valid reason for us to differentiate exports, decorators before or after export, then we are, again, forcing every single existing consumer in the TypeScript ecosystem at the very least to migrate their syntax, adding churn to their code bases, for something where the decorators that they are using potentially required no changes whatsoever, even to the runtime semantics, to support correctly.

USA: Right, we have around two minutes left and next up on the queue we have Chris.

KHG: Hi. So, yeah, I think that another thing to consider about these migrations: decorators are changing -- they’re going to change both the semantics and the syntax. And especially with the semantics, it’s something I have found code bases are typically wanting to do in parts, not via a large code mod. You typically want to try to massage your code base into as close to what it’s going to be afterward as possible, you know, one file at a time, sometimes especially with some of the other changes, before you kind of do anything like that or flip the switch, so to speak. I’ve even talked through situations where people want to run both transforms at the same time, and use both specs at the same time, because it is going to be so difficult to migrate just straight. So I do think, like, a solution like a code mod isn’t particularly helpful for this style of migration. And like Ron said, I do think this is a case where, given there is no real other reason, it is significantly easier to migrate with, you know, the existing style. So that’s just my experience in what I’ve seen so far in the ecosystem.

USA: Right. Next up we have a reply from Shu. Could you be really quick, please.

SYG: No, just skip me.

USA: Okay, next up we have Kevin.

KG: I’m -- yeah, Chris, I didn’t understand that point about code mods, but we don’t need to talk about it here. If you want to go to the delegates chat or something, I would be interested, because I did not understand that point.

USA: All right. Thank you for taking this async. We seem to be out of time, but really quick, do you want to ask for consensus?

RPR: I know there was a request for a temperature check. If you think that’s an essential path forward, we can try and schedule some time for it tomorrow.

DRR: If we have time, it sounds like we have a fairly open schedule tomorrow afternoon, right? Or tomorrow in general.

RPR: There is time tomorrow, yes.

DRR: Yeah, I don’t want to drag this out too long, but I don’t want to -- I don’t want to eat into more time. Can we schedule maybe ten minutes tomorrow? Or 15 if you think it would be better.

USA: All right. Yeah, let’s do ten minutes overflow tomorrow.

Conclusion/Resolution

  • To be discussed further

Async explicit resource management

Presenter: Ron Buckton (RBN)

RBN: Back in the November plenary, we discussed splitting off the async functionality from the explicit resource management proposal to potentially advance separately, pending a compromise or consensus that could be reached around the explicit syntax, whether or not we needed an explicit marker for the block scope. So I wanted to bring the proposal back, to discuss where the current -- what the current status is, and see if we’re at a point where we believe we can advance to Stage 3. So I have in here my standard motivations slide that I presented before for resource management. And this applies both to the sync and async versions of the proposal. Essentially, the main motivator for the proposal is to simplify these inconsistent patterns for resource management, provide a cleaner way to handle resource scoping and resource lifetime, and avoid a number of common foot guns and lengthy code. And I can dig into many of these examples, but we have all kinds of different cases of return, releaseLock, close, end; there are all kinds of different ways of cleaning up these resources that are often inconsistent. Sometimes they look synchronous but actually are not. So working with these things in a consistent manner becomes much more complicated. Resource lifetime can be tricky, because you have to manage the handle’s construction outside of the try-finally block, therefore it has a lifetime outside of that scope, and ensure that you actually use the handle, but it’s the declaration essentially that sticks around to be further used in your code outside of the try-finally. In cases like releaseLock, not having a consistent way to kind of marry a declaration to its lifetime makes it easy to forget things like releasing locks later on in your code, so being able to do this and not have to add try-finally scaffolding is helpful. Also, again, the foot gun of incorrect resource ordering: if B were to somehow depend on A to properly close itself, closing them out of order could result in an exception, and in the case of trying to avoid that, trying to do things the right way where resources are handled in the correct order often requires a lot of complicated nesting that makes code harder to read; it pushes the thing you’re trying to do further to the right as it gets further and further nested. These apply both to the sync and async versions of the proposal. But to really kind of get into the meat of what the async proposal provides, I’ll show some motivating examples. One example here is a three-phase commit distributed transaction system, or even non-distributed transactions, where the commit of that resource or the potential rollback of that change requires a period of time where you cannot, or don’t want to, block the main thread. This example shows using some type of transaction manager to start a transaction between two accounts where you want to debit an amount from one account and credit it to the other account, and if all of these operations succeed, you can mark the transaction as successful so that it is committed at the end. If either the debit or the credit fails, maybe there wasn’t enough money in the account, maybe the account you’re trying to credit wasn’t available, then either one of these two operations could throw an exception, which would prevent the code from ever reaching the point of marking success.
So then the transaction needs to go through its commit or rollback cycle. But to do so requires, again, more operations that may require network requests or file requests, therefore you want to be able to await those rather than block the main thread. Another example of this might be using something like a writable stream in Node.js, allowing you to write data, but then forgetting to call end, or the fact that end in Node.js looks like it’s synchronous, but there is an event that you can listen to that tells you when it’s actually finished, and maintaining this ordering -- or maintaining this evaluation and scoping -- is very important, because you might in the next step want to open the file, and if you created that writable stream exclusively, then trying to open it while it hasn’t finished the actual commit would be a problem. So making sure you actually have a correct and consistent way of managing that lifetime is important. So we’ve gone through many different variations of this, as I described in the kind of history slide when the sync version of this proposal was presented yesterday. So what I’ll show you today is kind of where we settled on the syntax that we’re hoping to use. And that is in the form of using await declarations. So, very similar to the using declaration, you can define a using await declaration anywhere you would be in an async context, so this would be inside of async functions or async generators or async arrow functions, or at the top level of a module where top-level await is permitted. Just like the normal using declaration, these are block scoped; at the end of the block, there is an implicit await for any resources that have been registered, essentially for these using await declarations that have been initialized. So in the example of a using await variable, taking that expression, the value of that expression and its async dispose, or dispose as a fallback, would then be captured at the using await declaration, and then at the end of the block, those dispose methods would be called in the reverse order they were added. We also support a using await in a for declaration head, just like we do for the normal using declaration. They are also supported in for of and for-await of, and I’ll get to the duplication of await in this statement here in a later slide. So, again, the using await declaration on its own is only allowed in async functions. These declarations, much like normal using declarations, are immutable constant bindings. They also do not support binding patterns, just like using declarations, and they are also, again, not supported at the top level of a script by the nature of the fact that those scripts are not async, and also due to existing restrictions. Much like using declarations, lifetime is scoped to the current block scope container, and the RAII style, the resource-acquisition-is-initialization style, that we’re using for these declarations allows you to avoid excess block nesting and, again, makes sure these resources are scoped to what contains them. Using await declarations introduce an implicit interleaving point at the end of a block, and this has been one of the major sticking points for syntax as we’ve been discussing it for the past several years. The syntax we’ve chosen has a couple of things. One, we do not have an explicit marker at the head of a block. This is something we discussed at length with Mathieu Hofman and Mark Miller.
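
For illustration, a minimal sketch of the using await syntax as presented here (the openReadable, openWritable, and pipe helpers, and their Symbol.asyncDispose methods, are hypothetical):

```js
async function copyFile() {
  {
    using await input = await openReadable("in.txt");   // hypothetical API
    using await output = await openWritable("out.txt"); // hypothetical API
    await pipe(input, output);                          // hypothetical API
  }
  // At the closing brace above there is an implicit await: the registered
  // [Symbol.asyncDispose]() (or [Symbol.dispose]() as a fallback) methods
  // are called and awaited in reverse order: output first, then input.
}
```
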
Throughout various iterations of this proposal, there were discussions about using being in an expression context, which could make it very easy for it to become buried somewhere within evaluation. But the using await declaration itself, because it is a statement, has to be essentially at the same level of nesting as any other statement of that block, so in well formatted code, it’s easy to recognize where a using await statement occurs. Some choices that we’ve made around evaluation: if you exit a block before you ever evaluate or initialize a using await declaration, there would be no implicit await. My contention, or my position here, is that code that you don’t execute shouldn’t have side effects, essentially. And for a using await declaration that you actually never initialize, having that cause an implicit await, and the wrapping that could introduce, would be an unfortunate consequence if that were required. And we don’t currently require that if you have, for example, an if statement that has an await in one block and not in the other. Having a mandate that you await even if you never encounter code that has an await keyword, we don’t feel would be the right semantics. So, again, we chose that if you never execute a using await initializer, then we don’t actually perform an await. However, if you evaluate a using await declaration and its initializer, we will await even if the value is null or undefined, which is the conditional case we talked about before. In these cases, you have essentially evaluated this await keyword, or you’ve reached a point of execution where the declaration itself was initialized, so as you step through this, we’ve essentially indicated there is a registration of an async interleaving point that will happen. So, again, there was a long-standing requirement from Mark that implicit interleaving points be marked with await or yield, and as it stands within the specification today, every single place where we await is explicit in some form, so that would be the await expression itself or a for-await declaration. After some of the discussion that we had since splitting off the proposal, Mark was willing to drop that requirement, with some of the discussions we’ve had and some compromises around things like always awaiting when we evaluate these declarations. And if there is the case where you are in a code base that wants to have a more explicit marking of these sections, it’s perfectly feasible to use comment-based markers and a linter to perform validation to ensure that you’ve commented where you’re doing so. This has often been the case with things like having an empty block or having implicit fallthrough. In addition, there’s potential for editors to use features like syntax highlighting, editor decorations, inlay hints, et cetera, to highlight the presence of these interleaving points. So actually, Justin has a topic on the queue that I think is worth addressing at this point. Justin, can you go ahead?
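
A sketch of the evaluation choices just described, assuming a hypothetical maybeGetLock helper:

```js
async function withLock(condition) {
  if (!condition) return; // we exit before the declaration below is ever
                          // evaluated, so no implicit await is introduced

  using await lock = await maybeGetLock(); // hypothetical API
  // If maybeGetLock() resolved to null or undefined, nothing is registered
  // for disposal and no error is thrown, but because the declaration *was*
  // evaluated, an await (an interleaving point) still occurs when this
  // function body ends.
}
```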

JRL: Okay, I didn’t mean to interrupt you if you wanted to go.

RBN: This is a good point.

JRL: So we -- we have using await as the keyword marker that this is going to schedule an async disposal will happen after the current block is finished. But we’re not awaiting at the point of the using. We’re waiting at the end of a block. So is await the correct keyword to use here, or should we be doing using async instead to highlight that there’s no asynchronicity yet? It will only be asynchronous at close?

RBN: So the position that I hold, and I would have to ask Mark to clarify his specific position on this as well, but while there’s potential for us to use something like async using, using async I don’t think is valid, because async isn’t actually a reserved word in an async context, so it’s perfectly feasible to have a variable named async, so that would break -- that would break the potential for refactoring in those cases. So using await indicates an await will happen. Async has no connotation for that. You can have an async function that you never await. You can invoke it and never look at its results, probably bad practice, and no await occurs; the await is the suspension operation that will happen. Async is an indication of a syntactic transformation that will occur. So async indicates a thing that you can await explicitly yourself. And every instance of await in the language today indicates that an await will happen at some point. For-await, for example, has both an await when it enters the loop as it starts to read these resources, but it also has an await at the end of the block, so there is both an await whose consequence is immediate and an await whose consequence is potentially deferred to later. So we believe that using await is the correct syntax to use for this case. And we have a fairly strong preference for that. If we were to choose syntax using the async keyword, it would mostly be async using, because there is less potential for conflict, but there is still also conflict and overlap with arrow functions that introduces a complexity around cover grammars. So that’s where my position stands right now. And I think Mark said he’s happy to clarify his position as well.

MM: Yes. So first of all, I agree with everything Ron said, and it does highlight all of my major points. I want to address one additional thing, which is the projection of the awaiting to the end of the block, you know, was the syntactic hangup. One of the reasons why I’m happy with the proposal as is, with the await keyword being at the using point even though the awaiting does not happen at the using point, is that even for the synchronous using, explicit resource management, people writing code and reading code will rapidly come to understand when they see a using that it’s projecting some interleaving of additional cleanup code at the end of the block, where the end of the block is not otherwise marked. So you already have to start to understand when you see a using that that implies some additional computation happening at the end of the block. I think the using await extends that notion to simply say, well, carry the meaning of the await to what happens at the end of the block. I think that will rapidly become intuitive. And the reason to use await rather than async is exactly what Ron said, which is async does not mark an interleaving point; it does not mark that an interleaving point necessarily happens.

JRL: Okay.

RPR: Kevin has a plus one for Justin. And then WH.

WH: The first time I saw it, I expected using await to await at the point of evaluating the using expression. And it was a surprise to find out that it doesn’t.

RBN: Do you have more to that, or -- do you have more to add to this?

WH: What I’m saying is this will become a constant point of confusion, and it will become an education problem for incoming users. I don’t have a better solution to it.

MM: No matter what we do, there will be some confusion. We’ve been around this enough to know that there’s no one answer here that will not violate the principle of least surprise for some programmers. So it’s a question of choosing which rude surprises we’re imposing. That observation doesn’t decide the issue. But I do think there’s no option that avoids any unpleasant surprises for anyone.

WH: Yeah, I’m just saying that we may have a problem here.

MM: Yeah.

RBN: I would say that -- so if you consider the intuition of for-await of, we don’t await the expression that we iterate over. We check if it has a Symbol.asyncIterator. This was an intuition people found very easy to adapt to with for-await. I don’t see it as an intuition people will have trouble attaching to using await either. You don’t await the expression. You await the consequence of that expression. And I think Kevin has a reply to this as well.

KG: Yes, just to second what WH said. In discussion of this proposal on the repository, at least one person has already expressed the intuition that they expected using await to perform the await there, so I agree that if we do choose using await, we are opting into confusing everyone forever, which maybe we’re okay with and maybe there’s a story that we can tell that makes it okay, but I do think that we should be aware that we are opting into confusing -- sorry, not literally everyone, but a very large percentage of readers for the rest of the language's life. And I really think async using would be less confusing. I know we will never avoid some confusion for all programmers, and we’re just choosing what is less confusing. I really think async using would be less confusing.

RBN: I tend to disagree. And I also want to address a point that Waldemar made. He said it would be a constant point of confusion for developers. I don’t think that’s the case. I think for a person choosing to use a using await declaration, they’re already going to have to have done some investigation as to what this actually is to know they’re using it with the right things. They’re either looking at documentation for the API in question or they have looked at how the syntax works. So I imagine that this might be a point of confusion the first time that it’s used, not necessarily every time someone goes to reach for it. And again, to the other point, and kind of to reiterate, regardless of the position on who it might potentially confuse, because as you said, there’s a possibility of it confusing anybody -- again, there’s a possibility somebody could have been confused with what for-await does -- yet, if we want to remain consistent with the language as it stands today, async as a keyword is an indicator of a syntactic transformation that allows you to use the await keyword, and that would even be the case with an async do, but it does not itself imply that an await occurs, whereas await as a keyword indicates that an interleaving point occurs. I think it’s very important that we make that distinction, because if we had async using, that is inconsistent with the language with respect to how those two keywords are used, so I would find that more confusing than the alternative.

RPR: MM?

MM: No, SYG first. I put myself on the queue because I saw that I need -- I will need to respond to SYG.

SYG: Part of what I heard -- sorry if I’m mischaracterizing -- is folks will get used to it. Same with what Ron just said, there will be a one-time cause for confusion; they will have to figure out the novel syntax of using together with await and what that means. But intuitively, I do agree that the intuition that people have around await is not just that there’s an interleaving point, but that there’s an interleaving point where the await is, and that’s at higher risk of confusing a higher number of people, I think. Like, the symmetry you’re presenting is true for a certain population, and that population is us. I want us to be mindful of whether that is the right population.

RBN: Yeah. And I will get to the point about reading code as well: anyone who is reading novel syntax will mostly need to do some research to understand what that novel syntax is, which is another reason why we are using novel syntax for this.

SYG: But that cuts both ways. Why doesn’t that apply to using async?

RBN: Well, again, it’s consistency with other things within the language. There’s another point that I was going to get to, but I’ve lost what that was. Yeah, again, my preference, as stated, is to try to maintain that consistency as much as possible. And so --

SYG: Well, okay, and to reiterate again, what about the consistency of the intuition that await means awaiting, like, the nearest right-hand side expression that gets evaluated right there? That’s the intuition that is causing confusion, and this breaks that intuitive consistency, and what is the response to why it’s okay to break that consistency? Or why it’s more preferred?

RBN: I remember the other point I was going to get to. But I’ll get to that in a moment. The other one is, if the await were in an expression position, then, yes, I would imagine that it makes sense that it’s an immediate thing. But it is in a portion of a declaration or a statement, and, again, for-await has other interesting semantics beyond the immediacy of the await that make it so you need to pay a bit more attention when you break out of a for of -- or a for-await of -- that there is still actually an await that occurs as you exit that code. So that is one case. The other is that there are actually more than a few cases in other languages that have similar capabilities that use either async or await, depending on the language. In Python, I believe async with is their choice; I can’t recall, actually, exactly how Python uses await there. In languages like C#, await is used for a similar declaration, and they actually use the ordering await using in their case, but that doesn’t work for us because of the fact that using is a valid identifier, so that would break existing expressions or result in a cover grammar that seems potentially unnecessary and confusing. But they use await using because they consistently use await for all the same places we use await within JavaScript, and they consistently use async for all the same places we use async within the language. So I tended to lean towards C#’s design because it is more aligned with what we use within the JavaScript language.

USA: Mark?

MM: Okay, so, yeah, there is a remaining clarification that’s worth stating. It’s only a clarification. Everything said so far including about my position is accurate. The clarification is that it’s not that people will get used to it just from scratch. It’s that given the mind shift that people already have to invest to understand just using, the synchronous using, is they’re already having to project the understanding that there is implied code execution at a later closed curly, which looks otherwise like an unmarked closed curly. So they already have to understand that there is synchronous code execution happening there. The -- so my argument that people will get used to the meaning of await here as being projected rides on the fact -- on the assumption that people will already have done the investment in projecting some implied code execution to the bottom of the block. Additionally, I want to say that this conversation has made me, I think, able to better characterize what the two possible confusions are, and this does not, by the way, decide the issue, but I think it brings clarity, which is if we say using await, there’s a possible confusion of thinking that there is an implied await where there is not. If we use async using, the possible confusion is to not know that there is an interleaving where there is one. So it’s, you know -- it’s a, you know, type 1 versus type 2 error, and then the question is which is more dangerous. And that’s not clear. From my perspective, they’re both quite dangerous. My intuition is that missing an interleaving point is more dangerous than thinking there’s an interleaving point where there isn’t any, but that’s very tentative and I can argue that in both ways.

RPR: I’d like to get to Waldemar’s topic and then maybe get to Daniel’s later on in the presentation. Unless it’s specific to this, Waldemar.

WH: People read code. Sure you may take extra care to learn about what these things are when you write such code yourself, but when reading code, the obvious interpretation is that there is an await point at the point of evaluating the using expression. I am also a bit uncomfortable about being so dismissive of user confusion arguments.

DE: Who do you think is being dismissive of user confusion?

WH: I don’t think that’s a good line of discussion to proceed into.

RBN: I mean, I think if your point is around my discussion about things being not something that you have to continuously relearn versus -- my concern was that the statement you were making was I think much broader than I believe it has to be, and perhaps my restatement was much simpler than the concern is. But I stand by the fact that I don’t believe that the -- any confusion that someone might have would be a -- a constant thing that dogs them every time they open the code. I think this is a behavior -- or a feature that they could learn.

WH: By “constant” I meant there is a constant supply of new people who see this for the first time and are confused by it.

RBN: Yeah, and, again, my intuition is that regardless of the interleaving point concern, that Mark has discussed, that as with -- this is going to be the case with any novel syntax.

WH: I still think there’s a problem there.

MM: Can we all agree that there’s a problem there, and that both sides are primarily -- both sides of this debate are primarily motivated by trying to alleviate user confusion, the problem is there’s two different user confusions and each side of the debate only alleviates one of them.

RBN: Yeah, and one thing I will say is that the current syntax and the choices that we’ve made are an attempt to find a compromise between these two sides, as well as some of the additional complexity and cost that would have been associated with having to indicate the specific block. We’re trying to, I guess, find a compromise or a middle ground that we can have here. And I’m more than willing to come back and address this as we get towards the end, but I want to make sure I’m able to cover the rest of the slides so we can come back to this topic later on, if I can. And can I defer your comment to later, or is it something you want to bring up now? Daniel?

DE: Yes, please, defer it.

RBN: So I can come back to that as we go on a little bit further in the presentation. The next thing that I know might be potentially contentious is the use of using await in for loops. So much like a using declaration at the head of a for declaration, you would be able to use a using await here. This introduces a constant binding. This is not a per-iteration binding, since per-iteration bindings only apply to mutable bindings. The way the spec is currently written and the current semantics are that these constant bindings are only evaluated once and are scoped to the life of the entire loop. So it follows those same semantics. In the case of for of and for-await of, these bindings are per iteration, which is consistent with how these variables are defined on each loop iteration. And we’ve made a distinction in how for-await of and for of work such that using await is consistent with how for-await works as well. Actually, I think I may have a more detailed slide on this shortly. No. We’ve discussed this a little bit on the issue tracker as well, but there is this potential for it seeming somewhat repetitious to have a for-await and the using await in the same statement. And we’ve chosen the direction we have here because of how the semantics of for of and for-await work. They’re similar in how that evaluation is performed. When you perform a for of on an async iterable that does not also define Symbol.iterator, it will throw. So this is essentially a runtime check of the input. I’m sorry. Let me reshare that. So there is essentially a runtime type check, in that you can’t iterate an async iterable in a synchronous for of. And for-await does an explicit check for the Symbol.asyncIterator before then falling back to the sync iterator. But that’s a check against the presence of that method on that object; if it doesn’t have Symbol.asyncIterator or Symbol.iterator then, again, it will result in a runtime exception. Similarly, the using declaration looks for a dispose method, and if it is not present it will result in a runtime exception. Using await looks for a Symbol.asyncDispose method, and if that doesn’t exist, falls back to dispose, and if neither exists that results in a runtime exception. Each one has a runtime type check that is performed against the values that you are working with. So we opted to make it very explicit what you’re intending to do here. If you want to work with only synchronous resources, and thus any potential asynchronous resource that does not have a synchronous dispose should be an error, that case would be made evident by not specifying await in the using declaration. Because, again, we are opting into tracking the async dispose in the using await case but not in the normal using case. So we’ve opted for those specific semantics for this. And that’s my final bullet point here: the asynchronous disposal for a block only occurs when a using await occurs. We won’t magically enlist an asynchronous dispose if you opted not to leave in the await, just like we don’t automatically enlist an asynchronous dispose in a normal for loop. We believe that having this specific syntax and the explicit preference to opt into which behavior you want is important.
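
A sketch of the loop forms just described (the iterables and the process method are hypothetical; disposal timing follows the per-iteration semantics described above):

```js
async function processAll(asyncResources, syncResources) {
  // Per-iteration binding: each `res` is disposed, with an await of its
  // [Symbol.asyncDispose]() (or [Symbol.dispose]() fallback), at the end of
  // the iteration that created it.
  for await (using await res of asyncResources) {
    await res.process(); // hypothetical method
  }

  // Plain `using` opts out of async disposal: only [Symbol.dispose]() is
  // considered, so a value that has only [Symbol.asyncDispose]() would be a
  // runtime error here -- the explicit opt-in described above.
  for (using res of syncResources) {
    res.process(); // hypothetical method
  }
}
```
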
And finally, async disposal semantics are roughly the same as the dispose semantics, but for an async context. So if the initialized value is null or undefined, then we don’t do any registration -- well, I should clarify, we do perform a registration, but we don’t throw an exception. The thing that we register is that some async interleaving point must occur at the end of the block, so when it’s initialized to null or undefined, there will still be an await later on. If it’s neither null nor undefined, we will attempt to read either async dispose or dispose, and if neither exists we throw. If the method that we find does exist but isn’t callable, we throw. And we record the value in the lexical environment to ensure we perform cleanup at the end. So, the AsyncDisposable interface. This is a spec interface, similar to the Iterator and AsyncIterator spec interfaces and the Disposable spec interface. It describes basically an object with a Symbol.asyncDispose method, with the expectation that invoking that method indicates the caller is done with the object, that its lifetime has ended, and that cleanup should occur. This would be used by the semantics of the using await declaration and the AsyncDisposableStack class. When an exception is thrown or a rejected promise is returned from async dispose, it most likely indicates the resource could not be freed. A Symbol.asyncDispose method should perform the necessary cleanup for an object, and all of these shoulds are essentially the same as the definition for a synchronous disposable. It should avoid throwing an exception if it’s called more than once, but that’s not required. An async dispose should return a promise, and this is again consistent with AsyncIterator: you can write a non-conforming AsyncIterator that just returns a synchronous result and the spec will still do awaits in the right places, because it will still await the results. There is an open issue on the naming of the symbol. Right now, the symbol is Symbol.asyncDispose, which matches the parallel: asyncDispose is to dispose as asyncIterator is to iterator. But this doesn’t match the naming convention for non-symbol methods that exist, such as the disposeAsync that’s provided, or built-ins like Atomics.waitAsync, et cetera, where async comes at the end. So for now, we have chosen to match the Symbol.asyncIterator naming convention, yet this doesn’t quite match the behavior that asyncIterator has, because asyncIterator is expected to be called and return an object, whereas async dispose is the thing we actually call and await. And I don’t know if anyone on the committee has a preference for changing the asyncDispose symbol name to change its order, or if I should maintain the current direction. I’d like to give just a moment for anyone to chime in if they have any particular concern.
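
For illustration, a minimal sketch of an object implementing the interface just described (the Connection class and its cleanup methods are hypothetical):

```js
class Connection {
  // AsyncDisposable: awaited when a `using await` block ends or when an
  // AsyncDisposableStack holding this object is disposed.
  async [Symbol.asyncDispose]() {
    await this.flush(); // hypothetical cleanup step
    await this.close(); // hypothetical cleanup step
  }

  async flush() { /* ... */ }
  async close() { /* ... */ }
}
```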

MM: I prefer the current naming.

RBN: Yeah, I do as well. I like to try to maintain the consistency as much as possible.

??: I do too.

RBN: To move along here: AsyncDisposableStack. I’ve again discussed this as recently as, I believe, the November plenary, when it was still part of the synchronous resource management proposal. It has a very similar API. It has a getter that lets you determine whether it’s already been disposed, so you don’t have to depend on exception-throwing semantics to determine whether or not it’s valid. It has a use method that accepts an async disposable or a disposable, just as the using await declaration does, and it returns that value. It has an adopt method, similar to DisposableStack, as well as defer and move. The main difference is the disposeAsync method, which performs asynchronous resource management, and the Symbol.asyncDispose method, which is essentially an alias. This is similar to Python’s AsyncExitStack; Python has, again, this parallel between the synchronous and asynchronous versions of these two mechanisms, and this emulates that same parallel. As I mentioned, use will accept an async disposable or synchronous disposable and adds it to the stack, and we do the same type of fallback semantics: if there is no async dispose, we fall back to the dispose method. Adopt allows you to adopt foreign and non-disposable resources, or potentially to wrap an existing disposable resource with a custom cleanup mechanism that may or may not call into the defined dispose method. It allows you to pass any value, and it returns that value. Again, this is primarily to allow you to adopt non-conforming APIs and work with existing code, and it may even be extremely useful for cases like integrating with async context. And defer is, again, similar to Go’s defer, but it’s designed to work with async operations and allows you to defer an asynchronous function that will be invoked when cleanup occurs. A couple of examples of these just show that when you call stack.use, you pass in a resource, it gets registered, and its value is returned, allowing you to place a stack.use in expression positions. The declaration for the disposable stack could in this case be a using await stack = ..., in which case that declaration is again still at the block scope statement list, but it then allows you to nest these things later on. And, again, adopt and defer work similarly to the sync dispose mechanisms, but working with async callbacks. Move -- I won’t spend much time here. This is designed to work the exact same way that a normal DisposableStack move does, and for the same reasons. Since we don’t have the concept of an asynchronous constructor today, I took the existing plug-in host class example from the sync resource disposal proposal to show this could also be used in the async case, where I might have, again, some async cleanup that one of these needs to perform, and if an exception occurs in the process of doing any of this set-up, then we should perform that cleanup. If we reach the point of stack.move without an exception occurring, then we know it’s safe to move those resources off into a separate stack that can be disposed later, and allow the stack that was registered locally to clean up. I’ll get to my conclusion slide and get back to the queue. What I’m hoping to do is seek Stage 3, which I discussed in the November session.
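
A rough sketch in the spirit of the plug-in host example just mentioned (the openChannel, openSocket, and log helpers are hypothetical, and this only illustrates the described use/adopt/defer/move/disposeAsync API, not the proposal’s own example):

```js
class PluginHost {
  #resources;
  #channel;
  #socket;

  static async create() {
    const stack = new AsyncDisposableStack();
    try {
      const host = new PluginHost();
      // use(): register an (async) disposable and get the value back
      host.#channel = stack.use(await openChannel());
      // adopt(): wrap a non-disposable value with custom async cleanup
      host.#socket = stack.adopt(await openSocket(), s => s.end());
      // defer(): register an async callback to run during disposal
      stack.defer(async () => { await log("plugin host torn down"); });
      // move(): set-up succeeded, so transfer ownership to the instance;
      // the local stack is now disposed/empty and the disposal below is a no-op
      host.#resources = stack.move();
      return host;
    } finally {
      // if any step above threw, everything registered so far is disposed
      await stack.disposeAsync();
    }
  }

  async [Symbol.asyncDispose]() {
    await this.#resources.disposeAsync();
  }
}
```
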
And the other thing that I’m seeking to do, if we advance to Stage 3, is merge these back into a single proposal to simplify the process of doing spec review against the ECMA-262 specification. I already have a PR up for the sync version of the proposal, and having a separate PR for the async version, as I did for the sync one, doesn’t really make it easy to manage. So that is the other thing I’m going to seek to do. So with the few minutes that are left, I’d like to go back to the queue for any other open items we’d like to discuss before requesting advancement.

USA: Sorry, there are four minutes left. First up, we have DE.

DE: So I want to repeat my previous announcement that these proposals have an impact on the web platform, in particular this establishes a new protocol, which you guys may want to tie into. I was happy to see some kind of initial comments from Domenic and Ana about this. You know, Domenic saying that async resource management is definitely important for web platform integration, and that some of the proposed integration points that Ron had spelled out made sense. I think it would be great to get further web platform input here, because, you know, we -- the window for setting these protocols is closing. With this potentially going to Stage 3 now, so please, if you work in an organization that works on the web, work with your co-workers on giving feedback here. But I don’t think we should block on this because, you know, it’s up to them if they want to give that feedback.

USA: Next up we have a reply from Kevin.

KG: Yeah. To second that, not just the web, but anyone who works on node or similar runtimes, but even, like, Cloudflare Workers or whatever sort of environment: if you are defining things that might conceivably be disposable, that would be great feedback, and, like, particularly node, I think, because it has a lot of APIs. Getting feedback on -- I know Ron had a few APIs that he characterized as being particularly well suited for disposal or async disposal, and it would be great to get feedback from maintainers of those APIs or frequent users to say if that’s something they want in the current form. I think it’s great, and they should want that, but it would be nice to hear it from those people as well.

RBN: I don’t know if anyone from Moddable or TC53 is on the call currently, but I did present this to TC53 a few weeks back and there was some definite interest in it -- at the very least the sync version of the proposal. So it’s up to them to weigh in on whether they have use cases for the async version. But I definitely have seen some positive feedback from that group as well. I will say that I know that we are just about out of time for the day and if necessary, we can -- as we discussed during the decorators discussion -- potentially take the ten minutes we lost from that and tack that on for tomorrow.

USA: We could go over by like 5 minutes or so, if you think you can get through the whole queue by then.

RBN: We’ll see. It seems like I haven’t been able to read the matrix, but it’s been fairly busy on that discussion, I’d like to hear what Shu has to say.

SYG: Okay, I’ll try to be quick. Mark, thank you for the error characterization, I found that illuminating. I want us to get on the same page about the tradeoff space without discussing which way is better. I agree there is confusion on both sides, and I want to see if we can get on the same page on what confusion we’re trading off. I think my contention, along with Kevin’s and Waldemar’s, is that using await seems to confuse more folks at scale. There is this somewhat loose intuition that await means awaiting something very close by syntactically in the text, and using await doesn’t have that, and that’s confusing -- maybe that’s some surface confusion, but whatever the case, that has more potential, we think, to confuse more people at scale as first-time readers, and people are like, maybe you shouldn’t have this here because you don’t have an interleaving point, and then authors have to respond, actually no, there is no interleaving point right here, et cetera. And your contention is that for async using without await, the confusion that causes is a deeper semantic confusion -- that await is the only keyword that ought to signal there being an interleaving point, even if it’s not right here -- and not having await would be a possibly deeper, more harmful confusion, even if it affects fewer people. Is that a fair characterization?

MM: That’s a fair characterization. I want to endorse the entirety of your characterization.

SYG: Thanks. We can hash this out later, but I wanted to make sure we get the trade off space in the discussion. Thanks.

WH: One other thing that wasn’t mentioned in the presentation is that using await looks for either a dispose or an asyncDispose. And regardless of which one it finds, it will await the result of that. What is the rationale for that?

RBN: Again, this aligns with the same behavior we chose for for-await: it will look for Symbol.asyncIterator and, if that does not exist, it will fall back to Symbol.iterator. And it falls back on the same reasoning we had for null or undefined, which is to simplify code paths. If I have a for loop or for-await loop and I’m iterating over those resources and I need to dispose each one, if I have to inspect each one to determine whether it is sync or async and bifurcate my code based on that, that would be a loss when it comes to developer productivity and the developer experience. Having the implicit fallback, with the expectation that a using await will always await even if it finds a synchronous dispose, matches the semantics we have in other places and has a much better developer experience.

WH: My question wasn’t about null or undefined. It was —

RBN: I understand that, but I’m stating that the reason that we pushed for the behavior for null or undefined as it is is to have that same developer experience, to avoid bifurcating your code based on what that result is.

WH: I wasn’t asking about null or undefined. I was asking about why await the result of a synchronous dispose.

RBN: Awaiting the result of the synchronous dispose is a compromise that I made as a requirement from Mark.

MM: Yeah, similarly, we have debated in committee before whether await of a non-thenable, non-promise should proceed immediately or should always impose a turn boundary, and we decided to always impose a turn boundary for very good reasons. The normal await, forget for-await, the normal await always imposes a turn boundary. Having something that either proceeds synchronously or imposes a turn boundary based on dynamic data considerations is extremely dangerous. It’s much more dangerous than any of the misunderstandings we’ve talked about.

WH: Okay. Fair enough. I accept that answer.

USA: Last up, thank you, Waldemar. Last up we have --

MM: So I think that’s -- I didn’t hear my name, but I think that’s me last. I want to say I support this going to Stage 3. Plus one on this. I appreciate Ron’s patient engagement with us during the whole process. It resulted in us better understanding the deep meaning of our own objections, so thank you very much, Ron, for all of that. And, yes, I’m in favor of this going to Stage 3. And also I should mention that we have publicly stated, and we maintain, that async using versus using await is not a blocking issue for us. I made clear what my very strong preferences are, but we stated that we will not block the other syntax choice, and we are still of that position.

USA: Thank you, Mark. Assuming that I’m audible now, Ron, I think now might be the time to -- yeah.

RBN: Yes. At this time, I would like to seek consensus for advancement to Stage 3.

WH: With which syntax?

RBN: I’m sorry, let me clarify two things. I would like to seek advancement to Stage 3 with the using await syntax, and -- I was looking for it, there was an open issue on the naming -- that we go with the existing definition for Symbol.asyncDispose. Those are the two open questions I’d like to make sure we’re agreeing to when we agree to advance.

USA: All right. So I believe that answers Waldemar’s clarifying question. We can give some time for consensus. We need more explicit words of support, perhaps.

SYG: So the syntax thing, can I respond real quick to the asking for Stage 3 on the syntax thing? So I am also not going to die on the hill of the other syntax. I take that to be a good thing, and we’re hoping -- maybe there is a path forward here. Like, I certainly haven’t thought as deeply about the syntax as Mark and Ron have in arriving at using await. But I guess the whole point of the async using syntax is it’s less confusing to beginners, and that seems somewhat empirical, and I wonder if we can get Stage 3 on the condition of, I don’t know, I’m going to at least run some very informal internal surveys with practitioners and other folks on preference to get some more data, at least for my own edification, if nothing else. And if other folks have practitioners that they have access to, get some more data on -- like, we agree on the tradeoff space, one is possibly more confusing for beginners and one is possibly more pernicious confusion for experts. I would like a better handle on just how confusing it is for beginners. We’ve heard one anecdote that someone on the repo said it’s confusing to me. I want a better handle on it. If neither side wants to block on the syntax, I wonder if we can have a condition to do more work for one plenary session and come back in March.

WH: I will support that. I am unenthusiastic about this proposal at this time due to the syntax confusion. On the other hand, this may turn out to be the best choice that we have. I just don’t know at this point. So I’d like some more time for us to figure that out.

RBN: I am definitely agreeable to a condition on advancement pending this investigation. I will state that I would be willing to agree to async -- to using async using as the syntax, but it is far from my preferred choice, and I strongly prefer the syntax as it’s been presented. But I’m not -- I am not going to block my own proposal if that becomes the direction that we have to go.

WH: I don’t disagree with you. I think that out of the syntax options we have, what you have chosen may be the best one. I just want to understand the confusion and its consequences a bit more before approving it.

RBN: And I will also state, as I mentioned on the history slide, there’s been a lot of syntax churn with this proposal over the last few years, and we explored a lot of different avenues for what this syntax looks like. The direction we have right now, which is RAII-style declarations, the using await syntax has fallen out from that. But I’m perfectly willing to hear feedback from Shu’s investigation.

DE: I’m very happy that this proposal landed at the RAII side, and I think that’s probably the most important syntactic decision. I like the idea of doing another investigation as long as we’re time boxing this to one meeting. And I’m really happy that we’re in the situation where nobody on either side wants to block and we can just sort of look into this in a sort of disinterested way. Still, if we don’t come to a clearer conclusion next meeting, then I think we should probably go ahead with the current proposal.

RBN: All right.

SYG: Yeah, I agree with that. Yes, champion’s choice if we don’t come to a clear conclusion.

USA: Do we have support for advancement given that condition?

DE: I’m explicitly supporting conditional advancement to Stage 3 with this condition of resolving the one syntax question.

MM: I support that as well.

JHD: I can just add a point of order question. So assuming that gets consensus, then it’s not in Stage 3 yet, so the proposals table wouldn’t be updated until the condition was met, but that would be the only thing required to update it, correct? I want to make sure I understand that.

DE: The full conclusion leading to Stage 3 would happen during a meeting.

JHD: During the next meeting, presumably.

DE: Yeah.

JHD: Cool. Thank you.

DE: I don’t know how we want to represent the event in the proposal table.

JHD: We usually don’t.

RBN: I was going to say, do we -- are there any dissenters, because I think we asked for advancement and haven’t heard anybody speak out if they have concerns at this point.

RBN: Then the other question I had was for my own peace of mind and to make the review process, the process of creating test262 tests, and everything else related to this simpler. Once conditional advancement -- once those conditions have been met, I would also like to get consensus on merging these two proposals back together. Maintaining them independently is much more complicated, and I’d like to avoid that, if possible.

WH: I would be in favor.

MM: I would be in favor also.

SYG: Yes, please, as an implementer, please merge them.

RBN: Thanks. All right, I appreciate it. I’ve already been trying to maintain two separate branches to keep these up to date, and this would be much simpler. Thank you.

USA: Thank you so much, Ron. And everyone else. Thanks, everyone, for the productive meeting. And the positive note that we ended on. See you all tomorrow.

MM: Thank you. So, Ron, please do update the conclusion in the notes for this one, because obviously it’s a slightly unusual conditional advancement.

RBN: I will do so. Thank you.

USA: Thank you. And thank you to the note takers.

Conclusion/Resolution

  • Regarding ‘Symbol.asyncDispose’ (current) vs ‘Symbol.disposeAsync’, consensus was to continue with Symbol.asyncDispose for the name.
  • Conditional Advancement to Stage 3 pending outcome of investigation of ‘async using’ vs. ‘using await’ syntax. Condition to be resolved no later than the March plenary, with the currently proposed ‘using await’ syntax as the default choice if we don’t arrive at another conclusion. (For now, the proposal will stay in the Stage 2 section of the proposals repo, as that repo does not represent conditional advancement.)
  • Following Stage 3 advancement, consensus is to merge the “Explicit Resource Management” and “Async Resource Management” proposals to simplify the work involved in reaching Stage 4.