
31 March, 2022 Meeting Notes


Remote attendees:

| Name | Abbreviation | Organization |
|------|--------------|--------------|
| Ashley Claymore | ACE | Bloomberg |
| Rob Palmer | RPR | Bloomberg |
| Waldemar Horwat | WH | Google |
| Michael Saboff | MLS | Apple |
| Istvan Sebestyen | IS | Ecma International |
| Chris de Almeida | CDA | IBM |
| Rick Waldron | RW | Salesforce |
| Josh Blaney | JPB | Apple |
| Jack Works | JWK | Sujitech |

Extending built-ins

Presenter: Michael Ficarra (MF)

[first couple slides are missing notes]

KG: So here are two examples of the ways that we might handle this problem: using internal slots directly, or calling methods on this. And, as Michael said, these are not the only possibilities, but they are instructive to look at. To talk about those in a little more detail: using internal slots directly means that every built-in method is going to reach into the internal slots that it needs, regardless of whether it could in principle be implemented in terms of some other method. As we add new methods, they will continue to reach into whatever subset of internal slots is required. Of course, in principle they might not require any - we're not saying every method must use an internal slot, just that a method will use an internal slot if the data it needs is contained in that slot. Pros and cons: obviously one of the main benefits of using internal slots directly is that it's the simplest thing to do in engines. But it does make subclassing harder, because now, to have a correct subclass, you're going to have to override every method, even if the obvious implementation of the method is in terms of other methods. So for example, `union`: the way that you as a user would implement this on a class would probably be in terms of the iteration protocol, and so you would just write the code that you might have assumed the base class would already have. But no, every subclass would have to write every method, and as we add more methods, they would have to override those too. Sorry - I shouldn't say every subclass; I should say a subclass that wants to have behavior which does not directly reflect the internal slots. A subclass that is just trying to add functionality would still be able to do that without overriding any methods.

KG: Another approach that we've talked about is having what we've taken to calling a minimal core. This is where you have some subset of methods that are held to be the core methods. For a Set, this might be iteration, membership querying, and perhaps getting the size of the set; other methods that can be implemented in terms of those methods would be. So, for example, `union` would be done in terms of actually getting Symbol.iterator on this and then invoking the iterator repeatedly. Some methods might need to call multiple core methods - intersection, as we'll see later, would probably want to call several of these. But of course, this isn't really a guarantee, because in the future we might have some new behavior which only really makes sense to implement in terms of the internal slots directly, without going through the minimal core - methods which wouldn't be as efficient, or which perhaps couldn't implement the behavior that we want at all, using only the core. So the minimal core methods would be a set that expands over time, but hopefully expands more slowly than the full set of methods we add.

KG: Again, some pros and cons. The major benefit is that subclasses which correctly override the core don't need to override other methods. So if you have a Set subclass that overrides Symbol.iterator and so on, then you would get union and intersection for free when we added those, rather than being broken by us adding those. The major downsides are that it's a lot more work for engines: there are a lot more user-visible points in the implementation. And as a consequence of that, we would have to fully specify all of the algorithms for all of these things - we couldn't just say "engines should do intersection". We would have to specify exactly how that looks, because the fact that we are calling user-overridable methods means all of those details are observable.
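
To make the minimal-core idea concrete, here is an editor's sketch (not the proposal's spec text, and `union` here is a stand-in function rather than a real built-in) of a `union` that touches only the public core methods - iteration via Symbol.iterator - and never internal slots, so a subclass that overrides a core method is respected automatically:

```javascript
// A minimal-core-style union: observably iterates both operands via the
// public iteration protocol, never reaching into [[SetData]].
function union(thisSet, other) {
  const result = new Set(thisSet); // observably iterates `thisSet`
  for (const value of other) {     // observably iterates `other`
    result.add(value);
  }
  return result;
}

// A subclass that overrides only a core method sees its override used:
class LoggingSet extends Set {
  *[Symbol.iterator]() {
    console.log("iterating a LoggingSet");
    yield* super[Symbol.iterator]();
  }
}

const u = union(new LoggingSet([1, 2]), new Set([2, 3]));
console.log([...u]); // logs "iterating a LoggingSet" first, then [ 1, 2, 3 ]
```

Note that both iterations, and the interleaving between them, are exactly the user-visible points KG mentions having to fully specify.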

KG: And again, there are other possibilities. One example that we've talked about, on the left, is to check whether the internal slot is present and use it if it is, but otherwise call the minimal core methods. This would make Set.prototype.union "reusable": you would be able to take Set.prototype.union and transfer it to a set-like that doesn't actually have a Set internal slot and have it work correctly, because your set-like, lacking the internal slot, would fall back to the minimal core implementation.

KG: Or we could have an opt-in flag on the constructor that says which of these two behaviors to use. These are just examples; there are other possibilities. I also want to emphasize here that I'm not suggesting any one design must be the right answer for every question. For different classes, we might choose different answers. And in particular, for different positions - receiver versus argument - we might choose different answers. We might, for example, say that we want to use internal slots on this, but for something like union, where you expect the argument to be a set as well, you would not reach into the internal slot of the argument and would instead use the public API. So hybrid approaches of various sorts are possible.

KG: And unfortunately, we can't just look at what's in the language today. There's really not many places where we have made affordances for extension, but there's also not really many places where we would have wanted to. In fact, the main places where there currently are affordances - or one of the main places - is in the Set and Map constructor, which call the overridden add and set methods. And we have some slides later, which we'll probably not get into, that just demonstrate that that is completely and totally broken. So the only precedent we could look at is obviously a bad idea. So there's not really - we can't really say, “oh just do what we're already doing”. And unfortunately, we have to do something because in particular Set and Map are making this urgent. We have a Set collection in our language that doesn't support any of the primitive set operations. You can't union a set. That's just silly, and we'd like to fix it, but in order to add those methods, we have to choose designs for this.

KG: So let's talk a little bit more detail about just the two designs that we presented before and one in between design. So the simplest design as mentioned is just using the internal slot directly on both the this and the argument. This makes for an extremely simple algorithm. You just reach into these slots of this and other directly and you say, well take the intersection of these lists. And you don't have to specify how to iterate those or anything because that's not visible. So you can just leave it up to implementations to choose whatever intersection algorithm they like, modulo some details about preserving order and what the notion of equality is and that sort of thing.

KG: Or you could say that you use the internal slot on this but invoke the public API on the argument. And this gets a lot more complicated. On this slide and the next, the parts that are written in gold are where user code is observable. I should also say - it might not be obvious why this is so complicated. Set intersection, the abstract operation, is complicated. The big-O optimal way of doing it is to look at which set is smaller and iterate over that set, discarding things that are not contained within the other set. If you just iterate one set without looking at sizes, you potentially have much, much worse performance: for example, if this is a massive set and the argument is the empty set, you could get away with doing basically no work at all, but instead you would do work proportional to the size of this. So you really do want to look at which set is smaller first, and then iterate that one. That makes for a fairly complicated algorithm, because you have to have two branches depending on which of the two sets you're iterating. But the fundamental implementation isn't so complicated: you're just getting the size of the other set and then iterating either this or the other, using the public methods. When you're iterating this, which I guess is the second branch, that's not observable, but you are invoking the has method from the other set, which is observable. Or conversely, if you are iterating the other, that's observable, but the set membership test on this is not observable. So again, much more complicated than the previous one.
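
An editor's sketch of the size-aware algorithm KG describes (illustrative only - `intersection` is a stand-in function, not the proposal's exact spec steps):

```javascript
// Size-aware intersection: iterate the smaller set and probe the larger
// one, so an empty argument costs O(1) no matter how large `thisSet` is.
function intersection(thisSet, other) {
  // Two branches, as KG notes, depending on which set is smaller.
  const [small, large] =
    thisSet.size <= other.size ? [thisSet, other] : [other, thisSet];
  const result = new Set();
  for (const value of small) {      // iterate the smaller set
    if (large.has(value)) {         // probe the larger set
      result.add(value);
    }
  }
  return result;
}

console.log([...intersection(new Set([1, 2, 3]), new Set([2, 3, 4]))]); // [ 2, 3 ]
```

Note one of the details that would have to be pinned down in spec text: when `other` is the smaller set, this sketch produces results in `other`'s order rather than this's order, and with overridable methods such choices are observable.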

KG: But not as complicated as using methods on both this and other, where in addition to the fact that you are repeatedly invoking the iteration protocol or the has method on the other side you are also doing that on this. And you're not just doing it repeatedly, you are doing it in an interleaved way. So you're alternating between getting the next element from one set and invoking the has method on the other. So this makes the runtime behavior extremely observable.

KG: It's theoretically possible for engines to optimize these. After getting whichever of the methods is relevant for the branch they have determined they are in, they could test whether those are the built-in set membership method and the built-in Set prototype iterator - and, I guess, check that %SetIteratorPrototype%.next is intact, and all that - and then choose to do it all internally rather than actually call any of these methods. That's a little complicated. It means that there is a definite slow path for the case where any of these things are not intact, but it is theoretically possible.

KG: All right, so these are just examples of what the implementation would look like - or I should say, what the spec text would roughly describe - for a couple of these approaches. Of course, it's not just a question of deciding to use internal slots everywhere or nowhere. There are other questions, the biggest one being: which built-ins, if any, should have affordances for extension? It seems to Michael and me that for the higher-level things - so that's basically just Map and Set right now - it maybe makes sense. I don't think I have ever wanted to subclass Map or Set in my life, but if we added other data structures in the future - a queue or whatever - we might reasonably support subclassing, or I should say extension, not necessarily only subclassing, on those data structures. For the lower-level things, it may make less sense. I think trying to support extending ArrayBuffer was a mistake, and regardless of the decision we make about the other questions in this proposal, we shouldn't try to make any affordances for subclassing ArrayBuffer in particular. Probably not RegExp either, or indeed most things in the language today. Promise is maybe the only in-between one, where it's not clearly high- or low-level. But that's just our opinion.

KG: There are other questions. What are we doing about Symbol.species? Symbol.species for Map and Set is kind of an interesting case, because Map and Set do in fact have Symbol.species defined, but nothing looks it up, because none of the methods on Map and Set so far have created new collections - those APIs are right now very, very small. So if we decide that we don't want to respect species on, you know, Set.prototype.union, that's fine. But then there's the question of: do we just unconditionally construct the base class, or are we using this.constructor, or what? And then there's also the question of, if we are adding more static factory methods, like Array.from: should they construct the base class, or should they construct from this? And as mentioned previously, there's the question of handling arguments, which is slightly different from the question of how to handle the receiver.

KG: But basically, what we're getting at here is that we need to pick a philosophy. It doesn't necessarily need to be a blunt philosophy - we could have some nuance in it - but we've got to make some decisions about how we intend to support extending built-ins, or indeed whether we intend to support subclassing built-ins. The next couple of slides are a little more opinionated. As we were talking about this, Michael and I observed that you really can't enforce additional invariants with a subclass right now. No matter what affordances we have, if you are just trying to override methods so that, e.g., the add method on your Set subclass rejects arrays or whatever, a consumer can just bypass that: they can call the built-in method from the superclass directly and skip whatever checks you're trying to do. And of course, we might add new methods in the future. Even with the minimal core design this is a problem, because we're not proposing that the minimal core would be some sort of guaranteed-never-expanding collection, and those new methods would bypass any invariants that you are trying to enforce. That's only a problem when the language updates, but it is a problem. So you basically just can't enforce invariants with a subclass. If you want to actually enforce invariants, you need to use a wrapper class: something that holds, for example, a Set in a private field and exposes add and iteration and so on on the wrapper class. Then there's no way to just bypass the invariant enforcement. The only thing that subclassing really ends up being useful for is adding new functionality - not something that tries to enforce any new invariants. You could, you know, reasonably add a filter method to Sets, or whatever, because that's not trying to impose invariants on the contents of the internal slots.
Basically, you don't own the internal slots; the built-ins own the slots. This is a fundamental consequence of how the language works. And as a consequence, we think that, because you really want wrapper classes if you are trying to do any actual enforcement of invariants, you want to have a set-like rather than a Set. So we think it makes sense for the built-in methods to accept arguments which are set-likes, or X-likes generally, because wrapper classes are clearly the way to go in at least some cases, and it would be a shame if those couldn't interoperate well with the built-in classes.
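
KG's bypass point can be demonstrated directly in a few lines (editor's example; `NoArraysSet` is an invented illustrative class):

```javascript
// A subclass that tries to enforce an invariant via an override:
class NoArraysSet extends Set {
  add(value) {
    if (Array.isArray(value)) throw new TypeError("no arrays allowed");
    return super.add(value);
  }
}

const s = new NoArraysSet();
s.add(1);                       // goes through the override; ok
// s.add([1]);                  // would throw TypeError

// But the instance has a real [[SetData]] slot, so a caller can borrow
// the base-class method and skip the check entirely:
Set.prototype.add.call(s, [2]);
console.log(s.size); // 2 -- the "forbidden" array got in anyway
```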

KG: We also think, in part as a consequence of the previous slide, that the minimal core design we presented is just too complicated. It makes it really hard to grow the language in the future: we suddenly get a bunch more trade-offs any time we try to extend the language - trade-offs between implementation freedom and the user API - and you also have the problem that if the minimal core expands, the existing methods, which are observably specified in terms of the smaller minimal core, might become incoherent, because there would potentially be cases where we would want to use the new method but can't change the implementation of the existing methods. So growing the language becomes a lot harder. And there's not that much benefit, because overriding built-in methods doesn't really get you much - as mentioned, you can't enforce any invariants. So the minimal core is only a benefit to people who are trying to override built-in methods, and we think that in most cases you would want to use a wrapper anyway. So those are our opinions on some of these things, to kick off discussion.

KG: We do have a bunch of other slides that we called an appendix, talking about the ways people are currently subclassing Set and Map - which does happen - and in more detail about the existing affordance of the Set and Map constructors calling .add or .set respectively on their this, and why that's a bad idea, which I'm happy to get into, but we didn't think it was necessary for the main presentation. So, with that, I'd like to open it up to discussion and see what thoughts you all have.

RPR: Thank you. All right then, I think we're ready to begin. The first person on the queue is JHD.

JHD: Okay, so the subclassing question, as Kevin and Michael laid out, affects a lot of things. Observability can harm performance optimizations and can create all sorts of weird things with proxies, and it can affect "borrowability", so it is sort of related to defensive code. It may not be a common practice, but it is a practice that affects a lot of people transitively. You can do Set.prototype.add.call, and you can cache the method - we've talked about callBind earlier in this meeting. That's a relatively common thing in my code, at least, to make sure that I don't call the add method on the thing people give me, but instead, if they give me a Set, I use Set's functionality on it directly. So for example, if you tried to enforce an additional invariant via a subclass, my code is just going to blindly ignore all of your invariants because it uses the base-class methods. And that sort of dovetails into overridability: the only way that the current subclassing story works is if everyone is faithfully calling methods directly on your object, which is admittedly a common thing to do. But there's nothing in the language that requires it, so it means that you basically can have no guarantees when you are making a subclass. That philosophically sucks, but it also might suck if you're interested in your code providing guarantees. So, an idea that I've been tossing around for a while, based on Bradley's proposal a while back (the Set and Map key and value constructor hooks), is essentially: if subclasses never had to override any methods, and instead at construction time could provide hooks or alternative implementations for internal algorithms, then it feels to me like everything else would just naturally work, idiomatically. It would work with robust defensive coding patterns.
It would still work just fine with everyone doing .add on Set subclasses, and so on. I just kind of wanted to throw that out there - not as a proposal - because it seems to me like an approach that would check the most boxes and answer a lot of these questions. I've talked about this with you, Kevin and Michael, a few times, and I was hoping to get the room's thoughts on it as a general approach. To give a concrete example before I stop speaking: if I wanted to use SameValue instead of SameValueZero in a Set, then when I construct the Set I could pass an options bag that has some property whose value is a predicate providing SameValue semantics instead of SameValueZero semantics. And then at that point, Set.prototype.add.call on my Set instance would use my SameValue semantics, because that instance had already been constructed with those hooks. So that's a concrete example to think about.

KG: Yeah, so I'd especially like to hear from implementations on that.

MF: Well, I think I can prime this discussion a little bit. We had some slides where we talked about implementation freedom versus extensibility, and how, depending on what you make observable, you limit what kinds of implementations can exist. Some of the examples you gave are actually good examples of how that implementation freedom would be limited if you were to replace the notional comparison operation that happens in Sets with a user-supplied SameValue predicate. That would make Sets actually have to do comparisons against all their values, so adding to a Set would become a linear operation, rather than the log time you get when it's backed by a hashing operation. Whereas if the implementation itself wants to change a Set from SameValueZero to SameValue, it can just change the hashing algorithm. By giving users the freedom to change the comparison with this conceptualization of a Set, you have now limited what kinds of conceptualizations the implementation can align with.

JHD: For that specific example, instead of providing a predicate, you could instead provide a transformation function - "use this value for comparisons" - so that you would only call it once for each item. Then if you wanted SameValue semantics, you'd give all strings a prefix and convert all numbers to a string - you know, you could come up with some implementation here.
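
A runnable, editor-added approximation of JHD's transformation-function idea, done today as a wrapper (everything here - the class name `SameValueSet` and its key scheme - is invented for illustration, not proposed API): map each value to the key actually stored, so that mapping `-0` to a unique sentinel yields SameValue semantics (where `0` and `-0` are distinct) on top of ordinary SameValueZero storage.

```javascript
// Unique sentinel for -0; a symbol cannot collide with any user value.
const MINUS_ZERO = Symbol("-0");

class SameValueSet {
  #map = new Map(); // normalized key -> original value, so iteration yields originals
  #key(v) {
    return Object.is(v, -0) ? MINUS_ZERO : v; // the one SameValueZero/SameValue difference
  }
  add(v) { this.#map.set(this.#key(v), v); return this; }
  has(v) { return this.#map.has(this.#key(v)); }
  delete(v) { return this.#map.delete(this.#key(v)); }
  get size() { return this.#map.size; }
  *[Symbol.iterator]() { yield* this.#map.values(); }
}

const sv = new SameValueSet();
sv.add(0);
sv.add(-0);
console.log(sv.size);    // 2 -- a plain Set would report 1
console.log(sv.has(-0)); // true
```

The transformation runs once per operation on a single value, so the backing Map keeps its hashed lookup, which is exactly the performance property MF was worried about losing with an arbitrary pairwise predicate.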

MF: Yeah. For certain data types, we probably could come up with something generic enough that we wouldn't meaningfully restrict implementations, right? But that's the thing: I don't think we can do that generally for every kind of data structure and every operation we want to support. I would still like to hear whether implementations feel similarly, or have other opinions.

MM: Yeah, so I want to endorse JHD's and Bradley's point about the construction-time extension hooks. Most abstractions won't need such extension hooks, but sometimes we do want to extend in a robust manner and not by wrapping. The existing precedents we have for that are actually quite strong. The strongest precedent is the proxy handler: the handler itself is basically a bag of methods, all of the traps go to the methods on the handler that was provided at construction time, and there's nothing exposed on the public API of the instance. The other precedent we have - in a proposal, not in the language currently - is the eventual-send handled-promises system for extending promises. Trying to extend promises by subclassing has just been a disaster; it basically doesn't work for any interesting extension you want to do with promises. Whereas the hooks for promises, which are very much modeled on the proxy approach - constructing a handled promise and providing a handler - have proven to be extremely flexible and expressive, and extremely robust against misuse. So I just wanted to both support the idea and point out those precedents. I have a question later.

SYG: So I agree with Michael that SameValue seems like a bad idea as a hook, for some of the reasons that Michael said. But practically, I think it depends on the granularity of the extension hooks that you want. Something like the value comparison is not just fairly low level, it's also diffused: calls to "how to compare each item in the set" would be diffused out through basically all methods. Adding a user-observable point there, on top of the complexity of having to reason about that stuff, is an easy way to ensure that the new things are not going to be optimized anytime soon. If you want to draw the precedent to Proxy, the precedent for performance is that it remains slow, and it's going to remain the slow path, because we're not comfortable thinking through exactly how to optimize it - what observability properties we can take advantage of, and how to build the bailout hooks should those assumptions change. So an override in the constructor that is as low level and as spread out as something like SameValue seems like a bad idea to me. If the hooks are fairly high level and at a lower granularity, that may be okay; I'd have to look at it. But I feel like constructor hooks would encourage high-granularity, lower-level hooks. It feels kind of similar to something like RegExp exec: there's a thing that you could have hooked, and it just seems like a bad idea to hook.

YSV: Um, so I guess I'm kind of echoing what SYG is saying. On the Firefox side, we are rather wary about extending built-ins, and it's a good thing that we're discussing what our holistic approach here is. I can't speak too much about what constructor hooks would look like, because I would need to see a proposal for what that would concretely mean, but we've had a very bad time with proxies, and we have also not optimized those. I have similar concerns to SYG about allowing user-observable code on low-level, dispersed parts of the engine. So my feeling is that I would have to see something really much more concrete. The other thing is, I've noticed that there are sort of two sides to this discussion - well, three sides. There's "how do we grow the language?": we have this concrete problem of unions and Sets and set-likes. There's how we do this such that implementations can optimize and do it in a reasonable way. And then there's the user side of the story, which is how we make it something that is usable by JavaScript programmers. From our side, we need to have a really good argument for why we would open up more powers to the user side, because every time that we've tried to allow extending built-ins, we've kind of gotten bitten by it. So unless there's a really good motivation, I would support something like what's on the slide right now - using [[SetData]] on both this and other - because this is really contained. We know exactly what's going to happen here. It doesn't open up to library authors. There is a problem with the fact that it doesn't open up to library authors, because it reduces freedom, and we do have expert users who might want something like that. But this is something we can be certain about allowing us to extend the language safely.
So yeah, this is our current view, but of course it can change depending on the specifics of how we're extending built-ins.

WH: I agree with YSV. Trying to support subclassing can introduce a lot of harmful complexity into the spec and the language, and it prevents optimizations.

KG: So does that extend to wanting to access internal slots on arguments to methods, not just on this? Because like as mentioned previously, with this approach, you can't pass a set-like as an argument to intersection.

YSV: Yes. I noticed when you brought it up that we should include set-likes, and I think that kind of argument has a good basis - it will allow people to extend by doing wrappers. But in order to allow that, we need to use user-observable methods for accessing the arguments that are being passed into a built-in function. So after you gave that argument, I was like, "okay, I can see that". But previously, when we had been discussing this slide deck internally, we had sort of settled on: okay, we need a really good argument. That could be one.

RGN: I just want to express agreement with one of the later points in the slides: that encapsulation is effectively the only robust means of extending built-ins, especially when you're concerned about invariants. And probably we should document that, to avoid having this conversation again. It should be considered idiomatic that if you want to, for instance, change the Set equality comparison or the Map equality comparison, you do so by creating a new class which internally leverages the built-ins but does not subclass them.
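
The wrapper pattern RGN describes, sketched by the editor (`NoArraysCollection` is an illustrative name): holding the Set in a private field means there is no base-class method a caller can borrow to bypass the check.

```javascript
class NoArraysCollection {
  #set = new Set(); // encapsulated: the instance has no [[SetData]] slot

  add(value) {
    if (Array.isArray(value)) throw new TypeError("no arrays allowed");
    this.#set.add(value);
    return this;
  }
  has(value) { return this.#set.has(value); }
  get size() { return this.#set.size; }
  [Symbol.iterator]() { return this.#set[Symbol.iterator](); }
}

const c = new NoArraysCollection();
c.add(1);
// Set.prototype.add.call(c, [2]) throws a TypeError ("incompatible
// receiver"): `c` has no [[SetData]], so the invariant cannot be
// sidestepped the way it can with a subclass.
console.log(c.size); // 1
```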

MM: Okay, I want to triangulate into agreement with both JHD and RGN. Wrapping does encapsulate and is robust. The construction-time hooks are also robust. So both of them are consistent with the general requirement for encapsulation, and with both of those you're able to enforce new invariants - in the hooks case, via the hooks. I want to mention something to both YSV and SYG, which is that Proxy, although it's a positive example with regard to the robustness of hooks against the outside world, is a negative example with regard to performance, because of a mistake we made there that we don't need to repeat. We argued about this mistake at the time - maybe it was justified for proxies - but we don't need to repeat it. The mistake is that each time a proxy traps, it looks up the method from the handler fresh, which is consistent with thinking of the handler as an object. If you instead think of the handler more like an options bag of optional hook functions, then what the proxy would do at construction is sample the options bag once. The reason that makes so much of a difference is not the overhead of looking up the method; it's that you then have a guarantee, at the moment of construction, about which defaults are not overridden, and you can optimize those into the fast path. So we could have had a much more efficient system for proxies in which few traps are overridden, if we had adopted the other approach. I think we should keep that in mind, keep the hook approach on the table, and examine it afresh. With regard to YSV's realization that the wrapper arguments to built-ins should be protocol-based rather than slot-based: I agree with that, but I want to say that it's actually much stronger than that, which is that the protocol-based approach is consistent with practical proxy membrane transparency.
If set union reached into the internal slot of the argument without a fallback, it would break practical membrane transparency, which I would consider a disqualifying consequence. I also, by the way, want to reiterate what I think everybody here has been coming back to, which is that subclassing is overrated. But with regard to the split between the core methods and the other methods built on the core, the wrapping case actually gives a mild argument to preserve that two-level structure: the methods that are defined only in terms of calling other methods are generic - defined only in terms of protocol, not in terms of the underlying slots. So a wrapper would only need to replace the core methods, and it could reuse the supplemental methods directly without re-implementing them. If we lose that by going to a flat structure, that's okay with me, but I wanted to point out that it is a reason to preserve the two-level structure. I think that's all I have to say.
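
An editor's sketch of MM's sample-once idea, with an entirely invented API (`HookedContainer` and its `has` option are hypothetical, not proposed): the options bag is read once at construction - unlike Proxy, which re-reads its handler on every trap - so the implementation knows permanently whether the default fast path applies.

```javascript
class HookedContainer {
  #set;
  #hasHook;

  constructor(iterable = [], options = {}) {
    this.#set = new Set(iterable);
    // Sampled once: later mutation of `options` is never observed, and
    // an engine could commit to the fast path here when no hook is given.
    this.#hasHook = typeof options.has === "function" ? options.has : null;
  }

  has(value) {
    return this.#hasHook
      ? this.#hasHook(this.#set, value) // hook path (slow, user-observable)
      : this.#set.has(value);           // default path, known at construction
  }
}

const opts = { has: (set, v) => set.has(String(v)) };
const hc = new HookedContainer(["1"], opts);
console.log(hc.has(1)); // true -- the sampled hook stringifies before lookup
delete opts.has;        // mutating the bag afterwards changes nothing
console.log(hc.has(1)); // still true
```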

SYG: By my understanding, I kind of disagree that the fundamental mistake preventing proxies from being easily optimized is the fresh lookup every time a trap is needed, because in my experience, you don't want a combinatorial explosion of fast paths that slices and dices "I have overridden set-property, but I have not overridden get-property or getOwnPropertyDescriptor" or whatever. You basically have one fast path where everything is default - where you can assume everything is as if it were an ordinary object - and another path where everything is slow and does everything basically exactly as the spec says, because, with security bugs in mind, we're not smart enough to be clever and optimize some code paths but not others. In my experience, that's what codebases end up doing when they optimize: you have something that's completely optimized because it assumes everything is default, and something completely unoptimized because there's a hook in there somewhere. And if we got rid of the lookup rules so that the lookups can be cached ahead of time, that doesn't get rid of the complexity problem, and that complexity problem is what gets in the way of somebody putting in the hours to build an optimized path. So what I'm saying is, on paper I think what Mark says is true; in practice I have never seen it actually work.

MM: Okay, I'm going to defer to SYG on the implementation experience. There are some particular scenarios I think could possibly come up that are not the combinatorial ones, where you basically see how they're used in practice and decide what things to optimize for rather than optimizing for the entire combinatorial space.

SYG: Agreed, that is a possibility, and it has come up, and there are trade-offs there too. You could end up adding finer-grained performance cliffs, which could be bad for user experience but could be good for some workloads. There's a lot of nuance in deciding. But yes, agreed that caching ahead of time might make carving out specific combinations that are deemed important in practice easier to optimize.

MM: I think that is, in fact, the case, if you look at proxy usage, but you'll see that there are some partial handlers that would have been worth optimizing for if we had given ourselves that ability. So we should at least give ourselves the option for future such handler patterns.

SYG: Again, I think the counter-argument there is that while that sounds attractive on paper, in practice it ends up meaning that we live for years with complete slow paths due to the existence of any hook.

KG: Just to MM’s point, without commenting on how important this is for optimizing and whether it would have mattered for proxies in particular, we have moved to eager caching. So we had a change in 2017 or something where the next method from the iteration protocol was looked up once eagerly at the beginning of a for-of loop and then invoked repeatedly. Where previously we looked it up and invoked it every time. And in my slides, in fact we have the same thing, where you can see the first line of the else I'm imagining that we would capture the ‘has’ method eagerly. I think regardless of approach we would definitely cache things eagerly, which makes optimization possible. And I do think it would matter more for stuff like sets than proxies because there are so many fewer points that like the difference between - I think having one fast path and one slow path for sets is basically fine. Because it is weird to override has but not iteration, for example. So fewer people would be exposed to the case where they fall into a slow path because they've only overridden one thing that would in principle allow it to be optimized. I think that doesn't come up as much for stuff like sets.
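An illustrative sketch of the eager-capture pattern KG describes (this is not spec text; the method shape is assumed for illustration): the argument's `has` method is looked up once, before the loop, and the captured function is invoked on each iteration, so later mutations of `other.has` are not observed mid-loop.

```javascript
// Sketch: capture the argument's `has` eagerly, as with `next` in for-of.
function intersection(receiver, other) {
  const has = other.has; // single observable lookup, up front
  if (typeof has !== "function") {
    throw new TypeError("argument is not set-like");
  }
  const result = new Set();
  for (const item of receiver) {
    // Invoke the captured method; replacing `other.has` here has no effect.
    if (Reflect.apply(has, other, [item])) {
      result.add(item);
    }
  }
  return result;
}
```

Because the lookup happens exactly once, an engine can check at that point whether it got the built-in method and switch to an internal-slot fast path for the rest of the loop.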

PFC: I saw in the slides earlier that there aren't that many examples of affordances or examples where an affordance would make sense. So I'd like to invite you to take a look at Temporal.Calendar and Temporal.TimeZone. I hadn't really considered constructor hooks before but I think semantically you can think of Calendar and TimeZone objects as constructor hooks for the other Temporal types, like ZonedDateTime and PlainDate. I think these are good examples of where an affordance for subclassing makes sense. For example, one design we tried during Stage 2 was to treat the calendar or the time zone as bags of individual methods, that are cached and consulted separately from each other, but that didn't really make sense in the end. This was the model we landed on. For obvious reasons, I'm following the outcome of this discussion very closely. In the implementation efforts that I've been involved in, it does seem to make sense that you have a fast path and a slow path where the fast path is when your object uses only Calendar or TimeZone objects that are supplied with the implementation, and the slow path where your object uses a Calendar or TimeZone object that was created in user land. In the Temporal champions group, we have heard feedback from JavaScriptCore that this pattern is difficult to optimize. So that is another data point. But we have a plan to address this with a normative change that I'm hoping to present at the next plenary in June.

KG: Can you say more about the normative change?

PFC: Sure. The reason it's difficult to optimize is that currently, if you want to take the fast path — this goes for Calendar and TimeZone both, but I'll just talk about Calendar for simplicity — you can check that your calendar object is a built-in one, but you also have to check that Temporal.Calendar.prototype hasn't been modified from its original state. That latter check in particular is difficult. You also have to check that somebody hasn't stuck an own property on the calendar object that overrides a prototype method. For JavaScriptCore it's important to be able to easily determine that a calendar is built in and hasn't been modified. One idea that we came up with is that built-in calendar instances would be frozen and have own-property methods on them that shadow the ones on Temporal.Calendar.prototype. Another idea would be that the built-in instances are frozen and inherit from a BuiltinCalendar class that inherits from Temporal.Calendar, and BuiltinCalendar.prototype is frozen. So this is a fairly complicated discussion. We have like three pages of notes on it in one of our meeting notes documents from the Temporal champions meeting. So let me link the issue thread: tc39/proposal-temporal#1808 There you can read more details if you're interested. I'll just stick it in the chat as well.

SYG: We've talked a lot about performance, and I want to get feedback from the committee and the champions, or from the practitioners who either subclass today or would like to subclass or extend built-ins: how much is performance a motivation for extending a built-in versus completely wrapping or completely overriding every single method? And how much of it is the simplicity of being able to reuse an algorithm that you already know and that is already implemented?

MF: I think that if we want people to use something, anything that we add to language, whether that’s a feature or an affordance like this, they're not going to want to use it if it can't be made performant. So I think we should continuously be considering whether something can – and in practice will – be made performant. And if it cannot, that should be a disqualifying factor.

SYG: So if performance trumps expressivity, then I think that argues for the internal slots, at least on `this` — and if not also on arguments, then I think it makes sense to have a notion, as we do for arrays on an ad hoc basis, of sets versus set-likes, instead of assuming that actual sets have extension points. If, on the other hand, we decide expressivity should trump performance for adoption — and I hope that adoption is the end goal here — then that should guide our decision. Whether we think performance or expressivity is the thing that trumps the other might guide our design.

KG: I suspect that most people who are creating set-likes — either by being an actual Set and trying to override methods, or by just creating a thing that has the appropriate methods on it — would care more about expressivity than performance, because most code is just not on the hot path. It's always nice to have things be fast, but usually the thing you care about is that the code cleanly expresses your intent, and the performance of that particular piece of code doesn't end up mattering very much because it's not on the hot path. This is reflected in the code that I see in the wild: people write code so that it is clear rather than so that it is fast, most of the time, which I think is the right call most of the time, because most code is not on the hot path. I do think the most important case by far is the performance of the non-subclassed case: 99.9% of uses of Set.prototype.intersection are going to be on Sets, or at least are going to have a Set as the receiver. It is less clear to me that the argument will consistently be an actual built-in Set. I think it would be really a shame if you couldn't pass a set-like to intersection. You could always convert things to an actual Set first; it's just a shame to have to do that.
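A sketch of what "passing a set-like to intersection" could look like. The protocol here (`has` plus `size`) is an assumption about the design under discussion, not a settled API; the `evens` object is hypothetical.

```javascript
// A hypothetical set-like: anything with `has` and (where needed) `size`.
// Note this object could never be converted to a real Set by iterating it —
// it isn't iterable, and its membership is infinite.
const evens = {
  size: Infinity,
  has: (x) => typeof x === "number" && x % 2 === 0,
};

// A membership-query-only intersection can still consume it:
function intersect(set, setLike) {
  const out = new Set();
  for (const v of set) {
    if (setLike.has(v)) out.add(v);
  }
  return out;
}
```

For example, `intersect(new Set([1, 2, 3, 4]), evens)` produces a set containing 2 and 4 — something a convert-to-Set-first design could not express at all.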

SYG: That matches my intuition, at least for folks that would choose to extend built-ins: obviously they do it because they're missing something — something they want to express in a clear manner — and for those cases expressivity is more important than staying on the fast path. I think that points to a corollary: we should aim to make our standard library such that the base cases, the built-ins, are a good trade-off at some local maximum of usefulness and expressivity, so that there is less need for folks to want to extend. That's probably the first line of defense. And then, where we notice that extension points are often desired, like in sets here, we should add them sparingly. That's my only guidance so far. I'm kind of not seeing a broad design strategy we could take for all built-ins, other than: let's not add hooks that look stuff up on `this`.

KG: Yeah, that sounds right to me. You mentioned that we should try to have the hooks necessary for people to do extension. People are doing all sorts of weird shit, and I don't think we can reasonably have hooks for everything without making the hooks ridiculously general. We have some examples: a Set that, instead of merely silently doing nothing if you try to add something which already exists, will throw an error. Or this thing that's not even a map and just has a completely different interface, but has extended Map for some reason. Or a Set that does Unicode normalization on membership, or Unicode equivalence checking on membership… there's all sorts of weird things people do. I think trying to come up with an appropriate set of hooks that isn't just the full API is likely to be difficult.
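One of the "weird" extensions KG mentions, reconstructed as hypothetical user code (the class name and exact behavior are illustrative): a Set subclass that applies Unicode NFC normalization to string members.

```javascript
// A helper kept at module scope (not a private method) so it is usable even
// while the Set constructor runs, since the constructor calls this.add.
const norm = (v) => (typeof v === "string" ? v.normalize("NFC") : v);

class NFCSet extends Set {
  add(v) { return super.add(norm(v)); }
  has(v) { return super.has(norm(v)); }
  delete(v) { return super.delete(norm(v)); }
}

const s = new NFCSet();
s.add("e\u0301"); // "é" composed from "e" + combining acute accent
s.has("\u00E9");  // true — both spellings normalize to the same string
```

This kind of class only works if the built-in methods it *didn't* override (iteration, future union/intersection, etc.) route through its `has`/`add` — which is exactly the minimal-core question being debated.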

MM: I just want to say that there's a way to phase this that I think is robust: there's nothing that you can't do by wrapping, and anything that you do by wrapping remains correct if we then add construction-time hooks in the future. So a conservative way forward when introducing the abstraction is to not provide the construction-time hooks or any other built-in extension mechanism, but to just design it so that people who effectively extend by wrapping are having a good time. Then you can see later, based on experience, what things might be worth adding a construction-time hook for.
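The "extend by wrapping" pattern MM describes, as a minimal sketch (class name and chosen surface are illustrative): hold a real Set privately and expose only the operations you want. Unlike a subclass, the wrapper can genuinely enforce its invariant, because there is no base-class method for callers to reach around to.

```javascript
class ReadOnlySet {
  #inner;
  constructor(iterable) {
    this.#inner = new Set(iterable); // the real Set is encapsulated
  }
  has(v) { return this.#inner.has(v); }
  get size() { return this.#inner.size; }
  [Symbol.iterator]() { return this.#inner[Symbol.iterator](); }
  // Deliberately no add/delete/clear — the invariant actually holds.
}
```

Note that `Set.prototype.add.call(wrapper, x)` throws, since the wrapper has no `[[SetData]]` internal slot — the exact opposite of the subclass situation discussed later.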

ACE: This is just a response to what several people said — in particular what you said, Kevin, about whether union would take an iterable, before correcting yourself to say no, it should take a set-like.

KG: Sorry, intersection should take a set. Union can reasonably take an iterable but intersection, as I mentioned, the algorithm really needs to know size.

ACE: Right, but what I was going to ask was: considering that the constructor for Set takes an iterable, it seems like that would also be an okay overload to support, because that's what a lot of people might end up doing anyway. Like, they have some custom set that isn't specifically set-like — whatever that specifically means; I guess it has a `has` method and a `size` or `length`. It just seems like it would be okay for these methods to take an iterable instead, even if that just means the first step is consuming the iterable and putting it into a Set internally — which is just what someone would do on the outside by writing `new Set(mine)`. Performance-wise, maybe just getting that over and done with internally — consuming their iterable up front so that from that point on there are no more observable hooks — gives you a fast path with a really tight loop, rather than constantly reaching back out to a user-land hook, which could be crossing a C++–userland boundary. Even though it might be algorithmically slower, maybe in practice it's faster. And you don't need to introduce a new set-like protocol.

KG: Yeah, that's an interesting point. It is very hard for me not to think that big-O performance is the thing that matters, but maybe that's wrong. There are several methods where — so, what's an example — take Set.prototype.isSubsetOf: the obvious implementation of that never iterates the argument at all. You are only doing membership queries in the argument, and I think that, regardless of what we actually specify here, engines would want to have a fast path that does that, at least in the case when everything is built-in Sets, because it's just ridiculous to actually iterate the argument if you don't need to. It's potentially much slower, in the case that someone passes you a set-like, if you internally iterate the whole argument to create a Set. If someone passes you a large set, where `this` is the empty set or a set containing one element, it really seems a shame to have to do all of the work of iterating the argument. But maybe the JS/C++ boundary actually does dominate up until ridiculously large sets. I'm not sure.
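A sketch of the algorithm shape KG is describing (the function shape and protocol are assumptions for illustration, not the final API): isSubsetOf needs only `size` and membership queries on the argument, and never iterates it.

```javascript
function isSubsetOf(self, other) {
  // The size check is why the set-like protocol needs `size` at all:
  // it lets us answer "no" in O(1) without touching any elements.
  if (self.size > other.size) return false;
  for (const v of self) {
    if (!other.has(v)) return false; // membership query only — never iterate other
  }
  return true;
}
```

If `self` is empty or tiny, this does O(|self|) work no matter how large `other` is — which is exactly the behavior that eagerly copying `other` into a Set would destroy.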

MF: Okay, so we've talked a bunch about what our preferred extension strategies are, and I wanted to try to answer another one of the questions listed on one of the final slides: what do we do with Symbol.species? What's our strategy there? My reading of the room is that we generally agree that subclassing is not a preferred extension mechanism. There are some proponents of the passing-replacement-behaviour strategy, but it seemed like implementers were pretty strongly against a strategy like that — at least early feelings indicated that. So if that's the case, do we generally agree that we shouldn't design new APIs to respect Symbol.species? If there's anybody who doesn't feel that way, I'd like to hear from you.

BT: I know JHX has talked about this previously.

MF: Okay, we can follow up with JHX, but it sounds like nobody wants to speak up in support.

JHD: So yeah, my sense is that we all wish Symbol.species had never existed. We all hope we can get rid of it as much as possible, but understand that web compat might drastically limit that. And if we're in a world where we can't get rid of it any further than we already have, I don't think that we should make a blanket statement about what we will do with species moving forward, because in each case we're going to have to weigh what long-tail consistency will look like. In other words, if we're looking at something where we want to add 10 methods, and the end result will be that the preponderance of things doesn't care about species — and if you're using species for the other stuff, you're probably doing something wrong — then in that world, for that built-in, I would say "avoid species on all the new things". But if we're looking at something like Array, where everything is using species, it seems weird: there are already 20 or 30 array methods, whatever it is, and unless we're going to add 20 or 30 more that don't use species, my light intuition is that it would be better to keep using species on new Array methods. I think we kind of need to go ad hoc, based on our expectations of changes to a particular built-in.
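For reference, this is how Symbol.species already behaves on Array today: species-aware methods like `map` consult `this.constructor[Symbol.species]` to decide what to construct.

```javascript
// A subclass that uses the species hook to opt out of producing subclass
// instances from Array methods:
class MyArray extends Array {
  static get [Symbol.species]() {
    return Array; // derived arrays become plain Arrays
  }
}

const m = MyArray.of(1, 2, 3);
const doubled = m.map((x) => x * 2);
// `doubled` is a plain Array, not a MyArray, because of the species override.
```

The discussion above is about whether *new* methods on Array (and methods on Set/Map, which currently have no species-respecting methods at all) should keep participating in this mechanism.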

KG: So, would you say for things that don't currently have any methods which respect species - so I'm thinking specifically of map and set. You would be in favor of not respecting species on any new methods added there?

JHD: Yeah. I mean, I want to take a closer look at it beyond this early-morning rambled comment, but based on what you just said — yes, that seems like a good place to avoid species in new methods, because there's not a preponderance of consistency that we would need to look at.

KG: Yeah, that makes sense. This feels like it ends up being kind of silly to me, because were I designing the language right now, I think Map and Set are the only things I would put species on, because I wouldn't want someone to be making array subclasses but it's reasonable for map and sets classes. But yeah, it's one of many silly things in the language.

MF: I want to clarify for JHD and the room: KG's comment about Set and Map not having any methods that respect Symbol.species now is because there are no methods that create new Maps or Sets. It's not because we've decided in the past to add a method and have it not respect species. So it's not like there's precedent in either direction.

JHD: I'm sort of assuming that the direction moving forward is if we had a time machine, we would erase Species, but since we don't and assuming we can't erase it on a particular built-in, then we should take the absence of them as an opportunity to not add them. Does that help?

MF: Yeah, I understand.

JHX: Yeah, I think species is not very successful. I recently did some historical research about it, and I found that it comes from Smalltalk, and even in Smalltalk it has many problems. So, agreed that species has many problems — not only the performance problem. But I want to say that the motivation was to solve some common cases in subclassing: you want to create an instance of some class. The problem here is that if we remove it, we need to find some replacement that achieves that goal, or we're just making extension of built-ins impossible.

KG: Yeah, my position is basically that subclassing of built-ins in the sense of like actually subclassing is basically impossible anyway, or is possible only in very limited circumstances because your subclass doesn't actually get to enforce anything about the invariants that you might want it to. And at that point, you probably want to just have a wrapper anyway, and if you are going to have a wrapper, or if you are going to override all of the methods, then you can just create instances of yourself, and you don't need to worry about species because if you are implementing your own union, you can just determine whether you want to create a base set or a subclass for yourself and just invoke the constructor that you actually think is appropriate. You don't need a hook point in the built-in union because you are redefining union.
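KG's point can be sketched as follows. The `union` method is written out by hand here since the built-in Set method was still under discussion; the class name is illustrative.

```javascript
// A class that redefines `union` itself decides what to construct —
// no species hook is needed, because there is no built-in algorithm
// that needs to be told which constructor to use.
class MySet extends Set {
  union(other) {
    const out = new MySet(this); // deliberate choice: always produce a MySet
    for (const v of other) out.add(v);
    return out;
  }
}
```

For example, `new MySet([1, 2]).union([2, 3])` yields a `MySet` containing 1, 2, and 3 — the subclass picked its own constructor directly instead of relying on a species lookup inside a built-in `union`.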

SYG: I don't want to keep talking about performance if there are other topics.

KM: How often do we want subclassing? I always found it kind of strange that you could — especially with regexes — change a very low-level part of how it all worked. But how often do people do that and expect the outer functions to still work, if that makes sense? It seems to me that that kind of subclassing is just sort of flawed, even in other languages. If I changed something like that in C++, I would not expect the outer thing to work without changing how the actual outer thing works.

KG: Effectively, what do you mean by that?

KM: Like, if I wanted to change how, say, match works in some very fundamental way, I'd have to change something about how match works subtly, at least, to get it to work with your new concept — if that makes sense. It's not like adding stuff on top; it's changing a very small component individually without changing all the high-level operations of the class as well.

KG: Well, I can say that I have definitely seen multiple implementations of like frozen set that just override add, delete, and clear to throw. So I can say that people in JavaScript at least try to do that. It just like, it fundamentally doesn't work because someone else can always just call Set.prototype.add.
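The bypass KG describes, as hypothetical user code: overriding the mutators to throw does not actually make the set immutable, because the base method still works on the instance.

```javascript
class FrozenSet extends Set {
  add() { throw new TypeError("frozen"); }
  delete() { throw new TypeError("frozen"); }
  clear() { throw new TypeError("frozen"); }
}

// Note: even passing an iterable to the constructor would throw here,
// because the Set constructor itself looks up and calls this.add.
const f = new FrozenSet();

// f.add(1) throws as intended, but the built-in method reaches the
// [[SetData]] internal slot directly and ignores the override entirely:
Set.prototype.add.call(f, 1);
// f.has(1) is now true — the "frozen" invariant was never enforceable.
```

This is why a wrapper (which has no internal slot to reach into) is the only way to actually enforce such an invariant.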

KM: Sure, but those are sort of the end-level abstraction, I guess, in a lot of ways. It's more when you change how, like, add works, or get works — do you expect iteration to still work when you do that? Supposing that iteration used the get operation on a set: there's a pretty decent chance that, when I'm writing that version, I would sort of expect that iteration wouldn't quite work the way it used to and would have weird issues. I don't know if what I'm saying makes sense.

KG: Yes. This is more to the minimal core versus not minimal core Design. Like you're saying, in other languages do people have this intuition that they can override piecemeal?

KM: Yes, I think so — sorry about that. Yeah, I would expect that in most languages you don't assume that you can modify method M and still have method X work. Some languages do make that work, but in general that's difficult to make work consistently, because people can modify M in wildly different ways.

KG: Yeah, that sounds right to me.

KM: Okay, then my topic is already covered.

SYG: I think what KM is saying is: for things for which there is at least some semblance of an algebraic understanding, like Set, you maybe think that a minimal core ought not break the outer methods, as he said. But for things where the "algebraic understanding" is literally just the instructions that the default thing executes, and the invariants are just the properties that happen to hold because of those instructions, I don't think there is any reasonable expectation that you can replace, say, `exec` and have other stuff work, other than by trial and error. So that, to me, is the design criterion that separates low-level and high-level: there is at least some broad agreement and understanding of what, algebraically, a Map and a Set is. For a regular expression engine and the like, I don't think we can communicate that in a clear enough way for it to be hookable.

KM: I think I would largely agree with that. That kind of makes sense. Yeah, if you're trying to expose a low-level component of how JavaScript works, it's going to be a lot harder for someone to plug and play in the middle.

ACE: This clearly won't be news to anyone on the call: whenever I've wanted to subclass almost anything, it's only because I want the nice chaining of having things on the prototype. Whereas if we get pipe, the ergonomics of that would, for a lot of the cases, replace my need to subclass. I could just keep using the original classes, and when I want to add functionality, it's just a free function, not directly on the prototype.

JHX: Yeah, I think there are two problems in subclassing. One is adding some new features — this could be solved by call-this or extensions. But there is still another problem: sometimes you want to inherit the methods but modify some part of them — actually, species is such a case. And in that case those proposals cannot help fully; they could help with some parts, but cannot solve it perfectly.

KG: Yeah, I agree. There's this distinction between trying to add new features and trying to modify the existing behavior. My feeling is that trying to modify the existing behavior by subclassing is just a lost cause. The fact that someone can just call Set.prototype.add - and if we get "call this" syntax or something like that, it becomes even easier - it means that that's not going to respect your modifications. So, whatever change in behavior you were hoping your subclass grants, it actually doesn't have that, because someone can just bypass your attempt to change the built in behavior. So my feeling is that if you want to change the behavior, you are kind of - like already regardless of what affordances we make you are kind of forced to create a wrapper and delegate to something you hold internally and then you get to actually enforce things.

JHX: In most cases I think it could work. But again, in the species case it's hard to use a wrapper. I guess the species case is much like customizing a very small part of a method, and the best way I can imagine is to provide some method factory which could generate the new method.

KG: Yeah, or just say that the language can't help you and you should just provide your own implementation if you want to have that sort of custom behavior.

KG: I see the queue is empty. We didn't hope to come to hard conclusions on everything, but it does sound like there is a feeling, especially from implementations, that there's not much desire to support this sort of minimal-core thing where the methods call each other, at least on `this`. And MM is very strongly opposed to a design which refers to internal slots on the argument without a fallback, because it would make it impossible to have a proxy for a set, or a membrane, that would actually behave like a set — if you passed it to another set's intersection method, it would just throw. So I am leaning towards trying to bring specifically Set methods that use the internal slot on `this` and use this sort of set-like protocol on their arguments — and maybe, as ACE was saying, not using the set-like protocol directly but instead using iteration; I'm not sure, I think I'm still leaning towards the set-like protocol. That probably leads to other questions, such as not respecting Symbol.species on new Set methods. Obviously, we can fight about all of these things in the course of any such proposal, but that is the sentiment I am getting right now. And if anyone feels strongly that they would like to go a different direction, it would be helpful to talk about that now.

MF: Barring anybody speaking up about that. I feel like we possibly unblocked at least the Set methods proposal for the most part to make progress on making some choices here, and I think that I'd like to recommend a next step here, for KG and I to go back and put together some much more concrete design philosophies that we can spell out in an upcoming presentation, where we could actually ask for consensus on these kinds of things rather than just getting a feel.

KG: Well, I don't know that I want to try to come up with hard and fast rules because I can think about what makes sense for Set, but as PFC already mentioned, the thing that I said about using the internal slots on this is not how Temporal works, and Temporal is like kind of a different thing than Set, so I really don't want to try to come up with a design philosophy that covers both of those cases.

MF: Yeah. I just thought we could do as much as we thought was reasonable. Like guidelines.

KG: I'm hoping to just instead bring set methods and then those can be precedent.

KM: Is it possible to use the set's internal data slot, and then, if the thing is not a real Set — if it's some special set-like — go the other way?

KG: Yes, it is. I should mention there is a kind of a funny consequence of this, which is that if you are trying to write a subclass this doesn't get you basically anything at all, because your subclass has the internal slots by definition and therefore this implementation is going to reach into the internal slot on your instance and bypass whatever - like if you had attempted to override the iteration protocol.

KM: right, right, they basically have to proxy if you want a special behavior.

KG: Yeah. So looking at the internal slot on the receiver, I don't think, buys you anything. Looking at the internal slot on the argument I think is a more interesting case. MM has previously said he'd be okay with that because, with the fallback — if you fall back to treating it as a whatever-like when the internal slot is absent — it would still work: if you pass a proxy for a set, it would fail the internal slot check, but we would just invoke all of its methods and they would do all the right things. Right?

KM: I was coming at it from a performance perspective. On the performance half, right, you would just have your fast path: are both things Sets? Then do your special magic thing — whatever hacks the engine can do to look at the internal data structures of how it's stored. And if it's the other thing, fall back to the slow path, which I think is kind of what SYG was talking about before.

KG: So actually, I guess I have a question for engines. How much better is that than this sort of thing where you would you eagerly get the methods, or some subset of the methods, whichever subset is relevant to your implementation, and then like the spec algorithm would say we call those methods that we eagerly get but an implementation could say, oh I see this is the built-in set prototype methods. Like, now that I have done these two user observable calls to get them, I see this is the built-in ones and I'm just going to use the internal slots from now on. How much difference does that end up making?
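The hybrid design KG is asking about, sketched in user-land terms. Real engines would check the internal slot directly; here the brand check is simulated with a try/catch, since JS code cannot inspect internal slots. Function names are illustrative.

```javascript
// Brand check: Set.prototype.has throws unless its receiver actually has
// the Set internal slot, so a try/catch distinguishes real Sets from
// set-likes and proxies-for-Sets.
function isRealSet(o) {
  try {
    Set.prototype.has.call(o, undefined);
    return true;
  } catch {
    return false;
  }
}

function setHas(other, v) {
  if (isRealSet(other)) {
    // Fast path: no observable lookups at all — go straight to the built-in.
    return Set.prototype.has.call(other, v);
  }
  // Slow path: one eager, observable lookup, then invoke the captured method.
  const has = other.has;
  return Boolean(Reflect.apply(has, other, [v]));
}
```

KG's question is whether the alternative — always doing the eager lookups, and letting the engine notice afterwards that it got the built-in methods — costs meaningfully more than branching on the internal slot first, as above.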

KM: It makes a big difference for us, at least, in C++ code. For our internal code that's written in JavaScript it would matter less, though it can still be helpful. Typically, once you get to the optimizing compiler, it will just assume the property access is constant, and we'll just have verification — if someone ever changes it, we'll just throw away the code. But in C++ you can't just throw away your code; it's compiled to binary. So having it prefetched can be helpful. It's also somewhat better in theory for the interpreter, because you don't have to do the property access in the body of the loop, things like that.

KM: Yeah, if you have the property access in the body of the loop, prefetching can help. It's worse for an interpreter, but it probably doesn't matter once you get to an optimizing compiler, unless you're actively changing the method — but I don't think people really do that. That would be kind of strange.

SYG: And the fast-path equivalent, the way you laid it out, is just checking whether the internal slot exists, or whether a property lookup yields some known internal thing, right? That's basically the difference you laid out.

KG: Yes, so my question is between the two designs of either, a) Just always eagerly look up the methods and invoke them versus b) first check to see if there is an internal slot as on the left here. And in the case that the internal slot is there, don't do any lookups and otherwise do the eager lookups exactly as in the previous case. So the only difference is you have to do the lookups and then check that those are built-ins and then you can fall back to the fast path versus just doing the path directly. Like is there much difference between those two?

KM: I mean, you're going to do the brand check no matter what — whatever the internal equivalent of that is for us — because you need to check that you're being called with the right receiver anyway. But practically, no: as long as you prefetch, it is probably about the same, I would guess. Maybe a little bit worse, but I don't think it's going to matter, given that you're going to do this loop over all the elements anyway.

MM: I think what Kevin is suggesting — sampling the methods and then not fetching them inside the loop — is a brilliant option that never occurred to me. I love it. The question I have for implementers is: the consequence of not fetching them each time is, as I think both KM and SYG have mentioned, that you're just using the normal fast-path logic that all these engines have anyway, where if the method changes, you bail out of the optimized JIT-compiled path and fall back to the interpreter until you have a new fast path. I understand that. My question is: how much overhead is there in running a loop that's compiled so that you can bail on any iteration, versus having the fast path be a loop that doesn't have to check for bailing? Because I believe that, since the instance you're accessing is constant, whether it has internal slots of the right kind is constant, and if you pre-sample the methods, they're not going to change out from under you. Having done all of that, I think the inner loop could be JIT-compiled without the need for a conditional enabling it to bail — but I don't know if that's a significant performance difference.

SYG: Before that question, I want to clarify KG's question about the two implementation techniques. What KM said is the correct thing: checking for and caching the method is just strictly more work. They're probably about the same because it's not a lot more work, but it is strictly more work than just checking the internal slot. It's not enough that you just got passed the right prototype function, right? You still have to have gotten passed the actual representation of the object that you want.

MM: So you just need to check both of them, but you don't have to check it per iteration of the loop. Is that correct?

SYG: That's correct, but that wasn't the comparison that Kevin set up. It wasn't that you check per iteration of the loop. It was either you check only the internal slot — which is what I understood A to be — or, B, you don't check the internal slot but instead cache the methods.

KG: You would definitely have to be checking the internal slot either way, so this thing I have on screen is therefore strictly more work, yes. My question is: does it feel like that's going to matter in practice? These extra two property lookups?

SYG: I doubt it.

KM: I mean, you could make a micro-benchmark that just runs it on an empty set over and over again, 10,000 times, and probably show a difference. But no, that's not going to matter much.

MM: So the reason why this issue matters — whether the conditional that says we have to bail because some method changed adds significant overhead in these optimized JIT implementations — is this: if the overhead of being able to bail is not significant for the times when you don't actually bail, then we can just specify this the normal way, without mentioning the internal slot on the argument, and then it's just up to the fast path to notice that it can use the internal slot if it's there and if all of the other conditions are satisfied, bailing immediately on what it's doing with the internal slot if a method changes. That's one option. The other option, if the first one has significant overhead, is the one KG suggested: you again specify it without mentioning the internal slot, just specifying that the methods are pre-sampled and then reused. Then an engine optimizing by using the internal slot — if it's there and if the methods are the expected built-in methods — can do so without needing to bail, and it's still not observably different from what was specified, which did not mention the internal slot.

KM: So for the bailing one, where you bail in the middle, there are two different measures. One is memory overhead. At least for the way we've done those kinds of things in the past, we have a thing called a watchpoint that says: if some property on an object changes, notify some object that tells all the listeners about the change. That means every single version of this function in every single realm has a thing that says "notify me", so it uses more memory — there's a memory overhead there. In terms of actual throughput, versus set-up-once-and-then-go, with bailing out in the middle you have an extra branch in the middle of your loop, so that could possibly be worse, but it probably doesn't matter. It's also just more work to implement. And for JIT code it's kind of the same thing versus C++ code, but generally it is easier to prefetch the function, and it does make the implementation a fair bit simpler. It's not impossible, but the second case is definitely nicer. I don't know how often this is going to be run-once code — less likely, I guess. But for a lot of other things, like actual language features in the syntax, it matters a lot more that you prefetch these types of things — for-of, for example; we changed that.

KG: Yeah, we've moved in that direction in a couple of places. So I think prefetching is pretty nice.

MM: So the conclusion I've reached from this discussion, which I'd like to propose, is that we don't just do this for Set — Set was presented here as an example, a precedent for other practices. I think the protocol-based approach on `other`, where the methods that constitute the protocol are prefetched, is a very nice option for only using the protocol on `other` [arguments]. It means that it's highly optimizable, in the case where the built-in case is worth optimizing, and it's less work to do that optimization. And of course you can always start out not doing that optimization — just using the protocol on `other`, but still prefetching it, because that's observable.

KM: Correct. I mean, every engine has a semantics of getting a thing and then, as a second step, calling it, and those are independent as far as I'm aware. So that type of thing is really easy to do. But when you have to verify some condition on every iteration of the loop in a fast way that doesn't require explicit runtime checking, it's a little bit harder.

MM: So just to be very clear: you're in agreement that pre-sampling — doing it per-call based rather than internal-slot based in the specification, but having the specification pre-sample the methods that constitute the protocol — is a good pattern?

KM: I think that's fine, yeah. That seems fine to me. It seems like it has a nice intersection of overridability with the performance trade-off.

SYG: That depends on what the pre-sampled method does. If we adhere to a design philosophy where the pre-sampled methods do not themselves delegate to other methods, and only use internal slots, that's fine.

KM: Yes, that's true — assuming that the internal method doesn't also consult `other`. If it's transitive, it's going to be gross.

KG: Yeah, so that's a very strong reason to not do that lookup on `this`, because it breaks the whole edifice. Doing it on `other` is fine because the further lookups that those methods do will be on their receiver rather than on an argument, so you only have one layer. Sorry, I don't know if that's clear at all.

MM: Can I try to restate to see if I got it? If we did have some supplemental methods — some non-core methods that call other methods — and an algorithm like this, one that operates on an argument, were specified to call a supplemental method, then the optimization benefit of pre-caching would be gone. So one way to deal with that is to adopt a further constraint on writing algorithms that operate on other arguments: those algorithms, specified in terms of the protocol of `other`, should only use the part of the protocol that, for the built-in, consists of the core methods rather than the supplemental methods.

KG: Yes, although an easier thing to do is just to say that every method only ever consults the internal slot on `this`.
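KG's rule — internal slot on `this`, protocol lookups only on the argument — can be sketched like this (hypothetical classes; a private field simulates the internal slot):

```javascript
// Hypothetical illustration: the base method reads `this` via the private
// field (simulating an internal slot) but does protocol lookups on the
// argument. An override of keys() therefore only matters when the object
// is passed as the argument, never when it is the receiver.
class BaseSet {
  #data;
  constructor(items = []) { this.#data = new Set(items); }
  keys() { return this.#data.keys(); }
  union(other) {
    const keys = other.keys;                // protocol: looked up on the argument
    const result = new BaseSet(this.#data); // receiver: internal slot only
    for (const v of keys.call(other)) result.#data.add(v);
    return result;
  }
}

class LoudSet extends BaseSet {
  keys() { return ["overridden"][Symbol.iterator](); }
}
```

With this layering, `new LoudSet([1]).union(new BaseSet([2]))` ignores the override (the receiver goes through the slot), while `new BaseSet([1]).union(new LoudSet([2]))` honors it (the argument goes through the protocol) — so there is only ever one layer of lookups.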

MM: Agreed. I'm not a big advocate of the supplemental methods, but this decision doesn't require there to be none; it just requires these other algorithms not to use the supplemental methods.

SYG: [on performance] Okay, we've been talking about performance kind of interchangeably between different levels of performance in production JS VMs. I think for language features there are at least two salient levels. One is the initially shipped "don't be stupidly slow" kind of performance. The other is, once we reach the optimizing tier, the kind of really optimized code that can be generated by a full compiler pipeline with deep inlining and so on. And for language features — for the goal of performance not being an impediment to adoption of new language features and new methods and new classes and so on — I think I care more about the first level, the "don't be stupidly slow" kind of performance. The optimization techniques we just talked about — prefetching and caching, and then letting the optimizer make a decision — help not just at the JIT level but also at the first level, because it is not the case, at least in V8, that only the JIT has fast and slow paths. Our regular runtime functions for the built-ins also have fast and slow paths. There are just different copies of the built-ins for, say, arrays: if you got passed an array-like versus an actual array, or furthermore a homogeneous array with only ints versus a homogeneous array with only doubles, those have their own separate fast paths in just the built-ins, without ever reaching the JIT. Stopping the proliferation of so many of those would be nice. I don't really know how we reach that goal, but I would like that, when we discuss stuff like this in the future, where we do care about performance in the light of extensions, we don't over-index too much on what a super smart compiler could do and whether that is sufficient. Does that make sense? It's not a super coherent concern to air.

KG: Yes, we will try to ensure that things are optimizable without very much work — or rather, that you can do something reasonable without having to be very clever about it. Is that the thing you want us to take away, or is there something more?

SYG: No. Yeah, that sounds good.

KG: All right, well, we are basically at time. We have another minute or so if anyone else has something to say, but I think this was a productive discussion, at least for me. I am hoping that MF and I can come back with either a concrete proposal for Set methods following this discussion, or, if we think we can do it, some broader principles that we could write down. I'm really excited to have a Set that supports union.

RPR: Thank you, KG and MF. All right then, that concludes the morning session — the morning according to the schedule; maybe afternoon or evening depending on where you are in the world. For this afternoon, JSC has kindly agreed to give up a little bit of time so that CHG can bring back decorators over a few more small issues. So we're starting off with JSC on the dataflow discussion, then decorators, and then finally types as comments. For now, please make your way to the Hubs 3D world where everyone chats. Thanks to ACE and RGN for taking notes this morning. We will resume in exactly one hour.

Holistic discussion of TC39 dataflow proposals

Presenter: J.S.Choi (JSC)

JSC: All right, good morning or afternoon, everyone. I'm JSC, and I'm here with a return of an item that we talked about last plenary: trying to view five proposals that overlap each other in a holistic way, strategically. We've had two meetings — a plenary meeting and a post-plenary meeting — and this is a redux of that. It's an open-ended discussion. I'd like to quickly go through this article; I linked it in Matrix, but I can link it again right now. I'm going to go through it fairly quickly to give as much time as possible to discussion, since I gave up some time. So, here we go. At the last plenary, MM mentioned that the proposal process has a pathology: it emphasizes new individual proposals and de-emphasizes their cross-cutting concerns, how they overlap and such. So it's tough to get a unified language unless we actively fight against that. This is an effort to view this space holistically, to do the sort of holistic strategizing that was happening pre-ES6. We've got five proposals here; you might remember this diagram. There have been some changes — in particular, bind-this has become call-this. The bind-this operator down here used to also support function binding. It's an infix operator whose left-hand side is some object and whose right-hand side is some function, and it tries to change the receiver of that function. It used to support creating bound functions in addition to calling the bound function with arguments. Now it's basically just a version of `.call`; we got rid of function binding. So it actually is functionally a subset of extensions — extensions being this proposal that touches on a bunch of things.

JSC: Also, although I've kept this part of the diagram within the pipe operator, based on some conversations I had with some of the representatives, I'm starting to consider the call-this part as not really overlapping with pipe, insofar as the pipe version of `.call` expressions is really clunky and unreadable — I'll talk about that. But to zoom in once again: call-this has dropped creating bound functions, in that it currently only supports immediately calling functions with different `this` arguments. And although I'm keeping this section within the pipe operator, I crossed it out here. I don't think that writing the pipe version — `owner.method.call` with the topic and arguments — is really that much of an improvement; it improves the word order, but it's very clunky. Other than that, basically everything else is the same. And I'll go over the results of the ad hoc meeting at the end of this article, since a lot of the plenary wasn't there.

JSC: I'm not really going to review the proposals themselves; you can read the explainers, and there's also a summary in the article itself. They overlap in different ways. I'm going to focus more broadly, at a high level, on how they approach paradigms. You can see I added color coding to this diagram here, and I'm going to use that color coding consistently in the rest of the article: red indicates APIs that don't use the `this` binding — "functional" APIs — and blue indicates APIs that use the `this` binding — "object-oriented" APIs, and I include `.call` in this.

JSC: Just to go broadly over what the point of all this is: I'm going to use the term "dataflow", which is in the title of this item. It's basically the idea that you transform some raw data — it can be really any value — with a sequence of steps, and they can be function calls, method calls, whatever. These five proposals try to improve dataflow in different ways, in order to make it more natural, more readable, more linear, more fluent. I give this example here: in the status quo we already have a way to create fluent dataflow, using prototype-based method chains. Here we have a nice linear dataflow — kitchen, get fridge, find with predicate, count, to string — and you can see the numbers at the top there. In contrast, if you use any other kind of API, in particular function calls that don't use `this`, you get deeply nested expressions that result in a zig-zagging reading order; you can just follow the numbers up here. There's a term for this, "fluent interfaces" — there's even a Wikipedia article about it — and right now it's only available in the status quo with prototype-based chains. The idea behind something like the pipe operator is to make it possible to express these fluent dataflows with other sorts of APIs, not just prototype-based method chains where the method already belongs to the prototype. Now, reading order isn't the only factor in dataflow fluency. There's also excessive boilerplate, and by boilerplate I mean visual noise — stuff that isn't essential to understanding the meaning and just gets in the way. Hypothetically, for instance, the pipe operator wouldn't improve that original prototype-based method chain: if you try to use the pipe operator on each of these steps, it's a lot wordier; it's worse.
To give another example, `.call` is very common in object-oriented code — if you follow that link, you can see our dataset and our results; you can reproduce them yourselves. And if we try to improve this `.call` expression using the pipe operator, it arguably gets worse, which is why it is struck out: `find.call` piped with `.call` on the topic and the predicate is arguably even worse than using `find.call` up here. The word order gets improved, but there's just excessive boilerplate with the `.call(topic, ...)` and so on. That's why a separate operator may be needed here, to reduce the excessive boilerplate. As a reminder, `.call` is a very common operation in the language. The pipe champion group has really been thinking about whether it's possible to modify the pipe operator to address `.call`'s clunkiness without compromising the pipe use cases, and we really haven't figured out any way other than making what's essentially a new operator.
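Both points — the zig-zag reading order of nested calls and `.call`'s word order — can be illustrated with ordinary JavaScript (all the names here are invented for illustration, not taken from the slides):

```javascript
// 1) Reading order: a fluent prototype chain reads top to bottom,
//    while equivalent nested free-function calls read inside out.
const words = ["ok", "three", "four"];

const chained = words
  .filter((w) => w.length > 3)   // step 1
  .map((w) => w.toUpperCase())   // step 2
  .join(", ");                   // step 3

const filter = (f, xs) => xs.filter(f);
const map = (f, xs) => xs.map(f);
const join = (sep, xs) => xs.join(sep);
const nested = join(", ",                    // step 3 (read last, written first)
  map((w) => w.toUpperCase(),                // step 2
    filter((w) => w.length > 3, words)));    // step 1

// 2) .call word order: the function comes first and the receiver is
//    buried in the argument list, so the dataflow reads out of order.
function describe(prefix) {
  return `${prefix} ${this.name}`;
}
const label = describe.call({ name: "widget" }, "a"); // "a widget"
```

The pipe operator addresses (1) by letting each step read in order; the argument above is that it does not meaningfully help (2).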

JSC: Anyway, there's one more thing with regards to clunkiness: we can express dataflows as a series of temporary variables, which is totally fine, but excessive use of temporary variables is pretty clunky too and can introduce a lot of redundant visual noise. There's a reason why prototype-based method chaining was so popular with fluent interfaces. So we get to what I would argue is an ongoing ecosystem schism in the realm of dataflow, between object-oriented APIs that use `this`-based functions and functional APIs that use non-`this`-based functions. You can further split both of these into several "sub-paradigms": using functions from prototype chains versus free `this`-using functions; and for the functional paradigm, depending on the variety of the function — whether they're curried unary functions or n-ary functions, and whether the inputs of interest we're flowing data through are zeroth arguments or last arguments. These paradigms have different trade-offs, but right now developers have to choose between APIs, and interoperability isn't that good. The trade-offs fall under two major factors: dataflow fluency, which I mentioned earlier, and module splitting. Module splitting is a very powerful force in today's ecosystem, due to the ongoing drive to improve performance — I'll talk about that in a little bit. But basically, right now, fluent dataflow is only possible in the status quo with prototype-based object-oriented flows. It's not supported with free object-oriented functions, and it's not possible with non-`this`-using function calls — but module splitting is possible with the latter two. So developers have to choose between having fluent dataflow or module splitting. This interoperability problem gets into a virality problem.
And again, this is ongoing right now in the status quo. By virality, I mean something that WH mentioned in the previous ad hoc meeting, which I thought was a great framework: when you have two different syntaxes that can do the same thing — more than one way to do it — interoperability determines how viral one choice becomes over the other. If it's easier to work with one syntax from within that same syntax, then that's going to encourage new APIs to work only with that syntax and not the other. And I would include toolchains in this too — things like webpack and Rollup, stuff that does tree shaking and module bundling and splitting. Today, module bundling and splitting is an extremely powerful force in the ecosystem, and it's really only possible with free functions. It used to be that dataflow fluency was maybe the primary driver, and that's why we saw many object-oriented APIs based on prototype chaining — think jQuery, think Mocha, think even the DOM. But code payload weight has become so powerful a force, with esbuild, Rollup, and webpack, that major APIs are switching from prototype-based OO paradigms to functional paradigms, only because of the tree-shakeability. A major example is Firebase's JavaScript SDK. They were trying to have it both ways — getting modularity by monkey-patching methods into prototypes, and allowing people to use prototype-based method chains only if they imported certain modules — but they gave up on those side-effecting imports and switched over wholesale to functional-based APIs, giving up on dataflow fluency.
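The trade-off can be sketched with a hypothetical library (the names and module layout here are invented for illustration):

```javascript
// OO style: methods live on the prototype, so a bundler can't easily
// prove rarelyUsed() is dead code — it ships with every build that
// imports the class, whether or not anyone calls it.
class Box {
  constructor(value) { this.value = value; }
  double() { return new Box(this.value * 2); }      // fluent: box.double()
  rarelyUsed() { return new Box(-this.value); }     // shipped even if unused
}

// Functional style: each operation is an independent export, so an
// unimported function is statically removable ("tree-shakeable") —
// but the fluent left-to-right chain is lost.
const double = (box) => new Box(box.value * 2);
const rarelyUsed = (box) => new Box(-box.value);    // droppable if unimported
```

This is roughly the shape of the Firebase SDK migration described above: the same operations, moved off the prototype and into free functions, trading fluency for splittability.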

JSC: So this is the sort of current viral ecosystem schism I'm talking about. There's a gradual shift towards functional APIs, only because toolchains work better with them — but in switching over, libraries are giving up fluent dataflow too. The interoperability between these two paradigms is poor. There are similar schisms between the sub-paradigms I mentioned earlier; I won't get into that, but you can read about it. There's an overall strategic question of whether we're okay with this virality, with these paradigm shifts. JavaScript has always had more than one way to do it, but if those ways have clear trade-offs, and nowadays major toolchains only work well with one way, are we okay with that? One could argue that if constraints are a source of creativity and of thinking hard about problems, then maybe it's better to have constraints and have people think within one preferred paradigm. But I would argue that JavaScript is a multi-paradigm language, its core design already combining a bunch of paradigms, including its core library, which does use object-oriented principles. I would say that, if only to work with legacy code and the core API, people will continue to need to interoperate with object-oriented APIs — and really, I would argue, they also should be able to keep creating new object-oriented APIs without giving up the module splitting or tree shaking that's such a powerful pressure on the ecosystem today. At the same time, people who prefer working with non-`this`-based functional APIs should be able to perform fluent dataflow and mix it with object-oriented dataflows.

JSC: So, with regards to that, how do we bridge the schism? I would argue that, in order to bridge it, we need to bring forward just enough new features to allow people using either paradigm to freely mix in APIs from the other paradigm in fluent dataflow — and, ideally, allowing them to use tree-shakeable functions. Those are the two big pressures on the ecosystem, and the trade-off right now forces people to choose between one and the other. This table here shows various ways to mix and match proposals. The status quo is at the top; the object-oriented paradigm is on the left, and the functional paradigm is on the right. There are different ways to combine them — for instance, the pipe operator making function calls more fluent, and the call-this operator making `.call` fluent while also allowing tree shaking.

JSC: So there are different ways we can mix and match them. That brings up the question: should we bring in only one proposal? Is two the right amount? Even three? For instance, this row down here would require bringing in three proposals: `Function.pipe` doesn't really work for these two columns of the paradigm without a partial-function-application syntax. And extensions, down here, has some funny things when it comes to how it handles non-`this`-using functions — I'll get into that in a little bit. There really are different ways of mixing, if you want to mix. And if you want to bring in only one, then which one? For instance, suppose we only brought in the pipe operator and not call-this. Again, I argued earlier that the pipe operator doesn't improve `.call`, which means it doesn't improve dataflow for free `this`-using functions, which means it doesn't improve tree shaking, which means the pressures on the object-oriented paradigm in the ecosystem will not be improved. This section just talks more about why I'm arguing that call-this and the pipe operator are complementary and do not overlap: the pipe operator only really handles the functional paradigm; it does not improve the object-oriented paradigm at all. I'm going to skip through that. The rest is basically what I mentioned: the extensions syntax addresses both object-oriented and functional paradigms, but the functional paradigm is addressed only in a special scenario — its ternary form — where the function being called has to belong to an owner object, the owner object can't be a constructor, and the input of interest being flowed through has to be the zeroth argument. It's a very specific scenario.

JSC: There is the concern that, even though we're having an ongoing ecosystem schism, ratifying some of these proposals may worsen the schism — for instance, if we ratify call-this alone. The risk of the schism, going back to WH's framework earlier, depends on paradigm interoperability and also on the balance between the pressures on API designers. For instance, if we only did call-this and we didn't do pipe, it would encourage developers to use free `this`-using functions, which would be tree-shakeable, but it wouldn't improve dataflow fluency for functional, non-`this`-using APIs. So interoperability in dataflow fluency isn't improved, even though it would be improved with ecosystem tools like tree shakers for object-oriented APIs. It may well worsen the schism we're seeing: developers would be pushed towards object-oriented dataflows and away from functional dataflows, because functional dataflows wouldn't have that fluency. Likewise with the pipe operator alone: it would improve functional dataflow fluency, but it wouldn't improve interoperability between object-oriented dataflows and tools like today's tree shakers, because it wouldn't improve `.call`'s clunkiness. So using the pipe operator alone may worsen the ecosystem schism and accelerate the transition of APIs to functional APIs, because that would continue to be the only fluent way to actually have tree-shakeable functions. If we did both call-this and the pipe operator, it is possible that the pressures on the ecosystem between the two paradigms would equalize: you would have fluent interoperability between `.call`-using functions, regular prototype-based methods, and non-`this`-using functional APIs — having both tree-shakeability and dataflow fluency on both sides of the spectrum.

JSC: Similarly for the partial-function-application proposal (PFA) with `Function.pipe`: if we only did that, it would obviously also accelerate the transition to functional dataflows. If, hypothetically, we did extensions alone, it's tough to predict, but it may well worsen the schism alone, in that it would prevent interoperability with functional APIs that do not use the zeroth argument as their main dataflow input, and it would demand that writers of functional APIs not use constructors as owner objects — and they may not be able to use free, non-`this`-using functions either. Extensions has some metaprogramming machinery that hypothetically could solve all this using runtime dispatch. I'll briefly touch on that, but I'm not really going to get into it — I think it would probably be much less performant.

JSC: So, runtime cost. There's one last way to divide these proposals, and that's whether they're "zero-cost" abstractions — abstractions with no memory or time overhead during runtime — or whether they involve runtime allocation of objects or dynamic runtime type dispatch. Pipe and call-this would be zero-cost abstractions, whereas `Function.pipe`, if you use it with partial function application, requires callback construction, which I believe is one reason why implementers were concerned about PFA syntax. Likewise, extensions involves dynamic type dispatch. It's kind of complicated, but in particular it has both a binary and a ternary operator, and the ternary operator's behavior depends on whether its middle operand is a constructor or not at runtime: depending on that, it uses `.call` on a prototype method, or it assumes it's a static, non-`this`-using function. It also has a runtime metaprogramming system based on a well-known symbol that affects the behavior of both the binary and ternary operators. So, hypothetically, it may be a little more difficult to statically analyze; certainly a parser wouldn't be able to know — you would have to do some type analysis.
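A userland sketch of that distinction (`pipe` here is a hypothetical helper written for illustration, not the proposed `Function.pipe` built-in):

```javascript
// A function-based pipeline allocates closures at runtime, where the
// pipe *operator* would desugar to a plain nested expression with no
// allocation — that's the "zero-cost" distinction being drawn above.
const pipe = (...fns) => (x) => fns.reduce((acc, f) => f(acc), x);

// Partial application via arrows: each call to addTax allocates a
// fresh closure, as PFA would under the hood.
const addTax = (rate) => (n) => n * (1 + rate);

const total = pipe(addTax(0.1), Math.round)(100);
```

A syntactic pipe with a topic reference would instead compile to something like `Math.round(addTax(0.1)(100))` with no intermediate pipeline function.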

JSC: The last part of this article is the appendix. A lot of it is drawn from the ad hoc post-plenary meeting, although one item is drawn from the prior plenary, and there are two statements I'd like to draw attention to in particular. One representative has a hard requirement that the pipe operator be bundled together with call-this, or else that representative would block the pipe operator's advancement — they conditionally approved pipe's advance to stage one, five or so years ago, on the condition that a call-this-like operator (it was then the double-bind operator) would advance. So there's that issue. At the same time, another representative is reasonably reluctant to have more than one syntactic dataflow proposal advance, and that representative is most positive about the pipe operator. Reconciling these two positions is a big issue, especially with regards to whether the pipe operator is hard-coupled to call-this or something like that. And then there were some other findings from the ad hoc post-plenary meeting — there were about six representatives there. We discussed whether overlap is intrinsically bad: it's a little undesirable; it's okay to have some, but too much is bad. TMTOWTDI — "there's more than one way to do it" — continues to be a core part of the language, but we have to keep looking at each situation, because we don't want too much of it. Extensions and call-this are still mutually exclusive, although you could say that call-this is future-compatible with extensions.

JSC: Extensions does continue to polarize the committee. I believe the champion for extensions plans to give an update to plenary later. Partial-function-application syntax also continues to polarize the committee. And that's about it with regards to my update on dataflow. Hopefully this framework is somewhat useful with regards to the ecosystem schism: I'm arguing that there's an ongoing schism right now due to different pressures and an imbalance between dataflow fluency and tree-shakeability, and that developers are forced to choose between one and the other. Right now, tree-shakeability is winning, which is why major APIs like Firebase are transitioning to it and giving up on object-oriented APIs — but interoperability between the two remains poor in dataflow expressions.

JSC: Anyways, thank you for that long spiel. Hopefully, this analysis is useful. I'm going to hand it over to the queue. And this is an open-ended discussion.

JHX: Okay, thank you. I just want to give a very small explanation about extensions. There are some differences between extensions and call-this, but in the scope of dataflow the difference is not essential. So in many respects extensions are the same as call-this, because semantically call-this is a superset of extensions, and the extra part of extensions is actually trying to deal with some functional style. I think JSC already mentioned that. So I think the important part is actually the OO style versus the functional style. Those are my comments about that. Thank you.

MM: Yeah, so first of all, to frame the discussion I want to respond to the statement about JHD's position. The term "block" or "I will block" has a connotation of being actively obstructionist, and I want to make sure that we're not reading that in. I think the right framing is that consensus needs to be earned, and being clear about what consensus has not been earned is a fine clarifying statement to make and helps inform the discussion. So I will do that as well.

MM: I think the language is already way over its syntax budget. A lot of the concepts here are things that I would consider if we were starting from a tiny language, or if we were designing a language greenfield. That's not the situation; the situation is we're talking about adding syntax to a language that already has so much syntax. It's a real burden on understanding what code means, especially for novices, and we need to remember the special role that JavaScript has in the world. A lot of people learn programming first, not in school, but by looking at JavaScript and trying to learn from other people's code. There are lots of amateur people who pick up JavaScript who are not planning to be professional programmers. Their expertise is elsewhere, but they want to use the language. And even though they can stick to a subset they understand, if they're learning from reading other people's code, the more they have to understand before they can even start to decipher other people's code, the worse off they are. So my conclusion is that one outcome of this which I would be happy with is that we adopt zero of these proposals. I would be happy with that. I don't think any of these proposals clearly adds more value to the language than it subtracts. The one proposal that, from the arguments I've heard, I would be comfortable seeing go forward — or would not withhold consensus from, if it seemed to have enough momentum by itself — is just the pipe operator: the pipe operator itself and the topic, which I presume is @. Those two operators alone, with the current semantics that they have, can be understood straightforwardly as a rewrite. As for call-this: the more JSC explained it in terms of multiple paradigms, the more I thought the conclusion was a stronger case against call-this. So let me explain that, because I'm confused about your argument.
JavaScript has these two co-existing paradigms, object-oriented and functional; the pipe operator lets you get the notational benefit of infix for calling functions, but it does it without looking like a method lookup — it does it while still making it clear that you're calling a function. I think the right way to think about the difference between the paradigms, in a language where they coexist, is: who decides what code gets run as a result of executing the call at the call site? In a function call, the function is looked up according to the scope at the call site — according to the value that the function expression evaluates to, which is typically an identifier looked up in the local scope. It's very much according to the call site. That is what we call early binding. The object-oriented paradigm is fundamentally about late binding, where what code gets run is according to the object: the object is the authority, and the introduction of new objects with new implementations of the same API is supposed to be able to extend the meaning of the call site. That says that the existing `.name()` method call is the realization of the object-oriented paradigm. To put it another way: in the object-oriented paradigm, you're programming in methods that mention `this`, but an unbound method should never be reified. Having a first-class unbound method that is `this`-sensitive is outside the object paradigm and outside the functional paradigm, and should be discouraged. Part of what we're doing as a committee is also making choices that help encourage good patterns and discourage bad patterns. The object-oriented pattern that makes sense is: no reified unbound methods. The functional pattern that makes sense is: all of the arguments are the explicitly provided arguments, and those functions should not be `this`-sensitive.
And given that those are the only patterns for these two paradigms we want to encourage, the pipe operator still adds substantial value. The call-this operator purely subtracts value.
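The early-vs-late-binding distinction MM draws can be sketched in plain JavaScript (the shapes and names here are illustrative, not from the discussion):

```javascript
// Early binding: what runs is decided at the call site, by lexical scope.
function area(shape) {
  return shape.w * shape.h;
}
const earlyBound = area({ w: 2, h: 3 }); // always calls this particular `area`

// Late binding: the receiver object decides what runs.
const square = { w: 2, h: 2, area() { return this.w * this.h; } };
const circle = { r: 1, area() { return Math.PI * this.r ** 2; } };
const shapes = [square, circle];
// The same call-site expression dispatches to different code per object:
const areas = shapes.map((s) => s.area());
```

Here `earlyBound` is 6, while `areas` contains 4 and π: the method-call syntax lets each object supply its own implementation of the same API.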

KG: Yeah, I just wanted to second the things that MM said. First, that the language has so much syntax already, and it is not enough that a feature would be useful — the bar is much higher than "this would be useful for something people want to do". It has to be so useful that it's worth trying to cram into this rather full language. And second, that yes, the notion of functions that are not attached to an object and still refer to their `this` is very strange and not something I would like to encourage. Even if it is a thing which already exists, again, it is not enough that it would be useful for a thing people want to do. It has to also be a good idea for them to do it, and I don't think that functions which refer to their `this` but are not actually associated with any object are a good idea.

JRL: So I have two points that are a little bit related, both in response to MM. First, he says that neither the pipe operator nor call-this adds to the language more than it subtracts by taking syntax, and I think that's false. The ability to move away from class-based APIs in a way that is ergonomic for the call site and for the user of the API makes both of these considerably better. And we can debate whether or not call-this's `this`-based usage is learnable by new developers, but the fact that we are switching from method dispatch on a class to a function invocation makes this a complete change to the way that we currently write JavaScript. And that brings me to the second point. Most code that is written today uses classes, either explicitly or using the prototype on functions; almost all of the standard library is written as class-based APIs, and almost all user code is written as class-based APIs. We understand object-oriented methods extremely well, but unfortunately there's an extreme penalty when using prototype methods, in that it's very difficult for us to eliminate dead code from our bundles. The reason Firebase is so noteworthy here is because they're prioritizing bundle size so much that they have, for their users, broken the expected usability and ergonomics. You have a function that takes the context object as its first argument, which is almost unheard of in JavaScript. No one does this. If you have a class that has 5 or 10 methods on it, that's totally normal. But if you have five or ten free functions that take their context objects — their fridge or their kitchen or whatever — as the first parameter, that breaks with the expected ergonomics that everyone has accepted.
The reason “call-this” is a good addition to the language is because it allows us to achieve both good bundle sizes, that we need to in order to have decent web performance, and an acceptable call site useability, so that libraries are actually encouraged to try these new APIs.
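The tension JRL describes can be sketched as follows (the `Kitchen` class and the free functions are hypothetical names, not the Firebase API itself):

```javascript
// Class-based API: ergonomic call sites, but bundlers struggle to drop
// unused prototype methods, since everything hangs off the class.
class Kitchen {
  constructor(items) { this.items = items; }
  count() { return this.items.length; }
  first() { return this.items[0]; }
}
new Kitchen(["pan", "pot"]).count(); // 2 — reads left to right

// Tree-shakeable style: free functions taking the context object as the
// first argument. Unused functions can be dropped from the bundle, but
// the call site reads "backwards" compared to a method call.
function count(kitchen) { return kitchen.items.length; }
function first(kitchen) { return kitchen.items[0]; }
const k = { items: ["pan", "pot"] };
count(k); // 2 — context object as first argument
```

Call-this aims to let the second style keep a receiver-first reading order at the call site without reintroducing the class.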

WH: I agree with MM, especially with the point about the different kinds of dispatch. In procedural programming you’re just calling a function. In object-oriented programming, it’s the object that determines what a method means. “Call-this” is a violation of that. It's an anti-pattern; it is not something that should be encouraged. The other worry I have is about ecosystem schisms: we must remember that functions sometimes take more than one argument, so it’s best to treat them equally. I don't want wars about which argument is special, which the extensions proposal would encourage by privileging arguments in the first position.

JHD: Yeah, so I agree with what JRL was saying, and I disagree that the call-this operator is only subtractive. I think that fixing the word order alone is enough value for me. I also have other needs for it that the majority of the community probably doesn't respect, but that's fine. I think that we're talking a lot here in a sort of parental way about patterns and anti-patterns, and about what we want to encourage and discourage. This is the language we have; there are lots of ways people already use it; and we are of course allowed to make opinionated, editorial decisions about what things we want to encourage or discourage and what code people should write. We shouldn't necessarily just codify everything people do in the language, but I think there's specific value to be had here. Also, just to be clear, I don't want this entire suite of five proposals to land. I'm not in favor of the group of them, which is why I always get concerned when they are grouped together in these discussions. I think that a couple of them are very valuable, useful, and important, and that would be sufficient. But obviously we are all going to have different opinions about which subset of these proposals we want to land, including “none of them”. I just think there's value here, and we should not ignore it because we've arbitrarily decided we have enough syntax.

YSV: Yeah, I just want to support quite a bit of what MM was saying; my feelings are similar. One of the things that really made me worry about the pipe operator was that the scope of syntax it can be used for is very broad. In addition, it removes intermediary variables, and it is difficult to search for what it's doing. The other issue is that you have shifts between types, so the surface area for learning the language becomes quite a bit larger, just with the pipeline. As I've said before, I'm not going to block it; these are just my concerns. I had some support for what was the bind operator before, which is now becoming the call operator. My concern there is that there is a common pattern in web programming — this doesn't extend to the entirety of the ecosystem, but in a common part of it you have to bind an object's method back to the object itself. So you have `object.method.bind(object)`, and then the arguments that it would take because of what the call site is. This is confusing for learners and difficult to explain, because we have to explain this on top of everything else about how JavaScript functions and methods work. That's particularly problematic when they're doing something like iterating over an array, or any kind of built-in method using `this`. So that was my concern with bind; that was a problem that I wanted to solve. I think that call-this is doing a lot more than that. I believe it's much more common in the Node ecosystem, and I'm not the right person to judge whether or not it's the right solution to those problems. I don't know, but I am concerned, because the smaller the language is, the easier it is for a person to come to it and understand it.
So I do think we need to be sensitive about adding a lot of syntax that is difficult to Google and is also overloaded — we've got the conflict around @, we have multiple things using the tilde, and there is a lot happening in JavaScript. We've got a lot of syntax already. So those are my concerns. Again, not blocking any of this, but I would not be happy to see all of this go into the language. I think we really need to be careful, and careful about how much magic we add to the language. Sure, you can write your code in a very aesthetic way for yourself, but not everyone is going to write JavaScript in the same way, and are these tools equally applicable to all people? I think that not all of these are. I am getting convinced that pipe maybe is.
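The `bind` pattern YSV describes can be sketched as follows (the `counter` object is illustrative):

```javascript
// A method must be re-bound to its own object before it can be passed as
// a callback, because extracting the method loses `this`.
const counter = {
  total: 0,
  add(n) { this.total += n; },
};

// Passing `counter.add` bare would make `this` undefined inside `add`;
// instead learners must write the puzzling `counter.add.bind(counter)`.
[1, 2, 3].forEach(counter.add.bind(counter));
counter.total; // 6
```

This object-bound-to-itself incantation is the pattern the original bind-operator proposal aimed to smooth over.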

USA: Great, thank you. Before we return to the queue: we have a little over five minutes, so I request people to be quick.

JHX: Yeah, as the author of the extensions proposal, I have to say I actually agree with both sides. The reason I designed the extensions proposal the way I did — I know many people have some disagreement about parts of the design, like why it needs declarations, why it needs a separate namespace — is actually because of the problem MM described. On one hand I think this functionality is very important, but on the other side I also agree that spreading unbound methods around would cause many problems. So the extensions proposal has some special design to try to control that part, the unbound methods. In most cases you cannot use unbound methods with extensions, because it's a declaration and you can't take unbound method references. So yeah, that's it.

SYG: This is more of a clarifying question, it might be a pedantic quibble. I don't quite understand what interoperability means in the context of this presentation. Like, obviously you can do both together in the same program. What does interoperability or rather what does the lack of interoperability mean?

JSC: I'll be quick since we've only got a little bit of time, and then I'll try to sum up everything. By interoperability, I'm talking about whether different paradigms can be mixed in the same linear, fluent dataflow, which is not currently possible with function calls — it causes a zig-zagging reading order, et cetera. And there's also the toolchain stuff, which I saw you discussing in Matrix earlier. Right now it's a lot easier to do tree shaking and module splitting with the functional paradigm than with object-oriented programming. Having said that, if we're not considering free `this`-using functions as part of the object-oriented paradigm, as MM has said, then arguably it's impossible for the object-oriented paradigm to accommodate this ecosystem pressure and interoperate with tree shaking. So that's sort of another can of worms, but it's mostly about dataflow — whether they can fit in the same fluent, linear dataflow.
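The zig-zag reading order JSC refers to can be illustrated by mixing free functions into a method chain (the helper names here are illustrative):

```javascript
const double = (xs) => xs.map((x) => x * 2);
const sum = (xs) => xs.reduce((a, b) => a + b, 0);

// Mixing free functions with method calls forces inside-out reading:
// you start in the middle (`.filter`), then jump left to `double`,
// then further left to `sum`.
const n = sum(double([1, 2, 3].filter((x) => x > 1)));
// n === 10
```

A pipe operator would let the same dataflow read strictly left to right, which is the interoperability gap being discussed.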

SYG: Thank you. I would strongly prefer not to refer to that problem — the parity of toolchain optimization — as interoperability.

JSC: Noted.

JHX: Yeah, I will be very short. I agree with JSC that the essential problem here is, if you compare the OO style and the functional style, that OO could cover most cases of the functional style, because in that style it's trivial to write a helper method like pipe. Actually, this is what other programming languages which have extension methods do — for example, Kotlin. The piping power of Kotlin's extension methods is `let`: you can take any value, call `let` with a block, and inside the block — for example `console.log(it)` — `it` is just like the topic reference. But on the other side, the functional-style pipeline can't work well with OO; it's basically impossible. And there is the point that the current Hack-style pipe does not work well with classic FP.
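A userland sketch of the Kotlin-style `let` that JHX mentions — extending `Object.prototype` here purely to mirror how Kotlin's extension methods attach to every value, not as a recommended practice:

```javascript
// A `let`-style scope function as a method: `block` receives the
// receiver, playing the role of Kotlin's `it` / the topic reference.
Object.defineProperty(Object.prototype, "let", {
  value(block) { return block(this); },
  configurable: true,
  writable: true,
});

// Any value can now be "piped" in OO style:
const out = [1, 2, 3]
  .let((it) => it.map((x) => x + 1))
  .let((it) => it.join("-"));
// out === "2-3-4"
```

This shows JHX's point that a method-dispatch language can emulate piping with one helper, whereas a functional pipeline cannot as easily absorb method-style APIs.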

JSC: Okay, thank you. I have to cut everything short; the queue is empty anyway. In the end, the big thing is that there seems to be polarization over whether free `this`-using functions should be encouraged or not. Now, I had done a corpus analysis in the past, which I think is pretty robust, showing that free `this`-using functions are already common in spite of the clunkiness of `.call`, and they don't fit in dataflow, and pipe doesn't solve that well. Bearing that in mind, there's one representative who will not let pipe advance unless there is a call-this, and others saying that they will only let pipe advance, if anything at all, and nothing else. With that in mind, we've got a stalemate or a deadlock here that we're going to have to keep revisiting, and the end result might be that we have nothing, which might be acceptable to some people. But I'll keep trying to revisit this, encouraging holistic strategizing, and hopefully trying to overcome the pathologies of the proposal process.

JHX: [from queue] Syntax cost is important. This is why we need a much more powerful proposal that could solve many syntax problems at one time.

Conclusion/Resolution

  • no specific conclusion

Minor decorator tweaks, part 2

Presenter: Firstname Lastname (CHG)

CHG: So a few more issues were brought up for decorators that are mostly similar to the private fields discussion. The first one is: should we allow the `this` keyword in a decorator without parentheses, without the escape? Similarly to private fields, there's nothing non-obvious about this. It is already possible if you wrap it in parens, so this should be okay. I just wanted to confirm with the committee: is there any reason we would not want to allow this?

[silence]

JHD: So I think that it should be the same as a member expression — if we want @ plus a member expression to work there, then anything that looks like one should also work. So `super`, `arguments`, `new.target`, `import.meta`: if it looks like a member expression, they should all work there if any of them do.

KG: Yes, regarding "looks like a member expression" — we're not allowing computed property access, for example.

JHD: Yeah, that's fair, and I think you're right. Let me qualify. Because we're not allowing computed property access, I assume users will look at that and say, “Okay, cool, so static things are allowed and dynamic things aren't, and that's fine”. `this` and `super` and all those are static things, and so I feel like they should work the same as a non-computed member expression. Is that more precise?

KG: Well, I guess I don't know what static and dynamic mean in this context. But I agree with you that this and super and all that should work.

JHX: I know that `this` in this syntax is okay, but I really feel it is confusing and I really don't like to allow it. I think in practice no one will write code like that, so it could be disallowed; allowing it would be a bad choice.

CHG: So I agree that in practice nobody will write `this` here. I also think that is true of private fields, and of `super` and `new`. Like JHD said, the design decision here would be that member expressions that look static are allowed, and only things that look dynamic are disallowed. And if that's the mental model that people intuit, it will be confusing if they attempt to use `this` for whatever reason and find that it's a syntax error.

JHX: I think we already have many limitations in this position, so why not disallow it?

CHG: I mean, I would argue that if that were the case, then we should disallow private fields as well, because for similar reasons private fields don't generally make sense here.

KG: Private fields make a lot of sense: you could just be in a context in which a private field means something. You can be in a nested class, and referring to its private fields is totally reasonable. I agree that `this` is a little more suspicious, but private fields in particular are totally reasonable.

TAB: [from queue] Agree with JHD; accidents of grammar that don't have actual problems backing them shouldn't prevent things.

WH: I just wanted to bring up a procedural issue, which is that we have multiple proposals trying to grab the @ symbol. It would be confusing if they all got it. So I'm just wondering how we're going to resolve this.

CHG: Yeah, that is definitely an issue to discuss. I'm not sure that it is relevant exactly. If you're worried about it being— [audio-issue] I'm just saying I'm not sure how that's relevant to this particular issue, unless you're concerned about how @ is being used in one of those other proposals.

WH: I'm asking a procedural question of how we are going to resolve this. I don’t want to discuss the technical issues now, but in case it matters, my preference is that decorators should get @ and other proposals should not.

CHG: We only have 20 minutes and we have a few of these items to get through. So I'm just not sure that this is the right time to address that because that could be a larger conversation.

WH: I'm not asking to address it right now, I'm just asking how we are going to address it. Given the history, I'm going to assume that decorators get the @ symbol. The other proposals would have to pick something else.

JRL: Earlier this meeting, we agreed that the pipeline operator could also use the @ symbol. Are you saying that can no longer be the case?

WH: No, that's way too confusing. And they will conflict in various contexts.

JHX: Disallow it — we could allow it in the future if we really want to. But if we allow it right now, we cannot remove it anymore.

CHG: Like I said, I think this is extremely edge-case-y, so it doesn't sound like we have consensus to change it. I'm happy to move on to the next one, which is `super.dec`. I'm guessing people will object to this one for similar reasons.

[audio-issues]

KG: I feel strongly that super should be allowed. It's totally reasonable. Like this is weird because there's the question of, does it refer to the class while the class isn't finished being constructed yet. So like, how does that work? Is it something from outside the class? That's also wrong, but super - like the super class already exists and you can just refer to it.

SYG: I want to interject. I don't understand what the previous conclusion was. Are we disallowing un-parenthesized `this`?

(?) Yes, that was the previous conclusion.

JHD: Yes, it would have to be allowed parenthesized, since it's an expression.

JHX: Yeah. I just want to say, generally, I think we should disallow all `keyword.decorator` forms if possible.

KG: What's wrong with super?

JHX: Yeah, I mean disallow `this`, `super`, or any keyword.

KG: Why super? Super is totally reasonable.

JHX: Sorry, I don't think so. Are there any real benefits to allowing it?

KG: Yeah — I think it is entirely reasonable that you might have a decorator which is defined as a static method on the superclass, and you might want to invoke it on a subclass. That's a totally natural thing to want.

JHX: Sorry, I don't get it.

KG: Like, if I have a superclass, and the superclass has a static method named ‘bind’ which is a decorator — it expects to be invoked as a decorator — then in my subclass I might very reasonably write `@super.bind` as the decorator on a field, expecting that to invoke the method on the superclass as a decorator. That's a thing I would very naturally want to do.
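A sketch of the kind of code KG describes, in proposal syntax (not runnable in today's engines; `Base`, `Sub`, and the decorator body are illustrative):

```javascript
class Base {
  // A decorator defined as a static method on the superclass
  // (KG's hypothetical `bind` decorator).
  static bind(value, context) {
    return function (...args) { return value.apply(this, args); };
  }
}

class Sub extends Base {
  // Under discussion: invoking the superclass's static method
  // as a decorator via `super`.
  @super.bind method() {}
}
```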

JHX: Okay, I think if you could add some code examples in the Jitsi, it would help me to understand it. Thank you.

CHG: Okay, cool, so I think we can come back to this item, possibly in the next plenary. Then after those examples have been added, does that sound good?

USA: Yeah, sure.

CHG: The last of these are the `new` keyword — `new.target` — and the `import` keyword — `import.meta`. So, JHX, how do you feel about these two? Same position?

JHX: I wish we could ban all of those.

JHD: Okay, so I just jumped on the queue. I mean, `import.meta.whatever` — the whole point of that space was for hosts to put anything they want there. What if they want to put a decorator there? We're going to make it unergonomic for hosts to do that. Is that a restriction hosts would welcome?

JHX: It's possible, but they could write a local binding to solve the problem. It seems that these are edge cases.

JHD: Sure, but it seems like something that would be better for a linter to block than for the language.

JHX: Okay, I think I need some time to think about that. Generally I do not like to leave all these things to linters, but yeah, I understand this is the current practice.

SYG: I don't quite get the argument for wanting to ban all of them. It's because you don't think people will write that code despite delegates saying they will write that code?

JHX: Yeah, sorry. It seems apparent to me that in most cases people use decorators as a very special thing. So I understand that allowing these expressions sounds okay, but I feel these are very strange usages and would not match most real-world usage.

SYG: I think that's a fine opinion to have. I don't think that really rises to the level of an argument for why we should not include an otherwise unsurprising bit of compositionality, right? Like as a language, most things should probably just compose even if the compositionality might be rare, except when that composed thing is like super problematic for some reason. And then we argue about why we should disallow certain things to be composed. In this case, I haven't heard any argument on why it's harmful to keep allowing them to be composed other than that it is unlikely. And I don't think that is really an argument at all for why they should not compose.

JHX: Okay. Yeah, I think it makes sense. Yeah. Maybe meta property is okay. I'm not sure.

USA: Yeah. It seems like something that could be discussed offline.

CHG: Yeah, I think we can continue to iterate on this question and bring it back up at the next meeting; I will take it as an item to continue discussion here. There's one last thing — I know we're almost at time, but I'd like to get through it real quick. I thought I had moved this part of the spec forward in the most recent draft, but it turned out I hadn't. So I just wanted to see if this is still acceptable. This was, I believe, where we landed on the strictness of decorators, and the modification I believe is right here: essentially, all parts of the class declaration and expression are strict mode code, except for a decorator list at the top level of a class declaration or class expression — the decorator list applied to the class itself — unless it is contained in strict mode code there. There's a long thread about this if anybody wants to read it (issue #204), but this seems to be where we landed. Is anybody objecting to pulling this forward into the current spec?

MM: Yes, I object in the absence of understanding. I did not follow the thread; I was not aware of a thread. So let me make sure I'm getting this correct: without this addition, the decorator list is interpreted as strict code, and with the addition that you're asking about, the decorator list would be interpreted as sloppy mode. Is that correct?

CHG: Ah — I'm not super familiar with the strictness parts of the spec, so I would have to look at it. But I think the current spec says all parts of the class declaration, including the decorator list, are considered part of the class declaration or class expression.

MM: So I need to understand what the normative difference is of accepting or not accepting this language. I don't know what the current behavior is that this language would change, and I don't understand how that language would change it.

CHG: What I understand is that it would affect the code for decorators outside of the class body itself — specifically, the decorators applied to the class itself would not be considered in strict mode. A reason for this was a look-ahead concern: there was concern that we would have to parse everything in order to understand whether this module was going to be strict or not.

MM: Okay, thanks. Mentioning that concern makes a huge difference. I understand that sometimes these look-ahead concerns can be very painful for people who implement parsers. Were there implementers on that thread who voiced a strong desire to avoid the implementation complexity entailed by this look-ahead? And if not, are there any implementers in the room who would like to venture an opinion as to whether the look-ahead in question would be painful?

SYG: Well, it’s arbitrary, right? If it's an expression — a decorator application expression — that's currently in a sloppy context… Well, I guess: right now, is there another context in which you could apply a decorator in a sloppy context?

KG: No, since decorators are part of classes, but this would amount to a decision that there never could be one, or that if we had one that it would entail the sort of look ahead which we have previously ruled out.

SYG: Exactly. So it comes down to yes, exactly that. So if we want to preclude right now that we don't have non-class decorators, then the rule is just all decorators are strict all the time.

CHG: I think function decorators are the most agreed upon and most desired possible next place for decorators. I have not dug into them deeply, but I definitely think it would be good to not preclude the ability to add function decorators in the future.

SYG: In any case, I'm not comfortable making the decision right now, with the time remaining — meaning, whether we should preclude that.

CHG: Yeah, sounds good.

MM: Agreed, I don't want to make a decision right now.

CHG: Okay, so we will take it as an item to continue digging in here and figure out what's going on with the strictness.

Conclusion/Resolution

  • No conclusions
  • JHX to think more about import.meta etc
  • CHG to think more about strictness

Types as comments (continuation)

Presenter: Daniel Rosenwasser (DRR)

DRR: Thank you very much for making more time for this discussion. We heard some of the feedback from the first discussion in plenary. First off, if you happened to see a grammar.md or some sort of grammar file: sorry, we didn't mean to mislead you if you felt like that was the concrete syntax that we are proposing. That was more of an iteration point; those sorts of details can be discussed more at a later stage. On top of that, some other feedback that we heard at the first presentation was a desire for type system neutrality. I think that's very much in line with what we had in mind for this proposal. The line of thinking for what we're trying to open up is something like pluggable types, if you're familiar with that concept: a space where you could bring whatever type checker and apply it to this space that we're trying to carve out — existing type checkers, future ones, what have you. Another piece of feedback that we got was the question, “Hey, is this trying to describe an existing syntax? And is it enough syntax to represent all of that existing syntax?” It definitely takes inspiration from existing type system syntaxes, but we're not trying to get all of it. We do believe that there is value in just getting some of it at this point — and it should be a decent chunk of it as well, so that we're not leaving existing users behind. But we also believe that there is room for the existing type systems to grow and adapt and find ways to bridge the gap, so it's not just that we're creating a new variant that doesn't really satisfy existing users. Again, this is something that we can discuss at a later stage, and we would appreciate the chance to do that too.

DRR: But I think the biggest thing that we got feedback on was that we needed a more concrete problem statement here at plenary. So the problem statement that we've put together is that there is a strong demand for ergonomic type annotation syntax that has led to forks of JavaScript with custom syntax. This has introduced developer friction and means widely used JavaScript forks have trouble coordinating with TC39 and often must risk syntax conflicts. Now, I want to come back to this slide in a minute or two, so you can observe it, but we'll come back to it.

DRR: I want to give some background on some of the thinking here. If we go back to why type checkers have needed to do this work or create these forks: they've all needed some expressive space, something that goes beyond what the language currently provides as it stands in ECMAScript. And so each of them has added a set of extra syntax, and these variants of JavaScript syntax are just like forks. Now, these forks are not necessarily a bad thing. This is something where we discussed forks in JavaScript years ago, actually, when we came to TC39 back in 2014, and there was no explicit direction where anyone said that this was a bad thing. In fact, these forks have been a really good test bed for whether or not there's value in static types in JavaScript, without actually adding anything like that to JavaScript until this point, and they succeeded in proving that out: a significant portion of JavaScript users now use types, and in fact, within committee here, a good number of delegates come from organizations where a lot of typed JavaScript is used. So the current status quo, how things are today, is that these typed JavaScript variants can continue to grow kind of independently, right? They innovate in the type space, and within TC39 we kind of collaborate: we're providing feedback on specific proposals, and we're sometimes bringing thoughts back from the type systems, but maybe only in bits and pieces. And the current status quo has a lot of benefits, in that we have independent evolution; we can just do things that work best for the type systems. But these forks still do add friction, some of which we've already discussed the other day. For one, they're not directly executable.
For another, the configuration overhead of just the setup is pretty intimidating, which is a stark contrast to what a lot of JavaScript users are used to, right? A lot of people view it as a very approachable language, and so getting to that next step of typed JavaScript can be tough. We have anecdotal evidence of people saying, "well, I'm just going to use the embedded comment syntax instead of direct type syntax on top of JavaScript, because I don't want to deal with configuration or a build step or whatever." And then, beyond configuration, a lot of the time we're kind of doing guesswork in how we add features to the language when it comes to typed variants. We have to figure out what is going to be the best way of adding a feature that's not going to conflict with a future proposal. So what we're interested in is whether or not we can integrate the wins of these typed variants. We want to provide a syntax carve-out to support pluggable type checkers, and we believe that existing type checkers can bridge the gaps between what we're proposing and what we would be able to provide.
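DRR's point that typed forks are "not directly executable" can be sketched with a minimal example. The annotations below use TypeScript's syntax, purely to illustrate erasure semantics; a types-as-comments carve-out need not adopt this exact grammar:

```typescript
// Hedged sketch (illustrative only): today an engine cannot run this file
// directly; a build step must strip the annotations first. Under a
// types-as-comments carve-out, the engine would skip the annotations and
// execute the remaining plain JavaScript unchanged.
function add(a: number, b: number): number {
  return a + b;
}

// Runtime behavior is exactly that of the untyped function.
const sum = add(2, 3); // 5
```

The key property under discussion is that deleting every annotation leaves a program with identical runtime behavior.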

DRR: So the proposal here would be to formalize an ergonomic syntax space for comments, to integrate the needs of the type-checking forks of ECMAScript. We want to workshop that a little bit; that's something we can do more in stage 1, but we believe that the problem statement is sufficiently motivating to move into stage 1. And so we'd like to open the floor to the discussion, to understand whether or not we're meeting the committee's expectations here, if there's anything that needs to be clarified, things like that. So I'll open the floor to questions.

MM: So first of all, let me say that when you presented types as comments in plenary earlier, I had a conflict and I missed the whole discussion, so my apologies for that; this might be a little bit out of context. What I understood about the presentation was that the grammar that was presented was large. I don't know if you believe that you can satisfy the problem statement with a tiny grammatical addition, but I would certainly be very skeptical of any large grammatical addition to the language. I think stage 1 is fine. I have no problem signing on to let this go to stage 1 with this problem statement. Thanks for a clear problem statement. And that's it.

DRR: Thank you. We definitely understand that feedback, and at least having the opportunity to discuss this within stage 1 would be something that we, and I think many typed JavaScript users, would appreciate. At the very least, that would provide us a good venue to understand how to best serve users and understand the problem space. So that's the minimum of what we're hoping for. But yeah, I think we can proceed with the queue.

WH: Yes, the issue I’m having is that the problem statement seems to be finely crafted to advance just one solution, which is to integrate TypeScript syntax into ecmascript, while excluding other viable approaches to solving the problem. The problem area is specifying type annotations in JavaScript. I would not want to restrict it to type annotation syntax which has no semantics. I would not want to exclude things which are much simpler than what TypeScript supports. But the problem statement as written seems designed to exclude those options. And that is not something we should be doing in stage 1.

DRR: So maybe we can workshop through this; we can discuss some of those points specifically. The first one that I want to talk about is that you mentioned syntax without semantics. I don't want to be too pedantic: I think there's an understanding that the semantics we're hoping for with this syntax is one where it does not affect the execution of the surrounding context, right? So when you encounter that syntax, you simply do nothing. Is that the specific point that you're taking issue with?

WH: So you've already adopted a solution of types-with-no-semantics. That's what I'm hearing. So you've already decided what the solution should be.

DRR: I mean, if we broaden the problem statement so that we can discuss the other possibilities there, is that something that would be reasonable? Like, would that allow us to discuss that more in stage 1?

WH: Yeah, for Stage 1, we should not limit the discussion to a solution that was presented.

DRR: I think we're willing to discuss that. Nothing is off the table in that capacity.

WH: Well, I'm not getting much assurance that other options are on the table.

DRR: I mean, within TC39, discussing which approaches have merit seems to be exactly what we want to get out of this, right? So if we want to discuss whether or not this syntax should have any sort of executable semantics, that's fine for stage 1. I have my current point of view; I'm willing to discuss it with the committee, and hopefully we can come to an understanding of why I believe that. But if you do not, would you want to have that discussion in further detail?

WH: Yes.

DRR: I think that that's perfectly fine. So maybe we're in agreement on that.

WH: Yes, the other worry is that this is just a backhanded way of, rather than adding a type annotation syntax, being over-constrained to adding the TypeScript type annotation syntax into ECMAScript. Just echoing MM's point, there is much simpler syntax which we could use for type annotations. The ideal situation would be the type annotation syntax having the same syntax as value expressions. You could encode a lot of types with that. You could handle almost all of the common use cases with just that. You wouldn't get some of the more esoteric TypeScript extensions, but this would be a very simple, minimal JavaScript way of doing things. So I want to know if that's something that the presenters would be willing to consider as in-scope.
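WH's suggestion that value-expression syntax could encode many common type shapes can be illustrated concretely. This is a hedged sketch, not a concrete proposal: the conventions below (a constructor for a scalar type, an array literal for "array of", an object literal for a shape) are invented here purely for illustration.

```typescript
// These are ordinary runtime value expressions; under WH's idea, annotation
// grammar would simply reuse expression grammar, so shapes like these could
// serve as annotations without inventing new syntax. The encoding conventions
// here are hypothetical.
const scalar = Number;                       // a simple named type
const listOfNumbers = [Number];              // one convention for "array of Number"
const pointShape = { x: Number, y: Number }; // an object-shape description
```

Because every one of these parses as an existing expression, such a design would add little or no new grammar, which is the "simple minimal JavaScript way" WH describes.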

YSV: I'm going to interrupt here just to make sure that we keep the discussion going in a productive way. There have been a few comments in the chat noting that it is common for proposals to present a potential solution, and we are getting a little bit deep into psychoanalysis by speculating about what the intention of the author is or is not. If we can pull that back and just focus on what can be done here: I think that is more appropriate in this case, because otherwise we're applying a level of scrutiny to this specific proposal that isn't applied to others.

DRR: I really appreciate that, because I feel like I'm having a hard time conveying that I just want to be able to discuss the proposal in further detail with the committee. We have ideas here and we want to develop them in further detail, whether that's syntax or semantics.

WH: I'm asking about the problem space. In the past the committee has made it clear that the problem space is not just what the one proposal describes, unless it's something very trivial. So frankly, some of the discussion I see in the chat is getting personal and I do not appreciate it. Some people have been attacking me on chat throughout this meeting, and this is getting to be a Code of Conduct issue.

DRR: I don't know what's going on in the chat.

WH: I don't see that right now.

YSV: Yeah, I think I'll take it here. So I believe you are referring to the comment that spoke about you by name. The issue, though...

WH: And a few days ago.

YSV: Okay. I'll keep the focus on what's been happening today. It was referring to your questioning of DRR's intentions with this proposal, so let's pull that back as well; questioning somebody's intentions is the wrong way of putting it. Let's just ask what compromises they would be willing to consider.

RPR: What are we open to?

WH: Yeah, I just want to know what type annotation solutions are we open to? Questioning somebody's intentions is frankly hostile. And that's not something that people should be doing.

RPR: Alright, so we know you are not questioning our intentions; you are asking questions. I think what we've heard very clearly is that you would like the space to be open to include type annotation syntax that also has semantics. That's very clear.

WH: Yeah, and I would like this space to be open to very simple syntaxes. So I'm absolutely open to hearing how others feel about that.

RPR: I think we would always be open to the simplest solution that solves the problem.

WH: Okay. So what is the problem area: is the problem area just "annotating types", or is it "annotating types in a way that matches the TypeScript syntax"?

RPR: So let's not say TypeScript specifically. We can say that a significant percentage of the JavaScript population think of themselves as JavaScript programmers, even though they are using these forks of JavaScript in order to get the benefit of static types. And we know all the problems of language forks; I think everyone here, in this committee in particular, can appreciate that. We don't want the majority of people departing JavaScript and primarily using one of these. It would be great if we could reconcile the ecosystem. I would reference SYG's wonderful comments on the universality of JS from Tuesday's types-as-comments session as evidence there. So that's the starting problem statement that we've got, and if, to address your concern, you're wanting to open it up to more, to also assess whether we should have runtime semantics, that seems fine.

WH: Okay

YSV: Does that address your concern?

WH: Okay. So are we open to simpler syntaxes? I hear that you've said yes, we're open to runtime semantics. Am I correct that we're also open to exploring very simple syntaxes that do not try to match what other existing forks do?

DRR: I think that we're going to take those things into account. I'm not against trying to understand what the use cases are and whether or not there are benefits and merits to one approach versus another. I'm happy to have that discussion; I just think that it's something we can entirely do at stage 1. If we were to have that discussion, we would never say, "sorry, this is just out of scope, go away". I think that would be unreasonable, and it's something where we need to come together. When you talk about a more minimal syntax, I think that is also something that we need to figure out in stage 1, right? How much of it is minimal enough to still be useful? So that's something we'd have to talk about in more detail going forward.

WH: The reason I'm asking is that I can't tell from the problem statement whether a minimal syntax that's only 70% instead of 80% compatible with TypeScript would be within scope or not. Could I have your assurance that we will consider any kind of syntax, and not just ones which match the forks? That would ameliorate some of my concerns.

DRR: The answer I have to that is we don't entirely know either. Part of the discussion there would be "what is the most interesting and useful stuff that we want to incorporate here?" What can be dropped, what can be inspired by, what is good here? And so we don't entirely know. We've done sort of a first-pass attempt but not gotten enough feedback from folks here to understand that.

WH: Yeah. I'm not asking what the solution is. I'm asking what the stage one scope is.

YSV: We have several folks on the queue, and we've been on a similar topic for a while. Can we have a quick conclusion to this, and then move through the six items that are currently on the queue? So, Daniel, from your side, that would be just confirming whether you'll address Waldemar's concern or not?

DRR: Everything WH mentioned is in scope.

CDA: There's been some discussion about this proposal internally at IBM. We do share some of the concerns that WH is bringing up. There are varying opinions on the current content of the proposal, but we do all agree that it is worth exploring the problem area further, and that is specifically around expressing type information. We do think that's worthwhile, exploring that space. Again, some reservations about the proposal and the coupling to TypeScript. But if advancement to stage one is more about exploring the problem area, rather than signing off on the proposal as currently written, then we support it.

MF: Okay, it seems like we have a more overarching goal of narrowing the gap between JavaScript and TypeScript or other syntax forks. And it seems unlikely that the committee will end up opening some huge syntax space, as we saw in the original proposal earlier, where we just allow basically everything that is allowed by TS or some other fork. It's more likely that we see a narrow space, though an infinite one, added for these kinds of forks to make use of. Now my question to the TS team is: if such a narrow space were added, would TypeScript be willing to actually move toward using that syntax space that gets reserved, or is the only option really for JavaScript to become TypeScript?

DRR: I think that we are open to continuing to bridge the gap here, doing whatever we can, as long as it seems reasonable and good for users. We already try our best to understand types in JS; if we need to figure out some adaptation, I've mentioned there are ways to bridge the gap there, and so we're willing. Yeah, that's the short story.

MF: Okay, that's good enough for me.

JHX: Thank you. My understanding is that the big goal should be narrowing the gap between JS and TS. My observation is that the ecosystem is already JS and TS together; they're very close. Most people who write JavaScript, even if they just write JavaScript, still use many things from TypeScript, like the types, the d.ts files, and many other things. So there are two things I think are important. The first is for the toolchain developers: today there are many issues with the toolchain, because there are subtle differences here and there between JavaScript and the typed variants which they need to think about. So I think this is a big problem. The other problem is for average programmers. Recently I was asked a question about optional chaining, and I found that many programmers just think that optional chaining is a feature of TypeScript, because TypeScript has the non-null assertion, which looks very similar. That causes a lot of confusion in the community: which part is TypeScript and which part is JavaScript? So I would be in support of the big goal that we should try to close these gaps. Even if we do not add types as comments, I would like the committee to reserve some syntax space for such usage, which can help to solve the problems I mentioned.
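JHX's optional-chaining example can be made concrete. Optional chaining (`?.`) is standard ECMAScript, while the non-null assertion (`!.`) is TypeScript-only syntax that is erased at compile time; the two look similar enough to be conflated:

```typescript
// Optional chaining is standard ECMAScript: it short-circuits to undefined
// when the left-hand side is null or undefined, instead of throwing.
const obj: { a?: { b: number } } = {};
const viaOptional = obj.a?.b; // undefined, no throw: plain JavaScript behavior

// The non-null assertion below is TypeScript-only syntax, erased at compile
// time. It suppresses the compile-time error but would throw at runtime here,
// since obj.a is undefined; this is exactly the kind of JS/TS distinction
// JHX says confuses programmers. (Left commented out so the sketch runs.)
// const viaAssertion = obj.a!.b;
```

The visual similarity of `obj.a?.b` and `obj.a!.b`, despite one being a language feature and the other a fork feature, illustrates the gap JHX describes.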

RPR: Thank you for the fall-back suggestion.

Francisco Tolmasky: So I guess I'm coming from perhaps the opposite end of the big discussion that just took place, in that it feels like this proposal relies a lot on the spirit of the proposal. By that I mean: as far as I can tell (and I'm happy if this doesn't expand to any sort of semantics), the proposal wouldn't actually enforce anything to do with types at all; this could be used to annotate anything. And, you know, we don't call them financial operators, we call them arithmetic operators, right? Even if the historical context of why plus and minus were added to the first programming languages might have been that people wanted to do stuff with dollars and cents, it's certainly the case that when you read about these syntax elements, they're not described in terms of money handling. Similarly, normal comments aren't described in terms of documentation generation tools, right? They're syntax elements that can be used for just about anything; people have used them for other stuff, like, oh, you move stuff around with the plus operator too, even if the original intent was something else. And so I think that at least some of the tension here is that this proposal exists in a kind of liminal space: on the one hand, we talk about it as a thing that's super neutral, right? So neutral that we'll even admit that, yeah, I guess you don't even have to use it for types. On the other hand, some of the expectations of what we'll get out of it seem very tight, right? Like the idea that every single editor will be able to expect at least these comments to be around.
There's this implicit understanding that they're going to syntax-highlight it correctly, but they can't, because the only correct syntax highlighting of a neutral comment would be straight-up normal comment color, right? Anything further would mean that the editor plugin is assuming that this is probably a TypeScript type at this position, even though it's technically this neutral thing. So I feel that, both to prepare ourselves for the fact that people will use it for wacky reasons that might be completely outside our expectations, and also to limit the kind of fear of "is this TypeScript or isn't it?", we could instead have the problem statement be something closer to: we understand that, historically, part of the JavaScript community has been adding language on top of the language, and we'd like a place for any of that to live, so that hopefully we can avoid this in the future for any sort of thing, whether it's some new kind of decorator, or types, or something you could even use for documentation. Then I think we would avoid these questions of, well, should it have type semantics? And if not, is it confusing that you can add types but they don't, you know, throw an error if they're violated? I think it does a better job of putting that squarely as a kind of implementation detail of what you're doing on top of it, in the same way that we don't really discuss the issues with JSDoc inside of comments when we discuss the syntactic elements of comments.

RPR: Yeah, I guess that is somewhere where we might land. Certainly we've had some of those discussions over the last couple of days. I think it came up on Tuesday, whether this problem does reduce to “just comments”. It sounds like based on some of the feedback we've had today we need to consider also going a bit beyond that, but definitely I think in stage 1, I would hope to resolve that and it may land just as you say.

WMS: [from queue] +1

EAO: One part of the stated problem statement here that I think is new, compared to what was presented the day before, is the last part, which states that this has introduced developer friction and means widely used forks have trouble coordinating with TC39 and must risk syntax conflicts. As a relative newcomer to TC39, I find this surprising, and I thought I'd ask: what other avenues of coordination are you currently pursuing, and, presuming that you are, are they failing somehow, such that you're now looking to open this new aspect of the work? And furthermore, if we go forward with this, are you going to expand on this version of the problem statement in the GitHub repo, which I don't think really talks about this at all?

DRR: I don't want to spend too much time on this sort of meta question today. I would really like to see if we can bring this to stage one and discuss, and then have a better discussion in the future if that ends up being necessary. I do acknowledge that may be a topic that is discussed in the future, though. So I appreciate you bringing it up.

EAO: So would you be updating the text of the GitHub repo to reflect this as a specific goal? It's currently not there at all; right now, the goals that the GitHub repo presents are rather different.

DRR: I don't think that we're aiming to work on processes.

RPR: Definitely. In light of bringing this to plenary, and what we've heard about the framing of the problem and the way that people here would like to hear our goals expressed, I think we've learned, and we will certainly go back and update the repo. And EAO, on this particular issue of the coordination troubles and what processes can help, I'm definitely interested in speaking with you offline about that, and if it's worth getting that into the proposal, then by all means.

JHX: [from queue] Even without this proposal, most proposals are already affected by TS syntax (such as first-class protocol)

DRR: Okay, it looks like we don't have any items on the queue. So, with that, can we ask for stage one, please? Are there any objections to stage one? With the problem statement shown and with the additions made during this discussion?

MM: I support stage 1, I will be very skeptical as this goes forward. But yes, I think this is a good stage one investigation.

YSV: Do we have any explicit statements of support for this proposal to go to Stage 1?

WH: I support stage one with the clarified/expanded scope.

JHD: I also support stage 1 with the clarified problem statement, but I would encourage the champions to be very, very clear about what this stage 1 acceptance does not imply. Because a lot of people are going to take inferences from it.

Can somebody just state clearly what the expanded scope or problem statement is? Is it the one that was flashed up in the slides, or has there been more added?

RPR: The one on the slides and we will also consider runtime effects and simpler syntaxes.

KG: And we'll consider changes that also require changes to TypeScript. That is, it's not just that JS is necessarily going to try to move in the direction of the forks; rather, we might try to find some common ground that both sides can move towards and meet there.

WH: Yeah, the clarification that was important to me was that much simpler syntaxes are also in scope and any compatibility with any fork is not a prerequisite.

YSV: Is that clear for everyone?

DRR: I think so.

LCA: +1

MWS: +1

JHX: +1

BN: Apollo supports stage 1.

CDA: IBM supports stage 1.

YSV: And the queue is empty, last call for any objections. [silence] Congratulations. Folks, you have stage 1.

Conclusion/Resolution

  • Stage 1