From 061af9a0fed200d91ac3eb5efa987d478ce76742 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aki=20=F0=9F=8C=B9?=
Date: Fri, 6 Feb 2026 23:26:25 +0100
Subject: [PATCH] 2026 January transcript

---
 meetings/2026-01/january-20.md | 1201 +++++++++++++++++++++++++++++++
 meetings/2026-01/january-21.md | 1237 ++++++++++++++++++++++++++++++++
 2 files changed, 2438 insertions(+)
 create mode 100644 meetings/2026-01/january-20.md
 create mode 100644 meetings/2026-01/january-21.md

diff --git a/meetings/2026-01/january-20.md b/meetings/2026-01/january-20.md
new file mode 100644
index 0000000..8ec7a0a
--- /dev/null
+++ b/meetings/2026-01/january-20.md
@@ -0,0 +1,1201 @@

# 112th TC39 Meeting

Day One—20 January 2026

**Attendees:**

| Name | Abbreviation | Organization |
|-------------------|--------------|--------------------|
| Chris de Almeida | CDA | IBM |
| Waldemar Horwat | WH | Invited Expert |
| Duncan MacGregor | DMM | ServiceNow Inc |
| Dmitry Makhnev | DJM | JetBrains |
| Ruben Bridgewater | RBR | Datadog |
| Keith Miller | KM | Apple |
| Ujjwal Sharma | USA | Igalia |
| Ben Allen | BAN | Igalia |
| Nicolò Ribaudo | NRO | Igalia |
| Caio Lima | CLA | Igalia |
| Josh Goldberg | JKG | Invited Expert |
| Shane F Carr | SFC | Google |
| Samina Husain | SHN | Ecma International |
| Steve Hicks | SHS | Google |
| Olivier Flückiger | OFR | Google |
| Linus Groh | LGH | Bloomberg |
| Mikhail Barash | MBH | Univ. of Bergen |
| Philip Chimento | PFC | Igalia |
| Ioanna Dimitriou | IOA | Igalia |
| Chip Morningstar | CM | Consensys |
| Daniel Minor | DLM | Mozilla |
| Aki Braun | AKI | Ecma International |
| Lea Verou | LVU | OpenJS |
| Richard Gibson | RGN | Agoric |
| Jonas Haukenes | JHS | Univ. of Bergen |
| Istvan Sebestyen | IS | Ecma |
| Guy Bedford | GB | Cloudflare |
| Jack Works | JWK | Sujitech |
| Chengzhong Wu | CZW | Bloomberg |
| Jordan Harband | JHD | Socket |
| Kevin Gibbons | KG | F5 |
| Michael Ficarra | MF | F5 |
| Mark S. Miller | MM | Agoric |
| Rob Palmer | RPR | Bloomberg |

## Opening & Welcome

Presenter: Rob Palmer (RPR)

RPR: All right, welcome, everyone. There is—it is 10 a.m. in Port-au-Prince, if I’m saying that correctly. We are meeting remotely today in that time zone. I will present these slides. Sorry, I’m just getting the presenter view ready. And there we go. Can you see the slides?

RPR: Yeah. We are all good. All right. So as has been the case for the last year, we have the full chair group here, so that’s me (RPR), USA, and CDA, who are also online, as well as our facilitators, Justin and Daniel (DLM), and let’s continue. Hopefully everyone that’s here has come via the regular entry form and has not just been handed a URL to Google Meet. On that note, please do not share that URL outside, and that includes the delegates chat, which is very public to read. The delegates chat is also logged as well, so please no private URLs in there. And, sorry, here we go. And a reminder to everyone that we have a code of conduct. That is available on the [TC39.es](http://TC39.es) website. Please read it, and please do your best to abide by or to live by the spirit of the document, not just the very specific rules in there. As I think CDA has summed it up, we can describe it as being excellent to each other—if you’ve ever seen the Bill and Ted film from 1989, I highly recommend it. Our schedule for remote meetings is that we have a two-hour block, separated by a one-hour break, and followed by another two-hour block. As it stands, our schedule is relatively light. It is only a three-day meeting this week, and at the moment, it’s looking like we have 1.5 days’ worth of content, though that may expand. So, yeah, up to two days. We have already declared we shall not be going into a third day, so please be considerate if you wish to schedule overflow topics. All right, on to our regular comms tools.
Hopefully, in your onboarding guide from the TC39 chairs, you will have seen that we use TCQ. The link to this is in the Reflector post, and maybe the meeting invite. This is where you’ll see the agenda of upcoming topics. And if you switch from the agenda tab to the tab that says queue, you’ll see what we are currently discussing, which is the opening and welcome by me, and on here, you’ll see some buttons, if you wish to interact. This is how we control the conversation and make sure that we speak in an orderly queue, one at a time. So when you are speaking, such as on a new topic, you’ll see this button, “I’m done speaking”. Maybe that’s actually been removed now that we are using the new version of TCQ, TCQ Reloaded, but if you see it, please don’t press it—the Chairs normally advance the queue. This is jumping ahead a bit. This is not—I’m not showing the buttons, but anyway, those buttons at the bottom allow you to enter the queue. Normally you’d prefer the buttons on the left: that’s the new topic button, the normal way of putting something on the queue. If you want to talk about the current topic, then the lighter blue one. If you want to intervene with higher priority, as we go right, we get into clarifying questions, which can come at almost any time, and then points of order are for the most intrusive, most interruptive interventions, such as emergencies—our comms system is failing, or perhaps let’s say that we’ve fallen behind with the transcription or something has gone wrong with that. That would be appropriate to stop the meeting and allow us to get that back in place. So the red button is there if you need to use it. We also have our async written comms in Matrix, the IRC-like messaging service—or I guess I should probably say Slack-like messaging service. All open source. And the channels that you really want to be on are TC39 Delegates for work and Temporal Dead Zone for the non-work, such as jokes and anything off topic, really.
All right. Moving on, we have an IPR policy. Everyone here, in general, should have gone through or been subjected to this policy: either you are part of an Ecma member, so a company or institution that has signed up as a member of Ecma, or perhaps you are here as an invited expert, in which case you will have signed this form as part of the process to be onboarded. Anyone else is deemed to be an observer. If you’re here as an observer, normally that gets noted on the main invite, but we expect you not to speak or contribute in the meeting itself. Likewise, we collect notes. In fact, very detailed notes. And so just be aware that these will be going public. I shall read out the disclaimer: a detailed transcript of this meeting is being prepared and will eventually be posted on GitHub. You may edit it at any time for accuracy, including deleting comments that you do not wish to appear. You may edit it in the first two weeks after the meeting by making notes in the repository or contacting the TC39 chairs. And we would love you to help with the notes. This is the fun exercise of fixing up the notes, because the majority of the content is there, but we need help with attribution and small corrections, which can be very humorous. If you wish to have fun with that, we shall take a show of hands now. Who would like to be our first volunteer for helping with the notes? Okay, BAN, straight in there. Thank you, Ben. Could we get one more person to help with the notes? We would love one more person. We’re inviting anyone who could help us—you know, if you can only do an hour, that’s fine. We can tag in, tag out. Anyone to help Ben?

BAN: Just to say, I will need to tag out for the era and month code presentation, so we’ll need to get someone in there then.

RPR: Understandable. I wonder who could help us out? Let’s see, is there anyone here who has never taken notes before? That would be appropriate. Save the day right now.
Okay, so I think we’re getting—

SFC: I can help for the first hour.

RPR: Okay, thank you so much, Shane. All right. A reminder that the next meeting, the 113th, is coming up in a couple of months, so that is in March. It’s hosted by Google. Thanks to Justin for arranging this. That’s in the Chelsea Market office in New York. And so hopefully see you there. We are also trying to arrange a TG5 workshop on either the Monday or the Friday. That has not been specified yet. And please sign up on the sign-up form for that. It’s on the Reflector. We have done the ask for notetakers. We have BAN and SFC. I’d like to ask for approval of the notes of the previous meeting, so that’s the November meeting. Are there any objections to approving the notes from that meeting? Nothing on the queue. That is approved. And for the agenda of this upcoming—the meeting we’re in now—are there any objections to the agenda? No comments on the agenda, so we consider that adopted. And so next up is, is it SHN? Are you presenting the secretary’s report?

SHN: Yes, Rob. Could I please share my screen?

RPR: I will tell you when we see your screen.

SHN: Okay. Just give me a moment.

RPR: Something’s coming up. Yeah. We can see your entire desktop.

SHN: Can you see the slide show?

RPR: Yes. It would be better if you were—yeah. Now it’s full screen. Looks great.

## Secretary's Report

Presenter: Samina Husain (SHN)

* [slides](https://github.com/tc39/agendas/blob/main/2026/tc39-2026-003.pdf)

SHN: Thank you so much. Okay, happy new year to everybody, and I hope you’ve had a good start to the year. It’s already well into January.

SHN: Thank you. All right. Just a few things I’ll go over today. We approved a number of standards at the last GA. I just want to run through that. We also have some ideas for a new work item I wanted to bring to everybody’s attention.
It’s more informative, to give you an idea of what’s coming if you or your organization would like to participate. The new Ecma management. I’ve listed the TC39 chairs and editors—you can tell me if my information was correct—and of course the approval at the next June GA. First off, you see right there on the bottom of the page, if you’re attending FOSDEM in Brussels next week, we will be having an event to toast TC54 and the second edition of CycloneDX. You see the details there. Please register if you can. If you are already there, that would be great. It will be after the AboutCode workshop; they are working on PURL. At the December GA, which just passed a month ago, we approved 13 standards and some technical reports. It’s been a busy time for a number of TCs. I just want to highlight those for your awareness. Quite a few on the AI side, which is very good for Ecma. These are the first ones we’ve done, and we will continue with more. You can access all these standards as per usual on our website. I also want to highlight that at the last GA, we approved the new HLSL TC. We talked about it for many months, perhaps almost a year, and it finally happened. We’re just setting up the final participation and delegates. Some of you may have been reached out to by Chris from Microsoft to make you aware of where we’re starting and the meeting schedule. We hope this will start by February, and Aki will be supporting the new technical committee together with myself as they form. It is also very good for Ecma that we have a new project.

SHN: As you are aware, from the last meeting in November, an error was found, and it has been corrected. I want to highlight that it has been corrected in both editions, 2024 and 2025, of both ECMA-262 and ECMA-402, with the correct alternative copyright notice. Ecma management for 2026: also at the December GA we voted for new management.
I’ve listed our new management: our president is Tess from Apple, Jochen will continue as vice-president, and—congratulations—Google will be our treasurer, and then you see the executive committee. What is new on the executive committee is our two ordinary members, Ayla from Huawei and Andrew from Bloomberg. The non-ordinary members remain as the year before. It is an active and, I think, a very good group of members participating.

For the TC chairs and editors: typically at the start of the year, TC39 confirms its chairs and editors. I have listed who I had from the past. If I made an error, I’d appreciate a correction. And it would be great if TC39 as a committee approves and confirms the chairs and editors—as usual, a one-year process.

KG: I think we were going to talk more about that later, Rob. Did you—am I correct about that?

RPR: Yes, I have that topic on the queue, yeah.

SHN: The 2026 approval will be coming up in June, and the GA is scheduled for June 29 and June 30. We did have to make a change to the ExeCom meeting date. It’s moved up by three weeks, to the 31st of March and 1st of April, so please take note of that. I had made an error in the slide deck that is loaded on GitHub. It’s probably by the end of this month that you need to freeze your two standards, assuming that you are moving forward with the 17th and 13th editions for approval at the June 29th GA. So just make note of those dates.

SHN: And also to point out, on June 29th we will do something to celebrate Ecma—this is 65 years the organization has been active. We were not able to do anything for the 60th anniversary due to COVID, so there is something in planning for June, probably in Geneva, and we will bring it to everyone's attention once we have a bit of information. The slides annex has the usual information regarding the next meeting dates, relevant documents, the code of conduct already mentioned, and rules on invited experts.
If I just go down to the meeting dates: as I noted again, the ExeCom date changed, which is important for the approvals. The GA dates remain as they are. The dates of these meetings are set, as everybody knows, and there’s a list of all the documents that have been worked on since the last GA, so if you would like to read anything, please reach out to the chairs and you’ll be able to access all of the information. And I think that brings me almost to the end of my discussion, and I will stop sharing. There’s a slide reference, which has been added to the agenda tool. It’s called ArkTS. ArkTS is a topic brought to my attention on my last trip to China in November. And they’re very interested in this new work item. It is of course important that any new work item that comes into Ecma has more than one participant. I wanted to talk about it with TC39, but the information I requested from Huawei did not arrive, and I have simply put their slide set, about five or six slides, as a reference for every member and every person attending TC39 to review. If the topic is of interest, I would appreciate your feedback. If there are any concerns on the topic, I’d also appreciate that feedback. Just reach out to myself and AKI via email, and perhaps at the March plenary, we may have clearer information on a scope and program of work, which may be something to talk about from an Ecma perspective.

SHN: Okay, thank you. That is the end of my presentation. I’m open for any questions.

RPR: Thank you, Samina. I’m on the queue to—in response to your slide on the chairs and the editors. You’re absolutely correct that we approve these each year and we elect new people as and when. We had intended to do that before this meeting, but—as the chairs, I did not publish that request on the Reflector, so that will be coming out very soon. And we intend to do that election at the next meeting. That is the March meeting in New York.
On that topic, it looks like the existing chairs plan to continue, so there is not a strict need for volunteers there, but people are always welcome to put themselves forward. On the editor side, there is an active need for volunteers. We are looking for 1.5 more editors. So one person as a—like, a full editor role—not full-time, but a full editor role—whereas one person could be reduced hours. And that will all be made clear in a Reflector posting.

SHN: Okay, great, thank you, Rob. I will then—we can update that slide and hear about it in March. Thank you.

RPR: Thank you. All right, any other questions or comments for Samina?

SHN: Okay, thank you very much, Rob. I just want to say, Aki, if you have any comments, please take the time right now while we still have our agenda item. Thank you.

AKI: I would really appreciate it if many of you could take a look at that ArkTS slide deck and just get back to me in, you know, the next two months before the next plenary. I would love to hear feedback on whether or not this is something Ecma should be involved with—and if so, in what way. Is this something we should try to make a standard, or is this a technical report we should be trying to put together? Something like that. Please do take a look, everyone, and let me know your thoughts.

RPR: And where can people find the link to that slide deck?

AKI: On the agenda.

RPR: On the agenda. I’m looking at it now. Yes, it says reference material, ArkTS. Yeah. Okay, I just put that in the delegates Matrix channel.

AKI: Thank you. Also, if you’re going to FOSDEM, we’d like to see you.
### Speaker's Summary of Key Points

The Ecma General Assembly in December 2025 approved several updated standards, including ECMA‑74, ECMA‑119, ECMA‑424, and ECMA‑425, and a suite of new specifications: Package‑URL (PURL), Common Lifecycle Enumeration (CLE), Minimum Common Web API, the multiple Natural Language Interaction Protocol (NLIP) standards, and one technical report.

A new Technical Committee was approved on High‑Level Shading Language (HLSL), chartered under the Royalty‑Free Patent Policy and supported by contributors such as Microsoft, Meta, and Google.

A corrigendum was issued for ECMA‑262 and ECMA‑402 to update copyright notices.

The Ecma GA appointed its 2026 leadership, with Theresa O’Connor (Apple) as President, Jochen Friedrich (IBM) as both Vice‑President and ExeCom Chair, and Chris Wilson (Google) as Treasurer.

TC39’s 2026 chair group and editors are to be confirmed, and the approval timeline for the ECMAScript 2026 editions (ECMA‑262 and ECMA‑402) was outlined, targeting GA approval in June 2026 following ExeCom review and mandatory opt‑out and publication periods.

Additional annex material highlights upcoming GA and ExeCom dates, recent TC39 and GA document releases, and reminders regarding Ecma’s Code of Conduct and the limited, exceptional use of Invited Experts.

## ECMA-262 Status Updates

Presenter: Kevin Gibbons (KG)

* [slides](https://docs.google.com/presentation/d/13yOJtg2RwMN5Ki2GfL6dfIjIqNL1nNn3lRW_yHr9LO0/edit)

RPR: Thank you so much, Samina and Aki. Next up, we have the ECMA-262 status update. And that is with Kevin Gibbons.

KG: Yes. Give me one moment. Okay. Here we go. Oh, hello. And it is your usual editors’ update. First of 2026. And we’ve landed actually more normative changes than I was expecting in the window over the holidays. We landed `Iterator.concat`. We landed a bug fix, which was not something we discussed in plenary.
This was just a case where we were miscalculating an index: in the context of typed arrays, you have to scale the index by the element size, and we were accidentally doing that twice—multiplying by the element size twice in one place—which is obviously wrong, so we just landed that as a bug fix. And then we had a normative PR that was approved a couple of meetings ago and has now landed in multiple engines, which was about not calling Symbol methods for non-RegExp values: if you call split on a string literal with a string separator, this no longer looks up `Symbol.split` or whatever. Just to make it more robust. And then a minor tweak to the module machinery so that rejections happen in the same order that successful module evaluations complete; these are observable with a variety of means. And finally a normative PR which is adding `Array.fromAsync`. This was blocked on editorial work—that is to say, built-in async functions. Which I guess I’ll talk about in the editorial changes. So as of this pull request, we are now able to specify built-in async functions, which are just async functions that are able to use the Await spec macro in their steps. This is done with the same machinery that userland async functions use. Of course, most engines do not choose to implement built-in async functions in terms of await, although they can and some do. But the specification just gets radically more complicated if you’re not using the Await macro.
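For context, the core behavior of `Array.fromAsync` corresponds roughly to the following userland sketch—a simplified illustration of why the Await macro is convenient, not the spec algorithm; `arrayFromAsync` is a hypothetical helper name:

```javascript
// Simplified userland sketch of Array.fromAsync's core behavior.
// `arrayFromAsync` is a hypothetical helper for illustration only.
async function arrayFromAsync(items) {
  const result = [];
  // `for await` handles both async iterables and sync iterables
  // whose elements may be promises.
  for await (const item of items) {
    result.push(item);
  }
  return result;
}

async function* generate() {
  yield 1;
  yield 2;
  yield 3;
}

// arrayFromAsync(generate()) resolves to [1, 2, 3]
```

Expressing the equivalent logic without `await`—as explicit promise-reaction steps—is exactly the complexity the spec now avoids.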
It’s possible to do the translation to steps that are purely synchronous, accounting for \[INAUDIBLE\] explicitly, but we felt this wasn’t of service to readers, because the way that machinery is going to be translated into engines is going to differ from the spec steps anyway, so we might as well choose to write it in the clearest way we can, even though that’s not necessarily going to translate directly into implementations—nothing we do is going to translate directly into implementations. Our first built-in async function is `Array.fromAsync`. You’re welcome to look at the implementation in that PR and think in your head about what it would look like without the use of the Await macro, and I expect you will understand immediately why it was worth supporting built-in async functions.

KG: Okay, and then finally, an editorial change to use Abstract Closures for the Promise machinery. Instead of having a separate algorithm for, for example, the `Promise.all` resolve element functions, we now just do that as basically an inline closure in the `Promise.all` algorithm, the PerformPromiseAll algorithm. This is mostly for clarity; it simplifies a couple of things because we are able to close over, in spec steps, some of the values that we need, instead of needing internal slots on the function to keep track of that state. Yeah. One last meta point is that we added a GitHub action that uses ESMeta, a tool that does type checking on the ecmarkup source text. This tool is capable of recognizing a wide variety of patterns in the way we write the prose in the spec, and it is now capable of recognizing a sufficient variety that we felt it was worth warning when you use phrasing which is not recognized by ESMeta—which is, sort of definitionally, phrasing which is novel to the spec. There’s nothing wrong with doing this.
So you shouldn’t treat this warning as something that you need to address, but if the thing that you are doing is relatively routine—just assigning the result of a call to an alias, or something like checking the type of a value, that sort of thing—it’s probably something that is done elsewhere in the spec, and, therefore, probably something that ESMeta could recognize if it was written in a different form. And the general preference is to try to do similar things with similar phrasing. So if the thing that you are doing is something that you expect is done elsewhere and you get this warning, then this is just a recommendation that you go try to figure out what the typical phrasing for such a step is and use that instead. But, again, if you’re doing something like this example on the screen, where it is not going to be precisely what’s done elsewhere—I don’t know if the one on the screen is a good example—anyway, if you’re doing something that is genuinely novel, writing a new data structure or something, you should feel free to ignore this warning. It’s merely informative. Similar list of planned upcoming work. Although I did want to call out—I guess I didn’t highlight it, but we’ve had a long-term item to make internal methods within the spec more linkable, and NRO just submitted a wonderful PR to ecmarkup to mark up internal methods and, I guess, abstract methods—so this is stuff like the HasBinding method on Environment Records, that sort of thing—to make those more linkable. This will require using a different form of the markup for such a method, so if you are maintaining a proposal that uses internal methods—which I think is just the module machinery proposals—then you may need to make a change to the spec text when next you do a major version bump of ecmarkup. And that’s all that I had on my slides, and I do have a personal late-breaking update, which is that I will be leaving my job next month.
So I will no longer be able to serve as editor, which is why we now need 1.5 additional editors—or at least would hopefully like to get more people participating in the editor group, because I will no longer be able to participate as editor. I expect you’ll still see me around on GitHub, and it’s possible I will see about whether someone is willing to sponsor me as an invited expert so I can finish up some of the proposals I’ve been working on, but I will no longer be working for F5. You will be ably served by Michael and Rob. That’s all I got. Thanks very much.

RPR: Thank you, Kevin. There’s nothing on the queue. I will say thank you for sharing the news about you stepping down as an editor. And I think everybody here has, you know, significantly appreciated your work over many years in that position. So hopefully you find other ways of continuing to participate in the committee.

### Speaker's Summary of Key Points

* A small number of normative and editorial PRs have been landed. KG will be leaving his job and thus the committee in February.

## ECMA-402 Status Updates

Presenter: Ben Allen (BAN)

RPR: Next up we have the ECMA-402 status update from Ben Allen. Ben?

BAN: Am I audible? Yeah. So this is a very, very, very short status update, because both RGN and I have been busy with other projects since the last meeting. So we have no editorial changes. We do have a small normative change on the agenda that SFC will be presenting later on today. Unicode 17 added the Tolong Siki script, and the normative change that SFC will be discussing lets us add the numerals from that script to 402. And that is it.

RPR: Okay. A very brief, short and sweet update. Any questions for BAN?

Moving on then. Thank you, Ben. We have ECMA-404 status updates. Is CM there?

RPR: Okay. Does anyone else have an update for ECMA-404? Okay, we can only imagine what CM might have said, but until next time.

TG3, security. CDA?
## TG3: Security

Presenter: Chris de Almeida (CDA)

CDA: You know what I always say. So actually we haven’t had that many meetings since the last plenary. It was the busy holiday period, and then just some sparse agendas and lack of quorum on some occasions. But you know what we like to do, and that’s discuss security implications of new and ongoing proposals. So if this is something you are interested in, please join us weekly on Wednesdays at noon Central Time. Thank you.

RPR: Thank you for the warm invite. Thank you, CDA.

## TG4: Source Maps

Presenter: Nicolò Ribaudo (NRO)

RPR: On to source maps, TG4 with NRO.

NRO: Also, not many updates. We’re working on the range mappings proposal. We have one spec PR that’s probably going to be merged by the next meeting, and it’s the one solving the security issue that was flagged when the first iteration of the spec was first published. If anybody is interested in that, please take a look. It’s pull request 211. And that’s it.

RPR: All right. Any questions for NRO? No. I think we’re good. Thank you, NRO.

## TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

* [slides](https://docs.google.com/presentation/d/16D4iZyLdbGxolADupUnLPNGblENiIidWztpu29ccoDI/edit?usp=sharing)

RPR: And onwards to Mikhail, TG5, experiments in programming language standardization.

MBH: Can you hear me?

RPR: Yeah, we can hear you.

MBH: Yeah. Perfect. So I just had to reconnect because my browser didn’t allow me to share my screen. So, my screen is visible now, right?

RPR: Yeah, it says click to exit full screen, but I think we’re seeing a blurred pane in a very crisp browser.

MBH: Do you see the slides now?

MBH: Oh, I think there is some kind of delay. Yeah, short update from TG5. We continue with the monthly meetings. This month we will be talking about mechanizing a fragment of the Temporal proposal.
And the—

RPR: Sorry, Mikhail, I don’t know if you moved windows back, but the slides have now stopped displaying. I wonder if it’s a focus thing. Now they’re displayed.

MBH: This is very strange. Let me just…

RPR: You could switch from sharing your desktop to sharing a tab. Maybe that’s the difference. That is clear. That’s better.

MBH: Yeah, perfect. So, this month, the meeting will be about our experience in using a theorem prover to mechanize a fragment of the Temporal proposal. And we will be continuing with the monthly meetings—the last Wednesday of every month, 4 p.m. Central European Time—so please join if you’re interested in that kind of research direction. We are also planning TG5 workshops for the hybrid meetings this year. So far we only have the one in the Netherlands confirmed, in May. We’re now trying to get a confirmation for a TG5 workshop at the 113th plenary in New York, but there’s no confirmation as of now. And that’s it.

RPR: Any questions for Mikhail? No? We should move on. Thank you, Mikhail.

MBH: Thanks.

## Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

RPR: Then the code of conduct committee. Chris?

CDA: Yeah. Nothing new. We have no new reports. I think it’s been quiet, which we like. So no real updates from us. As usual, if you’re interested in joining the code of conduct committee, please reach out to one of us. Thank you.

## Normative: Add 1 new numbering system "tols" for Unicode 17 #1035

Presenter: Shane Carr (SFC)

* [PR](https://github.com/tc39/ecma402/pull/1035)

RPR: Thank you. So our first normative PR: this is Shane with add one—excuse me, add one new numbering system, "tols", for Unicode 17, PR number 1035.

SFC: All right. I’ll just share my tab here. There’s not a whole lot to show. So I can give a little bit of background: every year over the last five or six years, we’ve had a pull request to update this table. I’ll just open up the contents of the pull request.
So we have this table, and this table has a list of all the numbering systems that we require engines to support. And we typically update this table every time Unicode or CLDR adds another numbering system. And in this release, there’s one new numbering system, which you can see here. This list is a minimal list—engines are always allowed to support more than what’s in this list—but we like to keep it up to date. We usually wait about three to four months after the corresponding ICU or CLDR release before updating the table, just to give engines a little time to get their updates in. So that’s what happened here. You can see the notes from the TG2 meeting. We had feedback from—we had—this is probably just…

SFC: Yeah, so we got feedback from some of the implementers, including YSZ from Apple, who confirmed that it’s fine to start requiring this numbering system, because it’s in ICU 78 and engines are now largely upgraded to ICU 78. And that’s basically all I have to show. So we’re just seeking TG1 approval for updating the table to require the new numbering system.

RPR: You have DLM with support. DLM, do you want to speak?

DLM: Yeah, just to say we don’t have any concerns about this. It seems fine.

RPR: All right. I guess, are there any objections to this PR? No? Then I think we can say that this PR has consensus. Is there anything more, SFC?

SFC: That’s all—the conclusion. I’ll write my summary in the notes.

RPR: Thank you so much for remembering the essential documentation that we ask every presenter to write, the summary and the conclusion. All right, thanks, congratulations.

### Conclusion

ECMA-402 Pull Request #1035 is approved by TC39-TG1.

### Speaker's Summary of Key Points

We have been updating the numbering system table on an annual basis. Support for the change has been vocalized by multiple engine implementers.
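For context, the numbering systems in this table are the ones applications can request through `Intl.NumberFormat`'s `numberingSystem` option. The sketch below uses the already-required "arab" system, since "tols" only works on engines that have picked up Unicode 17 / ICU 78 data:

```javascript
// Requesting a specific numbering system via Intl.NumberFormat.
// "arab" is used here because it is already in the required table;
// "tols" depends on the engine having ICU 78 / Unicode 17 data.
const nf = new Intl.NumberFormat("en", { numberingSystem: "arab" });

console.log(nf.resolvedOptions().numberingSystem); // "arab"
console.log(nf.format(123)); // Eastern Arabic digits: "١٢٣"
```

Unsupported numbering systems are silently ignored (the formatter falls back to the locale default), which is why the table only sets a required minimum.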
+
+## Upsert for Stage 4
+
+Presenter: Dan Minor (DLM)
+
+* [proposal](https://github.com/tc39/proposal-upsert)
+* [slides](https://docs.google.com/presentation/d/1PpQsFGM1V8miLf5Fp5AQO_VASP2WFlu2lM4CkKozrBo/edit?slide=id.g2fb628be09c_0_0#slide=id.g2fb628be09c_0_0)
+
+RPR: So next up is our first proposal advancement. This is DLM with upsert for Stage 4.
+
+DLM: Sorry, one moment. I’ll get the right window up for myself.
+
+RPR: It looks like a good slide. It looks like a map.
+
+DLM: Yes, that’s perfect. Thank you. No, it’s my first presentation on a new laptop. I’m happy that I got everything set up beforehand. Okay, so this morning I would like to talk about upsert for Stage 4. Quick reminder of the motivation: a common problem when we’re using a map is how to handle doing an update if you’re not sure whether the key is already in the Map or the WeakMap. So obviously you can write code like this. That’s a little bit wordy, and perhaps not as efficient. So upsert’s proposed solution is to add two methods to the Map and WeakMap prototypes: `getOrInsert`, which will search for the key in the map and return the associated value if present, otherwise insert the value argument and return it; and another version called `getOrInsertComputed`, which takes a callback function and does the exact same thing, except the value comes as the result of calling the callback function. So I believe this is now ready for Stage 4: two compatible implementations which pass the test262 tests, namely Safari and Firefox. Safari has been shipping for a long time; us more recently. There are also implementations in Porffor, Kiesel, Boa, and V8. And we have sufficient field experience with shipping this, and I also have a pull request that has sign-off from the editors. With that, I ask for consensus for Stage 4.
+
+RPR: All right. So do we have support or objections for Stage 4?
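The "code like this" slide isn’t captured in the notes. As a sketch, here is the verbose pattern alongside a stand-in helper; `mapGetOrInsertComputed` is a hypothetical function mirroring the proposed `Map.prototype.getOrInsertComputed` semantics, for engines that have not shipped the method yet.

```javascript
// Stand-in helper mirroring the proposed Map.prototype.getOrInsertComputed:
// return the existing value for a key, or insert the callback's result.
function mapGetOrInsertComputed(map, key, callback) {
  if (map.has(key)) return map.get(key);
  const value = callback(key);
  map.set(key, value);
  return value;
}

// The verbose pattern the proposal replaces:
const groupsOld = new Map();
if (!groupsOld.has("a")) groupsOld.set("a", []);
groupsOld.get("a").push(1);

// With upsert semantics, the same update is a single call:
const groups = new Map();
mapGetOrInsertComputed(groups, "a", () => []).push(1);
mapGetOrInsertComputed(groups, "a", () => []).push(2);
console.log(groups.get("a")); // [1, 2]
```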
I’m seeing two messages of support or three from Nicolo, Michael Ficarra, Olivia, Dmitry, Duncan, so we have lots of support. + +DLM: Thank you. + +RPR: Are there any objections? There’s no objections, so congratulations, Dan, you have Stage 4. + +DLM: Thank you, everyone. Let me stop sharing. + +RPR: You have to imagine the round of applause that everyone is making, but they’re all muted, so that’s the only reason you can’t hear it. + +DLM: I’ll add conclusion and summary to the notes. + +RPR: Thank you so much. All right. Great. That was very quick. + +### Speaker's Summary of Key Points + +* Safari and Firefox, as well as others have shipped implementations that pass test262. +* The specification pull request has been approved. + +### Conclusion + +Proposal Upsert has been approved for Stage 4. + +## Temporal update and needs-consensus PRs + +Presenter: Philip Chimento (PFC) + +* [proposal](https://github.com/tc39/proposal-temporal) +* [slides](https://ptomato.name/talks/tc39-2026-01/) + +PFC: (slide 1) Welcome, everybody, to this lovely remote meeting of TC39 where I’m going to present a status update and a couple of proposed changes to Temporal. My name is Philip Chimento, I’m a delegate for Igalia and we’re doing this work in partnership with Bloomberg. + +PFC: (slide 2) So since the last time I can report that the test262 coverage has been expanded by quite a lot. We’ve noted where existing implementations failed these new tests and filed bugs. We’ve filed specific bugs for things that we uncovered using snapshot testing, for example, but of course, every implementation should also look at their own test262 failures. We have two implementations at pretty much 100% test conformance. This is an important milestone because it’s one of the prerequisites for Stage 4. I'll show a little bit later in a graph, how everybody’s doing. 
And then today in this presentation, I’ll be proposing two normative changes from the Temporal champions group, both of which are intended to eliminate surprise in edge cases. So we’ll see more later on. + +PFC: (slide 3) So here is the graph that I promised. You can see the red solid bars. These are the percentage of the test262 test that are passed with the Temporal feature. And then the blue hatched bar is the percentage of test262 tests with the Intl Era/Month Code feature that each implementation passes. So you can see at the top here, we have V8 and SpiderMonkey within half a percentage point of 100%. And several others falling closely behind. For the first time in this graph I’ve added the Intl Era/Month Code bars as well. Note that the Temporal bars are excluding the Intl Era/Month Code tests. The Intl Era/Month Code bars are important because we want to move Intl Era/Month Code to stage 4 at the same time as Temporal. So that’s why I’m now tracking how close we are to having implementations that pass all of those tests as well. + +PFC: (slide 4) So this is pretty exciting. It’s really cool to see implementations getting closer and closer. And of course, here we have some news. SpiderMonkey’s implementation was already unflagged on the web last year, and new for this time is V8’s implementation is also available unflagged on the web. So that’s another milestone in terms of prerequisites for Stage 4. So now we have two implementations shipping to users, and we also like to note that the GraalJS implementation is scheduled for unflagging in the next release, although, I couldn't find a specific date for that release. Other things relevant to a future Stage 4 request is that we are requesting Stage 3 for Intl Era/Month Code in this meeting. You’ll see that on the agenda. And then what remains for us to do is investigate the last conformance bugs in implementations. 
There’s still a small number of tests remaining in test262 staging, and we need to update and expand those as needed in order to move them to the main test corpus. There are still a couple of identified gaps in test coverage that need coverage, and then we should be ready to move Temporal to Stage 4 together with Intl Era/Month Code. So that’s approximately the shape of these plans. + +PFC: (slide 5) I will move on to the changes that we’d like to propose. One of these was actually a really surprising bug. You can see this little code snippet here: if you have a PlainYearMonth instance and you subtract a month with overflow reject, under some conditions, you can just flat out get an error instead of the previous month, as you would expect. And this bug has actually been sitting in the spec text for over five years, so apparently this feature isn’t used much. We found it using the snapshot testing technique I talked about in the last meeting. This is actually caused by a bug in the addition and subtraction code for PlainYearMonth that existed to accommodate durations with days. Now, you might ask why would you subtract days from a PlainYearMonth and that is what my next slide is about, because we are actually going to recommend removing the feature of subtracting days from PlainYearMonths because apparently nobody is using it. It fixes the bug on the previous slide and reduces complexity. + +PFC: (slide 6) This is kind of the philosophy we’re taking at this point in the champions group. If we have functionality that has a bug and the functionality doesn’t seem to be essential, then we consider removing it. That’s the change that we’re proposing, just, you know, make it so you can only add and subtract years and months from PlainYearMonth, and not any other unit. We did make the decision to recommend removing that functionality fairly late, so that PR was not available at the agenda deadline. 
If you did not have time to review it and for that reason, don’t want to lend your consensus, that’s totally understandable. In which case, we do have a fallback bug fix that keeps the functionality but specifically fixes the bug, that was present on the agenda at the agenda deadline. So if PR 3253 doesn’t gain consensus, we would like to propose PR 3208 to fix the bug. + +PFC: (slide 7) The other change I wanted to talk about: there is a surprise in the toLocaleString method of the various Plain types. The surprise is that they are subject to the system's time zone, which is kind of surprising, because Plain types, their whole thing is not being subject to time zones. So this subsequently went unnoticed, because in most cases you won’t see a difference. You only notice it when there’s a daylight savings shift in the timezone. So this year in my locale, on March 8th, there will be a shift that skips 2 a.m., and so you get this unexpected bump to 3 a.m. when you format the object. And then, over here is the famous timezone that skipped a whole day when they crossed to the other side of the International Date Line and they skipped December 30th, 2011, so if you format this date and your computer is in that timezone, you’ll get a different day entirely. This was discovered by fabon and Adam, two community members who are each developing tools downstream of Temporal, and they’ve been really quite involved in testing and giving feedback on the proposal, which I’d like to shout out. We looked at this in the champions group. We all agreed Plain types are wall-clock times, they should not be subject to the formatter’s time zone. And there’s a fix for this in this pull request right here, PR 3246. + +PFC: (slide 8) I wanted to make a quick summary of the changes that we’ll see in the next agenda item. They belong to a different proposal, but they may affect Temporal implementations. I’ll just note them here. 
There’s a change to which calendars Intl Era/Month Code must and may support. There’s a clarification of the behavior in date differences with calendars that have leap months. And there’s a PR that fixes the reference year for PlainMonthDay, which so far is implementation-defined. This PR defines it for lunisolar calendars, particularly the Chinese and Korean ones that have leap months that occur very rarely. So that’s what you’ll be able to look forward to in the next presentation. Have I got any questions on the queue?
+
+DLM: So far no questions. Just some support for the normative changes. If you’d like to go to those?
+
+PFC: Yes, please.
+
+DLM: Sure. I’m first. Yeah, I support the normative changes, and yes, for the first issue, I prefer removing subtracting days as opposed to patching the behavior. And also on the queue, Linus has a plus one for the normative changes and \[INAUDIBLE\]
+
+PFC: Thank you. If there’s nothing else on the queue, I will move to request consensus for the pull request to make it so that you can only subtract years and months from PlainYearMonths and the pull request to make Plain types not consider the timezone when formatting.
+
+DLM: Yeah, we’ve already heard some support. I guess we should make sure there’s no opposition. Okay, I think you have consensus.
+
+PFC: All right, great. Thank you very much. I’ve written a summary, which I will copy into the notes, and that’s it for me. I guess we’re done a lot quicker than the time box.
+
+DLM: Great. Thank you, Phil. And, yeah, so, Rob, if you’re there, I’ll hand it back to you. For Intl Era/Month Codes next.
+
+### Speaker's Summary of Key Points
+
+We outlined a path to stage 4 for the proposal and listed the blockers.
+
+Two normative changes reached consensus, to eliminate surprising behaviour in:
+
+* `Temporal.PlainYearMonth` subtraction
+* toLocaleString methods of Temporal.Plain___ types.
+
+We summarized the related changes happening in the Intl Era/Month Code proposal as it goes to stage 3.
+
+### Conclusion
+
+* PR [tc39/proposal-temporal#3253](https://github.com/tc39/proposal-temporal/pull/3253) reached consensus.
+* The fallback PR [tc39/proposal-temporal#3208](https://github.com/tc39/proposal-temporal/pull/3208) was not needed and is withdrawn.
+* PR [tc39/proposal-temporal#3246](https://github.com/tc39/proposal-temporal/pull/3246) reached consensus.
+
+## Intl Era/Month Code for Stage 3
+
+Presenter: Ben Allen (BAN)
+
+* [slides](https://notes.igalia.com/p/era-monthcode-stage-3-111th-plenary#/)
+* links:
+  * [PR #99](https://github.com/tc39/proposal-intl-era-monthcode/pull/99)
+  * [PR #101](https://github.com/tc39/proposal-intl-era-monthcode/pull/101)
+  * [PR #102](https://github.com/tc39/proposal-intl-era-monthcode/pull/102)
+  * [PR #108](https://github.com/tc39/proposal-intl-era-monthcode/pull/108)
+
+BAN: About to share. I want to say thanks to Philip for all his work on Era/Month Code and for getting through the small normative changes. All right.
+
+BAN: Okay. So this is Intl Era/Month Code for Stage 3; we have some small normative changes before we ask for Stage 3. Just as an overview, it adds Temporal support for a number of non-ISO 8601 calendars. \[inaudible] has been in ECMAScript in practice. CLDR and \[inaudible]. Temporal adds calendar \[inaudible]. We would like to have that for non-ISO calendars. We don’t want to have to specify the arithmetic for every calendar, but to put guardrails on behavior in order to avoid divergences. The idea is to avoid overspecifying the behavior, while still minimizing the need for implementation divergence. Okay.
+
+BAN: I have gone over this at the last meeting. Yes. We are adding descriptions of the supported calendars. We will see in one of the normative PRs that we have a specified list of calendars instead of an open list. Era codes and aliases as standardized in CLDR.
And the valid ranges of eras and years for every calendar. Okay.
+
+BAN: And specifics on which eras and numbering systems are supported. We are adding constraining behavior for when adding years in lunisolar calendars, which present difficulties: they don’t behave like solar calendars, with things like leap months. Likewise, algorithms for the difference between dates in CLDR calendars. Okay.
+
+BAN: So, editorial changes. We have had a number of editorial changes related to the Stage 2.7 feedback, all of which we have addressed. We have also clarified the behavior of the Buddhist calendar. Essentially, we don’t necessarily endorse this behavior, but we would like to match existing behavior to avoid potential problems. The story is that there was a calendar reform for this calendar in 1941. Before that date, the new year was in April; this switched to January to match the Gregorian calendar. All the relevant other systems, like the JDK and .NET, treat the dates around that new year change the same way, and we are matching that behavior. Again, we are essentially going to keep doing the thing we have been doing, and the reason why is to match the behavior of other systems.
+
+BAN: Okay. So then we have our small normative changes. This one is intended to avoid bugs that already exist at other levels, like the CLDR level. The Japanese imperial calendar was reformed in the year 6 Meiji, which was 1873 in the Gregorian calendar. Before that date, Japan used a lunisolar calendar. The calendar reform changed it to a solar calendar that behaves like the Gregorian calendar, but with eras marked at the start of every new imperial reign. But the calendar wasn’t actually in use for the first five years of the era; the reform happened in 6 Meiji. So this PR changes the behavior of this calendar to indicate that dates before the calendar actually came into use are resolved as dates in the Gregorian calendar.
This is to prevent problems with implementations treating those dates as being in the old lunisolar calendar. Previously we used a hybrid system where, from 1 Meiji onward, we used the Japanese calendar, and before 1 Meiji we used the Gregorian calendar. Now we are doing the same thing, but starting at 6 Meiji, to avoid the problems related to extending the Japanese imperial calendar from 6 Meiji back to 1 Meiji. Okay. So that is one of the normative changes that we will be asking for consensus on.
+
+BAN: The next is relatively more straightforward. There is a calendar, islamic-rgsa, for the Islamic calendar as implemented in Saudi Arabia. It was requested by Oracle, but never used for any purpose. This calendar, and another similar one, the plain `islamic` calendar, are essentially invitations to make mistakes, because they are not the right calendars to use. They have no usage on the web platform, and no country actually recognizes islamic-rgsa. It is essentially a calendar that no one uses: no country uses it, and nothing on the web uses it. So we are going to ignore it.
+
+BAN: So this PR specifically causes the value of “ca” (calendar) to be removed from the options when the value is islamic-rgsa, which again results in requests to use this calendar being ignored. The calendar’s only effect on the world is as a potential footgun for developers, because it should never be used. And since it should never be used, we want to ignore it. So that’s another normative change.
+
+BAN: Also, while we were discussing this change in TG2, we decided to restrict the list of available calendars. The previous wording was that the list must include all calendars from the calendar types table; the new wording is that "the list must consist of all calendars from the calendar types table". There’s precedent for making this a closed list: the list of units we use is closed, to avoid interoperability problems. And calendars are more complex than units.
And so previously, by having the open list, we were treating calendars as equivalent to numbering systems, which are small and not complex, so allowing implementations to add ones beyond those we list is fine. Calendars, on the other hand, are complex, and there could be implementation differences; so, like with units, we want to make it a closed list.
+
+BAN: Okay. This one (108) is one of the more complex ones. We went back and forth in TG2 many times until we could settle on a solution. This is about reference dates for PlainMonthDay in non-ISO 8601 calendars. A PlainMonthDay has to be a date that actually exists, consisting of the month, the day, and a reference year in which that month and day occurred. So in the Gregorian calendar and ISO 8601, for example, 1972 is used as the reference year for PlainMonthDays, rather than 1970, because in 1972 all the days in those calendars existed: the February 29 leap day exists in 1972. If you want to choose a reference year that gives real dates for every month and day you might use, you have to pick 1972, because the leap day exists there. But for some calendars (the Chinese calendar, the Dangi calendar, and the Islamic calendars), there is no year that contains all months and all month lengths. In the case of the lunisolar calendars, there are leap months: months that are inserted in some years. There are also leap days within leap months. And some of those leap months are very, very rare. For example, in the Chinese calendar, a leap month in the winter almost never happens. The last time some of these occurred was well before the calendars in question were standardized, and some leap month and leap day combinations haven’t occurred in recorded history and will not occur at any point in the foreseeable future. So we have these days that could hypothetically exist, but haven’t existed since before writing.
+
+BAN: So this PR provides a table of reference years for all of the leap days and leap months that have meaningfully existed. These are the values you get by following the algorithm that is present within the spec text; the hard-coded table here is for, let’s say, the non-problematic leap months and leap days. Instead of arbitrarily picking reference years for those very rare leap months and leap days, we constrain: if a PlainMonthDay specifies one of those days, we want to reject it when overflow is set to reject, on the grounds that those months and days haven’t existed in a meaningful way either, and clamp to the corresponding non-leap month or non-leap day when overflow is set to constrain. And for leap days within leap months, we constrain to the last day of the month, to match the behavior of constraining February 29th to February 28th. And that’s it for that one.
+
+BAN: Then we have a bugfix. So: normative, normative, normative, and then a bugfix. One of the things that we have been taking steps to do in Intl Era/Month Code is, whenever possible, to specify things in algorithm steps. A PR from mid-2025 replaced the prose algorithm for NonISODateSurpasses with steps, and that introduced subtle changes within the Hebrew calendar. With the current spec text, the problems come up when attempting to determine the amount of time between certain combinations of dates. So, for example, in the Chinese calendar, the amount of time from one day until a later day in the non-leap month is currently calculated as one year, according to the algorithm steps as introduced in 2025. This PR returns to the behavior that we had previously. So yeah.
First, I would like to open up for discussion before I ask for consensus for these normative changes.
+
+RPR: There is one clarification on the islamic-rgsa topic from SFC.
+
+SFC: Yeah. To use more precise language for the situation with rgsa: the specification for the calendar that was requested in the early 2010s was never implemented, and because it was never implemented, it’s only a footgun for engines to ship it, because no one implements it as it was specified to be implemented.
+
+RPR: All right. The queue is empty.
+
+BAN: All right. And then I would like to ask for consensus on the normative changes.
+
+RPR: DLM supports all of the normative changes.
+
+RPR: SFC?
+
+SFC: Yeah. This pull request, 111, has had an interesting discussion, almost exclusively between myself and PFC, on which approach to take here, and both approaches have merits of their own that are different from each other. So there’s been a little bit of a lack of engagement on which approach to take, but the straw poll among the Temporal champions suggested that PFC’s approach has more consistent behavior, or what some would consider consistent behavior: what arithmetic does with out-of-range days of months should be applied to leap months as well. So that is the current state of this, and why the champions have a very, very weak preference for moving forward with this pull request. If anyone else has a chance or is interested in looking at this and considering the different tradeoffs, that would be very much appreciated. But in the absence of that additional information, I think we are currently planning to move forward with this PR 111.
+
+MF: Yeah. I was looking at the table in issue 96 that was showing a second ago, and I noticed that in the current-result column, when you flip the operands, the result is not the negation of the other result.
And it seems like, in the second and fourth rows (I don’t know how to number them, but you can see the change happen now), if you flip the operands, the result is now the negation of the previous result with the operands the other way around. Is it always the case that the result is just the negation if you flip the operands? If so, that seems like an improvement.
+
+SFC: Yeah. We absolutely can’t ever completely guarantee what is called… associativity? commutativity? I usually call it round-trippability. This makes one case more round-trippable, but it’s not a general property that we guarantee. The only property we guarantee is that if the difference from dateA to dateB is a certain duration, then A + the duration = B. Right? We have that relationship, but we don’t have the other relationship, which is, like, the negation of the formula; we can’t ever guarantee that. If you have dates A and B, you can do A − B or B − A, and those might give you different results besides just the negation. And that’s something we have never been able to guarantee. This property has been discussed at great length among the Temporal champions, and in presentations before, about how we can’t completely guarantee the property, but we take steps to make it so the property applies in more scenarios than others. That’s kind of the nature of this change.
+
+MF: So this property holds in some cases where it did not previously hold. Are there any cases where the property no longer holds where it previously did?
+
+SFC: No. It only takes cases where it didn’t hold and makes them hold.
+
+MF: That sounds like an improvement to me.
+
+PFC: I think SFC said what I was going to say. Maybe I could add an illustrative example.
If you are computing the difference between January 31st and February 28th, then you can choose to return a result of one month because your January 31st + one month would land on February 31st, which doesn’t exist, so it clamps to February 28th. Or you could choose to return a result of 28 days. With the 28 days the result is more round-trippable because you didn’t clamp a non-existent date. That's analogous to what happens here. But by definition, you can’t get that in all cases. + +RPR: Thank you, Philip. Ben? Do you want to go? + +BAN: Yeah. So I would like to ask for consensus for these normative PRs + +RPR: All right. I think we heard support already earlier from one person, at least. All right. And do we have any objections? Any objections to Intl Era/Month Code for Stage 3? No. We have no objections. So congratulations, Ben, you have Stage 3 for this proposal. + +### Speaker's Summary of Key Points + +* Several normative PRs submitted + * Meiji start date + * Islamic-rgsa removal + * Behaviour for very rare leap months/days in certain calendars +* Ask for Stage 3 + +### Conclusion + +* All PRs approved +* Stage 3 achieved + +## Deferred re-exports update + +Presenter: Nicolò Ribaudo (NRO) + +* [proposal](https://github.com/tc39/proposal-deferred-reexports/) +* [slides](https://docs.google.com/presentation/d/1lrYTFTYrlhWTZ1tXdaMdJxcOWA7LU6AFkor339wNEkE/edit?slide=id.p#slide=id.p) + +NRO: So this is an update about the deferred re-exports proposal. I am NRO, I work with Igalia and doing this together with Bloomberg. + +NRO: So if you remember last plenary, we asked for consensus for Stage 2.7, but we couldn’t solve some issues that came up with the proposal. The two concerns were, one about the interaction with the namespace imports proposal, and the second one about spec complexity of having multiple spec methods to evaluate modules. We have been working with these problems. 
For the second one we have a pull request that starts to solve this problem, but this presentation focuses on the first one. Especially because, after talking with delegates, it seems that last time it was not super clear what the concern behind the block was. We do not have a solution yet, so I will explain the problem today, and we have some ideas for solutions. The main goal is to get some feedback from you all.
+
+NRO: So there’s a performance cliff when using the two proposals together. So let’s see what that means.
+
+NRO: Assume you have this example. You have some app that imports some library. This library has a bunch of exports in the library entry point, and then, when clicking the button, it will use the `mainThing` imported from the library. This chart on the right shows the cost of using the library. At startup, in the initial phase of our app, we have to load the three different files (the library and its two dependencies) and execute all of them. When we use the import defer proposal, if we change our app to import the namespace of the library with import defer, rather than the whole thing, we get deferred execution of the library. So we still load these three pieces, but the execution of the whole library, the three files, happens later on, when we actually need it.
+
+NRO: Another way we can improve the situation is if the library author starts using the export defer proposal. So our application is still using the simple import syntax, but now the library is using deferred re-exports for its internal files. What happens now with the export defer proposal is that our app causes the main library file to be loaded and executed, and the utility files are just gone: because they are deferred exports and we are not importing them, it’s as if they are not there.
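The evaluation-on-property-access mechanism that import defer relies on can be mimicked in plain JavaScript. This is a rough analogy only, not the proposal's actual semantics; `makeDeferredNamespace` is a hypothetical helper standing in for a deferred module namespace.

```javascript
// Sketch: a "deferred namespace" whose module body runs on first property
// access rather than at startup, mimicking the import defer idea.
function makeDeferredNamespace(evaluate) {
  let ns = null;
  return new Proxy({}, {
    get(_, key) {
      if (ns === null) ns = evaluate(); // evaluation triggered lazily
      return ns[key];
    },
  });
}

let evaluated = false;
const lib = makeDeferredNamespace(() => {
  evaluated = true; // stands in for running the module body
  return { mainThing: () => "done" };
});

console.log(evaluated);       // false: nothing ran at startup
console.log(lib.mainThing()); // "done": evaluation happened on first access
console.log(evaluated);       // true
```

Note that loading is not modeled here at all, which is exactly the gap NRO describes next: deferring evaluation still requires everything reachable through the namespace to have been loaded.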
+
+NRO: However, when we try to use both things together: you have one thing that improves performance, and a second thing that improves performance, so the intuitive thing is to put them together to get even better performance. However, what happens here is that we are now loading all of the library, even if we are only deferring execution of the part of it that we care about. And the reason we’re loading everything is that import defer forces you to use a namespace object, so that when evaluation happens on object property access, the deferred exports of the library actually do need to be loaded, because they might be synchronously used later. So the only benefit we are getting here, compared to not using export defer and just using import defer, is that we are now skipping the execution of the two internal files of the library; they will be executed later, when we access those bindings.
+
+NRO: So we have some benefits given to us by import defer, and some by export defer, but if we put them together we don’t get the union of the two benefits. The cost goes up: it’s a performance cliff.
+
+NRO: So we have some ideas for a solution; we are still talking through them. I guess there’s also the option of saying, well, actually, just don’t use them together unless you actually measure the performance impact of the change. Maybe it’s a good recommendation to not apply performance tricks without measuring them. But still, it would be great to solve this problem. So here are some of the ideas we have been talking about.
+
+NRO: One option (this is the first one we considered) is to say, well, given the problem, we cannot really make export defer affect loading, i.e. tree shaking. We go back to the origin of the proposal, so that export defer would not affect loading anymore, but only execution. This means that when using export defer, instead of getting the tiny bar, we get the loading of all three files, and we execute only the part we need.
When we combine the proposals, we get the union of the benefits.
+
+NRO: This is making the blue part smaller. And the last column has both: making the blue part smaller and deferring its execution.
+
+NRO: The second option here is to allow listing the names that we care about in namespace imports. The reason the import defer proposal uses namespaces (you have to use `import defer * as namespace`) is not that it needs to import everything, but that we need an object to install execution triggers on. So what we need is an object, and maybe we could extend the namespace-based import syntax to allow specifying the bindings you care about. The advantage of this is that it also works with dynamic imports. Dynamic import is another feature that, like import defer, forces you to use namespaces today, because it can return only one thing: since it returns a value, the one thing it can return is a namespace object. And this point was also brought up last time; there was a question, I believe from Mozilla, with an observation about how export defer doesn’t work super well with dynamic imports.
+
+NRO: And here there’s an example that also combines this with import defer; the first example is the first line here. By doing this, the second column is the benefit of import defer; then the benefit of importing a specific binding while using export defer; then we have the regression when moving from the named `import {mainThing}` to import defer together with export defer; but then we can use import defer with the specific binding listed on the namespace object, together with export defer, and we actually get the union of the benefits here. And the bottom right is the best-case scenario we can ever have: given the shape of the library, it’s loading exactly what it needs and executing what it needs, when it needs it.
+
+NRO: A third option we are talking about is, for namespace imports without listed names, loading only the non-deferred re-exports.
So given that this is not explicitly naming anything, it doesn’t load the modules corresponding to the deferred bindings. And import defer behaves the same. This means that in the example here, where we are getting the library and the library has both deferred and non-deferred exports, something will be in the temporal dead zone, at least until somebody else loads this module. So accessing it can throw for something that appears to be present.
+
+NRO: And maybe we could have some other way, like some attribute, to explicitly list which imports we care about. By default we only load the non-deferred ones, as if there were an import that explicitly defers those bindings, so you don’t need to load them.
+
+NRO: And again, this would give us the optimal performance profile here, again with the caveat that some accesses will throw, because the corresponding module is not loaded, even though it’s technically reachable from JavaScript code.
+
+NRO: So yeah. We don’t have a solution yet. I am hoping that we have some sort of consensus in the modules group by the next plenary. What would be useful now is to get feedback from you all: what is your first reaction to these possible directions for a solution? Does anything feel very good or absolutely terrible? So we know better what to focus on.
+
+DLM: The queue is currently empty.
+
+NRO: Like, we do have preferences within the group. I think my personal preference is, if we have to change something in the proposal, it’s to allow you to list some bindings. I know that GB, who is also very active in modules, has a preference for Option 3, where we just skip loading some things because the name is not explicitly there. Okay.
+
+DLM: Now we have a queue. DMM?
+
+DMM: In general, I like Option 2. But I am curious what the interaction is between explicitly naming things and having, say, an import defer, and then those bindings not being present in the module?
Do we get the errors when the module is loaded, or as those individual names are accessed, or…? What do we think the semantics would be there?
+
+NRO: Details to be defined. But yes, there is going to be some validation here. On the spot, I could say either way; I’d like to check with implementations. If you try to access names that are not listed, I expect it to just return undefined. I would expect this thing to be like an object with only the getters that are listed there. Do you have a preference on how that should behave?
+
+DMM: I am not sure I have a preference at the moment. I am thinking about how we can replace some proprietary module stuff we had with something more standard, but we don’t have anything that is explicit. I could live with things being undefined, or with an error being thrown when an attempted access resolves to something not found. I think either would be fine. We might prefer some validation steps on the build side to make sure people are importing names that do exist, at least in internal modules.
+
+NRO: Okay. Thank you. I will reach out to you to learn more about this, just to see if we can find some alignment here. GB?
+
+GB: Yeah. I just wanted to briefly confirm that I’ve been really happy with the way this conversation has been going. It’s been a number of discussions over the past couple of months. And I really like where the discussion has ended up, as a reframing almost towards optionality—the key here is that this is an optionality scheme, and by making that and the intent a little bit more explicit, we can directly avoid the footgun. One thing I will say briefly as an analogy, for those familiar with the Rust and Cargo ecosystem: if you think of features in Cargo, features aren’t necessarily enabled by default; there’s a default set of features.
The idea that you can pick the features is, I think, key to avoiding the footgun of all features being enabled by default. And moving towards solutions like that is great. That’s all I want to say.
+
+NRO: Thank you. I see that the queue is empty now. Please, if you have any feedback, feel free to join the Matrix modules room or one of our meetings. They are every second Thursday, at a time that works for both Europe and the Americas. So please come by if you feel you need to learn a bit more or want to form more of an opinion about the topic. And with this, I think I am done. Thank you all, again.
+
+RPR: Thank you, Nicolo. Could you write down the summary?
+
+NRO: Yeah.
+
+RPR: For this, in the notes. Thank you.
+
+### Speaker's Summary of Key Points
+
+* Last time the proposal was blocked on two concerns: spec complexity and interaction with import defer. The champion, together with folks working on module harmony, is working on it. The presentation focused on the problem, and drafted three solutions:
+
+* **Option 1**: Make export defer only defer execution and not skip loading
+* **Option 2**: Allow an `import defer { something, other } as ns from …` syntax to allow getting an object representing the module that doesn't force loading everything
+* **Option 3**: Make namespace imports by default not load optional re-exports, making accessing them an error unless some other code causes them to be loaded.
+
+### Conclusion
+
+No conclusion
+
+## ECMA404 Status Updates
+
+Presenter: Chip Morningstar (CM)
+
+RPR: All right. Great. So a surprise topic: CM is back with ECMA404, and I think there might be a meaningful update.
+
+CM: First of all, I need to apologize. I had this meeting entered in my calendar as being in Central Time. I was one time zone off, and so I was late. Surprise! Most meetings, my challenge is coming up with yet another way to say nothing has changed. And indeed, nothing has changed. The spec is essentially frozen.
However, one thing with respect to the spec document itself: since it predates a lot of our more modern workflow, the spec itself has been in the form of a Microsoft Word document. And since the spec is unchanging, there was no need to update it to ecmarkup. However, last month, in a fit of OCD that left me absolutely breathless with admiration, JHD produced an ecmarkup version of the spec document. So we now have that. I am not entirely sure what the right thing to do with it is, but it seems like it would be appropriate to publish it.
+
+AKI: Give it to me.
+
+CM: JHD can give it to you; he knows where it is. I have gone through it, and I have verified that not a single word or diagram of it is altered, so I guess this would fall under the heading of an editorial change. But it’s there. So I just figured I should inject this into the process somehow. And also, I extend my thanks to JHD, and I would love it if everybody else would also.
+
+RPR: Thank you, JHD.
+
+RPR: Lots of emojis in the Google…
+
+CM: That’s it.
+
+RPR: ECMA404 continues to go from strength to strength.
+
+### Speaker's Summary of Key Points
+
+* There’s an ecmarkup version of the ECMA-404 spec. It is the same content.
+
+## Withdraw function.sent
+
+Presenter: Jordan Harband (JHD)
+
+* [proposal](https://github.com/tc39/proposal-function.sent)
+
+JHD: So, the `function.sent` proposal. It was originally championed by AWB, many, many years ago. It hasn’t come back to committee since—I don’t know—2016 or ‘17 at the latest. I brought up withdrawing it in 2017 or ‘18, I think, and JHX decided he wanted to champion it. In the intervening time he has not brought it back and doesn’t anticipate doing so anymore. I think it should be marked withdrawn, and if anyone wants to champion it in the future, it could be pulled back. But I think it’s the correct signal that something that has had no movement in a decade is withdrawn. Please let me know if there’s any objection to that.
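For readers unfamiliar with the proposal: `function.sent` is a meta-property that would expose, inside a generator body, the argument passed to the current `next()` call—including the very first one, which today is unobservable. A runnable sketch of today's limitation (standard JavaScript only; no proposal syntax):

```javascript
// A generator used push-pull must be "primed" with a first next() call
// before values can be exchanged; any argument passed to that first call
// is silently discarded -- the fencepost problem the proposal targeted.
function* accumulator() {
  let total = 0;
  while (true) {
    // `yield` evaluates to the argument of the *following* next() call,
    // so the first next()'s argument can never reach this expression.
    total += yield total;
  }
}

const gen = accumulator();
const primed = gen.next(); // priming call; its argument would be lost
console.log(primed.value); // 0
console.log(gen.next(5).value); // 5
console.log(gen.next(3).value); // 8
```

Under the proposal, reading `function.sent` inside the body would give the current `next()` argument directly, removing the need for the priming convention; the exact semantics here are as described in the proposal README, not finalized spec text.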
+
+RPR: So this has the—sorry. MM is on the queue.
+
+RPR: MM? Would you like to speak?
+
+MM: Yes. Can you hear me?
+
+RPR: Yes.
+
+MM: I wanted to—since this thing has been idle for so long and is being turned over to the committee, I would remind people what problem it solves—it would be nice to get a quick reaction on whether anybody feels any urgency with respect to that problem. I do not.
+
+MM: The problem that it solves, or the problem that motivated it, is that generators used as push-pull have a fencepost problem. You have to seed the generator with a first call to next before you can pull anything from it, and that first call to next—in push-pull cases, often it doesn’t have anything to push, because the things to push are in reaction to the things pulled. For that case, what `function.sent` was supposed to do is somehow address that, and I don’t remember how. So if there is anybody that feels urgency with respect to that, I would like to hear about it. I would like to find that out.
+
+MM: And in any case, I certainly do not object to withdrawing it.
+
+RPR: Thank you.
+
+JHD: Yeah, the hourglass is because I added it late, but, like, you know, if nobody is objecting for any reason, I doubt they will object for that reason.
+
+RPR: CDA?
+
+CDA: Yeah. This is kind of on the subject of the fact that it was put there late. And I think it’s probably fine, and anyway, it could come back for advancement if it were to be resurrected by anybody. I was curious if JHX was contacted—also separately curious about the last time JHX attended a meeting. But I was just wondering if there’s been any communication with JHX.
+
+JHD: He’s commented on GitHub on various issues, but—I believe he’s still an invited expert; I don’t know how often that’s recertified. But yeah, he hasn’t brought any of his proposals in. He’s got active proposals, and I’m not trying to withdraw anyone’s proposals just for being inactive.
This particular one hasn’t been a priority for anyone for some time, it seems. Either way, withdrawals are reversible; at the next meeting, if he wants, he can come back and re-champion it. I don’t want it to appear to the rest of the world as an active proposal.
+
+CDA: Sure.
+
+RPR: I guess we are betting that he probably doesn’t intend to bring it back, so this is the correct public signal to send. AKI has a comment on attendance.
+
+AKI: Yes. I believe JHX has been at almost every recent meeting. Not today, but the most recent meeting. It’s not like he’s sort of vanished, you know.
+
+JHD: I don’t recall him participating, but—that’s fine.
+
+AKI: I wanted to mention that. That’s all.
+
+CDA: Yeah. And that’s why—that is what gave me pause with the late addition. Has JHX seen this? It’s better for people that are active in committee to at least be in the loop when somebody is withdrawing their proposals. I agree the stakes are low and all that.
+
+MM: Yes. I would really rather that JHX be consulted before the thing gets withdrawn. I think it sets a bad precedent to do otherwise. If I had something that was inactive for a while, and I didn’t attend for a meeting or two, and it got withdrawn while I was absent, I am not sure I would notice, and it might be something I cared about. Yes, I could add it back in, but the signal that it sends might be a signal I didn’t want to send. It doesn’t cost much to contact him.
+
+JHD: Conditional withdrawal on his approval is something I am satisfied with. But, like, he became the champion, like, 7 or 8 years ago.
+
+MM: I understand.
+
+JHD: And didn’t bring it back at all. The scenario you are talking about, of an inadvertent absence, is fine. I will wait for his approval.
+
+MM: Yeah. I think we need to wait.
+
+JHD: That’s fine.
+
+MM: Okay. Thank you.
+
+JHD: Thanks.
+
+RPR: NRO?
+
+NRO: Yeah.
This proposal came up somewhat recently in one of the AsyncContext calls. When using generators, there are two possible contexts you might want: one from when the generator was first called, and the other coming from `next`. One of them will get the default—the one from when the generator was called—and for the other one the developer will need to manually pass it in, like with a snapshot. And so one of the ideas floated there was that you could call `.next`, passing an `AsyncContext.Snapshot`, and you would need `function.sent` for that, because with yield you cannot get the argument of the first next call. It was very marginal; there are ways around this. But just mentioning that it was at least talked about by some of the delegates not so long ago.
+
+NRO: I do not object to withdrawing. I think it’s a nice proposal, but I would not champion it personally. And yes, if nobody is willing to champion it, that is a signal we should send to the community.
+
+RPR: The queue is empty.
+
+JHD: I will record the summary as conditional withdrawal, pending JHX’s approval. And yeah.
+
+RPR: But yeah. I think we have heard support for conditional withdrawal. So based on—
+
+JHD: Obviously, if anyone wants to champion it and will work on it, consider my withdrawal request withdrawn. If it’s going to have no activity, then we should be accurate about that as well. I will put that in the summary, in the notes.
+
+RPR: Okay. Are there any objections to this conditional withdrawal? No objection. So that’s what we go with.
+
+Okay. Thank you.
+
+### Speaker's Summary of Key Points
+
+* No activity in over a decade
+* A few delegates like the proposal but nobody wants to champion it
+* Champion should approve before withdrawal
+
+### Conclusion
+
+* Conditionally withdrawn, pending JHX’s approval, and no other delegates stepping up to champion
+* Later update: not withdrawn yet; JHX still interested.
+
+## Withdrawing Intl.UnitFormat
+
+Presenter: Chris de Almeida (CDA)
+
+* [proposal](https://github.com/tc39/proposal-intl-unit-format)
+
+RPR: And then, I think there is one—another withdrawal from Chris. We have got 5 minutes to cover it.
+
+CDA: I think this should be a little more straightforward, just because we have a very clear signal here on the proposal repo that the proposal has been deprecated in favour of another one, and this change was made by the champion. So I am asking the committee to formally approve the withdrawal of Intl.UnitFormat.
+
+RPR: We have support from RBR and also support from DLM.
+
+CDA: Just a quick question on that: where did the support from RBR come from? Is my TCQ bugged or–
+
+RPR: That was in Google Meet.
+
+RPR: Any objections to withdrawing this proposal?
+
+RPR: There are no objections.
+
+RPR: Congratulations, CDA.
+
+RPR: You have successfully withdrawn the proposal.
+
+CDA: I should maybe just state for the record that originally EAO was going to be coming to the committee with this, but he was not able to make this meeting, and so I had volunteered to propose it on his behalf. Thank you.
+
+RPR: Okay. Then unless anyone has anything tiny, we are at the end of our agenda for the first session. We are three minutes early. It’s time for lunch, no matter where you are in the world and no matter what time zone. Obviously, we should resume at 1 p.m. Eastern time. All right. Thanks, all.
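For reference, the unit-formatting functionality Intl.UnitFormat would have provided is covered by the `style: "unit"` option of `Intl.NumberFormat`, from the unified NumberFormat proposal that deprecated it (now part of ECMA-402). A small runnable example:

```javascript
// Unit formatting via Intl.NumberFormat (ECMA-402), which superseded
// the withdrawn Intl.UnitFormat proposal.
const kmh = new Intl.NumberFormat("en-US", {
  style: "unit",
  unit: "kilometer-per-hour",
  unitDisplay: "long",
});
console.log(kmh.format(120)); // "120 kilometers per hour"

const size = new Intl.NumberFormat("en-US", {
  style: "unit",
  unit: "megabyte",
  unitDisplay: "short",
});
console.log(size.format(2.5)); // "2.5 MB"
```

Only the sanctioned simple units (and `-per-` compounds of them) listed in ECMA-402 are accepted for the `unit` option.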
+
+### Speaker's Summary of Key Points
+
+* Proposal deprecated by another active proposal, [`Intl.NumberFormat` Unified API](https://github.com/tc39/proposal-unified-intl-numberformat#i-units)
+* Proposal champion already declared it deprecated on the GH readme
+
+### Conclusion
+
+* Proposal is withdrawn
+
+## Import Sync for Stage 2
+
+Presenter: Guy Bedford (GB)
+
+* [proposal](https://tc39.es/proposal-import-sync/)
+* [slides](https://docs.google.com/presentation/d/1TGOmmWkAWx9NkmXCgh_IaoW-4aDhhRQyRSow980EOgU/edit?slide=id.g3b508911f70_0_0#slide=id.g3b508911f70_0_0)
+
+DLM: Welcome back, everyone. I’ll be facilitating this afternoon’s session. And I guess we’ll start with the ever-popular call for notetakers. It would be great if we could have two people to help out with notes. PFC, you are volunteering? Thank you very much, both of you. That was quick and easy. So up next we have import sync for Stage 2. GB, are you ready?
+
+GB: Yes, I’m just going to load up my slides quickly.
+
+GB: Okay, can everyone see that?
+
+DLM: Yeah, looks good.
+
+GB: Great. Yeah, so this is a follow-on to the import sync proposal, which was presented previously, I believe in November 2024. And just to give the background again on this one: we have, in runtimes like Bun, features like `import.meta.require`, and I believe require inside of ES modules, and Node.js has a PR to support `import.meta.require` inside of Node.js ES modules. That was rejected on the grounds that we could potentially bring this discussion to TC39 and come up with a solution that doesn’t just work for CommonJS, but can work for all module systems—because in Node.js, require can now require ES modules; require becomes a kind of generic synchronous importer that can import both CommonJS and ES modules. So this is the reality that the proposal is being created against.
+
+GB: So we proposed import sync, and we got Stage 1 back in November.
And since then, there hasn’t been any further progress. There was also some interest in another import API like `import.meta.sync`. Again, the perspective was that we should rather take this through TC39 than WinterTC, and if possible come up with a solution. So all that to say there is demand for sync import functions. It’s still not clear how big the demand is—it’s vocal demand from a few as opposed to wide demand from many—but it’s the reality that non-standard, platform-specific alternatives are proliferating, and there is potentially an ecosystem risk here: non-standard approaches like `import.meta.require` are becoming ingrained in the ecosystem through, for instance, Bun’s usage of it, in a way that is not part of the standard module system, and we now have key module system functionality being stopgapped by the ecosystem because we didn’t support it. So that’s the background of the proposal and the sort of motivation for these discussions, and the question for the committee is: do we want this? We want to see more interest from implementers and folks. It kind of sits in a little bit of a gray area today, where I think creating those bridges is tricky, and so I think it’s important to be having these discussions.
+
+GB: And, yeah, to go through the use cases: one use case is obtaining built-in modules. Your host modules are all modules you import, and if you want to get access to them synchronously, you can’t unless they’re static imports. So you could have these paths that also enable conditional loading, where you can do environment-specific built-in loading.
+
+GB: Now, Node.js has APIs to do that, of course, as well. Another use case is getting dependencies from the registry that have already been loaded. In the browser, that would be the main kind of use case: you’re able to load something if it’s already been loaded.
And so you can have checking workflows for optional dependencies: is React loaded? Okay, we can do this special behavior for React, or something like that. And there is also a new use case which comes up with some of the new module harmony proposals: module expressions and module declarations do not have an asynchrony constraint on them like all other modules have, especially now that we have carefully separated top-level await in the module system through the work done there, and we have these synchronous execution paths with import defer that kind of build on that. So the idea that these could be imported synchronously seems quite a straightforward one at this point, and this proposal would also potentially align with that. So it’s not just stopgapping an existing ecosystem concern; there are potentially also some new use cases that emerge.
+
+GB: And so in terms of the actual specification approach and the technical details: all the environments are already using sync resolution, so that’s not a problem—that would have been a problem. The import defer proposal has already specified synchronous evaluation in ECMA-262. Ten years ago it wasn’t possible; it wasn’t something that could really be a consideration. But based on the background of the static import semantics that we have, which are fully integrated into the ecosystem at this point, it’s possible as an addition. And there are a couple of outstanding technical questions, which I’ll get through, but there are no major known technical blockers. It’s all relatively straightforward. So the first thing we do is we have the not-sync error, where just the same—
+
+DLM: Sorry, GB. Just a point of order. Your Google view is hiding a bit of the slides there.
+
+GB: I didn’t realize that was coming up. Is that better?
+
+DLM: Yeah, that’s better. Thank you.
+
+GB: Thanks.
Yeah, so basically, a new host error is thrown if you try to sync import something that’s not available synchronously—if it’s using top-level await, or if it is currently still being fetched—and hosts would decide when to throw this error, and could potentially give extra context. And obviously, there is a kind of divergence here between Node.js and browsers, where Node.js can make things work because it can use sync FS, whereas browsers can’t. But with cases like module declarations and module expressions, both potentially can. And so this kind of thing gets to the risks: do we risk hitting a known divergence at this stage of the game, where there’s a proliferation of import sync in ways that are harmful to code that runs in both browsers and Node.js, or patterns that work in both browsers and Node.js? The nice thing about `import.sync` is it is ugly. Just on the sheer basis that it’s ugly and less ergonomic than a static import, we should be fine. But it’s worth thinking about. I’ve spoken a bit about browser-server divergence and module expressions and declarations. If you’ve got a module expression that returns a function, you can today—well, not today, but under the module expressions proposal—have an async function that imports that module and then uses the function. So the question is just: can we do the analogous thing in the synchronous case? It seems on the surface there shouldn’t be any reason why we should inhibit a synchronous workflow there. And this relates directly to the importNow functionality, which came from the compartments proposal, I believe, which was also looking to have this feature available.
+
+GB: And the same can apply for module declarations, and for multiple module declarations, if all the dependencies are synchronously available.
And even for dependencies that are imported, as long as they’re in the registry, it can work in browsers. In this case, you’ve got a module declaration that is importing from the network, but because we’ve already imported that—or it could have been an import defer—it’s already available in the current module execution context synchronously, so it could work there as well.
+
+GB: So there are potentially some nice interactions there. When we discussed this back in November ‘24, one framing was that import sync would only work for modules which are already linked and loaded, which is to say Node.js shouldn’t support loading new modules with import sync, and we could explicitly deny that to ensure browsers and Node.js behave exactly the same way from an execution perspective. But this is highly restrictive to hosts, and I think it should be seen as a non-starter. So I think we should just avoid this approach entirely and accept that Node.js and browsers will behave differently here.
+
+GB: Finally, the alternatives to this proposal. You could have registry getter functions. You’d need a resolved ID, and these days you’d also need the import attributes, so they get a little bit tricky. But you could have registry getters. The question then is what that gives you for things that are in progress—you’d need some kind of module progress records. The other thing, as I mentioned earlier, is built-in-specific loading APIs, like Node.js has for the built-in use case; Node.js did add support for `process.getBuiltinModule` exactly because of this problem. So every runtime ends up needing to have its own get-built-in-module function on every single host, which seems like something we should be able to have a solution for in TC39.
But, yeah, if we don’t do import sync, we can continue to develop these alternatives and defer an optional-imports functionality to solve the lazy execution case. The only gap we’re really left with—if we do the registry getter and the built-in module getter and all the lazy getters—is full lazy loading in server-side applications, and then potentially the module declaration and module expression synchronous cases. So the other approach, if we don’t want to move forward with this, is to try to whittle down the use cases: just keep tackling it from the edges and see where we end up.
+
+GB: Outstanding semantics: if you have two import syncs, you can get a deadlock. We actually do have multiple deadlocking mechanisms at the moment—import defer can also deadlock, and top-level await can also deadlock. So it’s not a problem that’s unique to import sync, but it is something we need to think about.
+
+GB: Another outstanding question is the source and defer phases. Just because you can import sync, why could you not import sync in the other phases, where supported, or for other use cases? Well, we haven’t assessed that too deeply from a use case perspective yet. I think a lot of the use cases for source and defer are based on network loading, so we need to think about that and whether it’s something we want to decide on. I’m hopeful we can treat that as a Stage 2.7 question, but if we want to decide it for Stage 2, that can also determine how we think about progression.
+
+GB: So the current status: the spec is written. There’s one major to-do, but it’s mostly on the loading side—it’s a spec refactoring and wiring task, not a technical issue. The main technical parts are written. The semantics are all defined, down to the two points I mentioned.
And as I said, from an implementer perspective, it’s clear that server runtimes want this, and it’s something we could support at Cloudflare, but it’s still unclear what the browser implementer interest is at this point. So it would be good to hear from browser implementers on this proposal as well. I guess I should have put discussion first, but, yeah—discussion, and then we can do a Stage 2 call.
+
+DLM: Okay. First up we have JHD.
+
+JHD: Does this work in scripts?
+
+GB: That is a good question.
+
+JHD: So that I’m being direct: dynamic import also works in scripts, and I think this should too—and if there’s a reason it can’t, then I’m concerned about advancing it.
+
+GB: That’s a great point, and, yeah, I think that sounds sensible.
+
+DLM: Next, LVU.
+
+LVU: I just wanted to express support. If I understand this correctly, this seems great for handling optional dependencies, which is something I personally need and I believe is a common need, and the current workaround we have is not great. Basically, if module X is loaded, do the sophisticated thing and use it—but you don’t always want to include it as a dependency—and if it’s not loaded, just do a simpler thing. So, yeah, thumbs up.
+
+GB: Yeah, on the optional case, it would be a try-catch around the import sync, which is maybe not syntactically the nicest thing if you were in the browser. So, yeah, I wonder if that would still meet your requirements for the use case.
+
+LVU: Oh, it’s still better than what we have right now. I was under the impression it would just return undefined if it happens to not be loaded.
+
+GB: Sorry, I can’t see the queue. But—
+
+DLM: I’m next. I’ll go. You mentioned implementer interest on the server side. I wonder if you’ve had any commitments from Node or Bun that they would switch to this.
+
+GB: Node would definitely adopt this. I have not reached out to Bun. That would be worthwhile.
+
+DLM: Thank you.
+
+DLM: LVU.
+
+LVU: I was wondering if there’s a better way we could handle deadlocks—like, just do some sort of cycle detection, throw, return undefined, whatever. Anything other than deadlock seems better.
+
+GB: That was actually one of the concerns that James Snell raised, and that is one of the things that we will be looking at for Stage 2.7. So to be clear about the staging: Stage 2 is not a commitment to progress the proposal to Stage 2.7, obviously, but it kind of makes it clear that we’re seriously considering this at TC39. But, yeah, deadlocks would be something we would work through, and we would potentially determine if something could be done there, or whether we want to just have stronger host implementation advice around that.
+
+DLM: Next, Kevin.
+
+KG: Yeah, it feels a lot like synchronously unwrapping the promise returned by dynamic import. The ability to synchronously unwrap promises is something people ask for, and it is indeed very useful in many scenarios. I guess I am personally more comfortable with this proposal than with a general promise-unwrapping mechanism. However, I would be more comfortable still if we could articulate what general rule allows unwrapping this particular kind of promise, or distinguishes this from—and doesn’t allow—unwrapping promises in general.
+
+GB: Yes. So the motivation is a function coloring problem, and what makes this function coloring problem distinct from all the others—that’s a good question. But I think one answer is that we already do the unwrapping, because import defer does synchronous evaluation. So that ship has already sailed. Well, it doesn’t do network unwrapping, but it does the evaluation unwrapping.
+
+KG: That’s fair.
I am a little hesitant to accept arguments of the form "we already do this one thing, if you look at it the right way in a certain obscure light, using a feature that is not going to be as widely used as the new thing we are proposing". I would feel better if there was a different answer.
+
+GB: Yeah. In terms of the semantics, synchronous module evaluation, I think, is the key. When you think about the use cases—obtaining a built-in, getting a module from the registry, and executing a module that’s already compiled and available—all of those things are naturally synchronous operations, which we only expose as asynchronous operations today.
+
+KG: Okay. That’s fair. So perhaps the answer is you shouldn’t think of this as unwrapping a promise; you should think of this as just exposing a part of the module machinery which was already synchronous?
+
+GB: Exactly. And the host loading algorithm in ECMA-262 that NRO refactored is designed to allow a synchronous operation in a callback mechanism, so that’s actually the to-do: that refactoring, to properly align with that. So this is not just taking a promise path and simplifying it—the spec is actually written in a way that it is not creating the promises in the first place.
+
+KG: Okay, so does that imply that you cannot do a synchronous import of a module which has a top-level await, even if that top-level await has already settled?
+
+GB: The—so import defer had to deal with this problem, and the way that import defer does it is it does just take the promise and unwrap it. I would like to refactor that code path in import sync as well. I have not; for now, I’m just building on that same code path.
I would like to refactor that—or NRO may want to refactor it separately—to avoid the promise creation entirely when we don’t have to. What we would probably do is keep the wrapping in the case where you have an async module and do the unwrap in that one specific case. Actually, we don’t need to do an unwrap there, because the namespace is already available, I think. It’s a short exit path on the evaluation function. And the error is also cached on the module record object, if my memory is serving me correctly, but I would need to verify. That’s a good follow-up. Would your feedback there be that you would prefer to avoid promise unwrapping in general in the spec?
+
+KG: No, this isn’t about how it is editorially specified. It’s just about what things show up for users.
+
+GB: Okay.
+
+KG: It was a genuine question. I had thought that perhaps an implication of your previous statement was that this wouldn’t work for async modules, because they genuinely require the promise machinery. But if that’s not the case, that answered my question. It was just a question.
+
+GB: Okay, yeah. So that is what’s done today for import defer. Nicolo could potentially clarify—I haven’t checked it recently, but I believe it does just check if the promise is already resolved.
+
+NRO: Yeah, I’m here on the queue for that. If the async module has already been awaited, import defer allows synchronously reading its value. It is already possible, even without import defer, to synchronously run some code right before and after getting the value of an already-evaluated async module, because an already-evaluated async module behaves 100% like a synchronous module. So if you have a kind of sandwich of three parts, where the middle one is an already-evaluated async module, the third import will run after the value of the second one is available and can reach into its value.
I can share an example on Matrix to clarify. +

DLM: Okay, we have about five minutes left. PFC is next in the queue. +

PFC: I can share another use case that I’ve run into, that people have requested from the JavaScript interpreter that’s embedded in the GNOME desktop. People really want synchronous imports at interactive evaluation prompts. I looked into whether we could implement this, and it’s been a while, so I may not be remembering this entirely correctly, but from memory, I looked at how the Node.js interactive prompt does it and I looked at how Firefox dev tools does it. I believe both of them embed a copy of Babel, parse the user input, look for import statements, and then automatically rewrite those to a dynamic import and await the promise, et cetera. And so it’s actually really quite heavyweight for something that’s just an operation that users apparently want to do pretty often at interactive prompts. I wouldn’t say this is the killer application for import sync, but it would be quite handy for interactive prompts. +

DLM: Next up is DMM. +

DMM: Yeah, I think I’ve got a similar sort of thing to a lot of people. Where we predominantly work on the server side, the import operations are inherently synchronous. We are trying to move things to modules, but existing users often have synchronous scripts and don’t want all of that machinery to change, and at the moment, we’re using a hack around things that looks like require(esm) in Node. So having something that is well specified and standardized would be extremely helpful in this. +

DLM: Okay, and I’m next on the queue. So you asked for implementer feedback from browsers. Our general sentiment is, like I said in my message, that it’s just adding complexity to an already complex system. I asked some people who are working directly on our module system, and the general feeling is, yeah, this feels like it’s adding complexity.
We’re not going to block it by any means, but I guess we’re not terribly excited about having to implement it, and I notice these concerns were brought up by SYG the first time this was brought up, and I think he’s on the queue after me as well. So he has a plus one in the message. And I will stop so Olivier can speak. +

OFR: Yeah, I think definitely plus one to that. I’m actually on the queue for a question. I’m not sure if I fully understand it. It was brought up whether it should work in scripts, and also, if I looked at the spec text, there was even a note where it would say something like, okay, you can use import sync in the onclick handler, I think, or something. And I really didn’t understand how this would work in the browser, because it’s pretty much racy when that module will be available and when that import sync in the onclick handler would then, yeah, resolve or throw an error. So that seems quite strange ergonomics to me, and I couldn’t think of a way you could synchronize this and make sure the script is executed after the module is loaded. I think it would not be possible, but maybe there is a way, so, yeah. +

GB: Yeah, so in the browser, basically what you’re referring to is the dynamic import in a script. That hint is specifically for normal dynamic imports. I will check the spec. I’m trying to see if I changed that script hint in the spec text to be an import sync hint. No, it’s the same. So the button example is an example of HostLoadImportedModule in the existing spec today for dynamic import, and that’s not changing. So that doesn’t say import sync in the spec—the HTML onclick example, that’s dynamic import. But import sync would behave the same. Basically, with defer, you have the ability to import a module that is not executed. So import sync would go into the registry and get the module record, and if there’s no module record in the registry, or if it has to go to the network, it would throw this error that it’s not available synchronously.
But if it is in the registry and it is available, not executed, in the registry, and its dependencies are as well and none of them contain top-level await, it can do a lazy execution, so it can work with import defer. And then we have this use case for module expressions and module declarations for executing module sources, which seems to lead quite nicely into your next question. But let’s first clarify if that answers your question, OFR. +

OFR: Yeah, I’m not sure. So would we then block and wait for it to be evaluated? +

\[crosstalk] +

GB: There’s no blocking semantics. +

OFR: What if the request is in flight? +

GB: It will throw and say it’s not available. And this is the same thing require does in Node.js. If you require a module and it has top-level await, this will say it has top-level await. And if it’s in flight, I think the same thing applies for Node as well. +

OFR: There would not be a way of, like, expressing I want this piece of code executed once—well, I guess that would be normal require—I want this piece of code executed once the module is available? +

GB: Right. That—so the way to queue on the module being available for synchronous execution is import defer. You would do a dynamic import defer, and once it’s available, then you can do an import sync. In a sense, import sync is just a way to execute a defer module that has no exports, or to just touch that defer module, basically. And then there are these other use cases of module expressions and module declarations, and if we ever had built-in modules in browsers, it would support that as well, obviously. +

DLM: Okay. NRO is on the queue. First I’m just going to say that we’re actually over the time box, and since there’s an underflow this afternoon, I think it’s worth continuing this discussion, so you can ask for advancement. +

GB: That would be great. I think we could maybe fit this in an extra ten minutes, if that’s possible. +

DLM: That sounds good to me. +

GB: Thank you. +

DLM: NRO.
+

NRO: Just one question: in browsers, there is `<link rel="modulepreload">`, a marker to, like, start preloading something. I’m not totally sure how this works at the browser level, but I would expect that, rather than kind of abusing defer for this, in browsers you would preload a module with modulepreload, and then it’s in the cache and you can run import sync on it. +

GB: I would need to check if there’s a load event—I don’t think there’s a preload load event that you could do that on. So it wouldn’t be something reliable without some kind of notification, I think. +

DLM: KM. +

KM: I guess maybe I missed this, but I think it wasn’t covered, and it just feels, in some ways, like a simpler-to-understand design to me, anyway, that this API would take a source module—a source-phase module you get from whatever mechanism—and import it synchronously, since that sort of sidesteps the issue of “is this thing loaded”. Obviously that pushes the problem downstream, and in the browser you still need to know your thing is loaded, but at least you have a direct dependency rather than this implicit dependency on some external resolution system having finished something opaquely to you. It just feels to me like the Legos would fit better in that sense. And then it would seem easier to me to have Node-specific or other APIs that, like, fetch this source-phase module synchronously from disk, in a way that is not part of the web browser specification but is part of Node, so Node has a way to do this synchronously and everyone is happy; and on the web, you have to figure out how to await your promise to get the source module. Obviously that creates a dependency on source modules, which you may not want, and that’s fair. I’m just curious if it’s been thought about. Maybe I missed this.
+

GB: Unfortunately, source modules don’t solve this, because when you obtain a source module for JavaScript, as in the source phase imports proposal, it does not also obtain its dependencies from the network. That’s one of the key features of the source phase: it does not fetch dependencies. And so if you have a source phase representation of a module, you cannot synchronously import it, because there is still network work to do—its dependency resolutions are not yet determined. What you’re saying about a handle—reifying this—is exactly what we did with import defer, right? So import defer is that handle for a synchronous evaluation of an instance that will do all the work to make sure it’s synchronously available, and that’s that feature today. I think the thing is, even with that, we haven’t solved these outstanding use cases that the ecosystem needs to solve. And if we don’t solve those use cases, they will be solved, but just not by us, right? +

DLM: Yeah, CZW is in the queue. +

CZW: Another point about synchronously importing a module source object is that there’s no way in the current spec to evaluate an ESM module source object. So importing a source phase object does not enable the use cases that import sync enables, with the current spec features. +

GB: So import sync would also work for the source phase, as a synchronous evaluator, in the same way that import can work for the source phase. I guess the point is that there isn’t currently a synchronous evaluation of the source phase? Sorry, I’m just trying to summarize. +

CZW: My point is that we don’t have a way to synchronously evaluate an ESM module source object in the current language. +

GB: Yeah.
So the—that framing would also apply to the module expressions and module declarations example, where we’re currently proposing that the representation of module declarations and module expressions is the source phase, and under that representation, import sync would be able to synchronously execute the source phase, yeah. +

DLM: Next we have NRO. +

NRO: Yes. It was mentioned multiple times what the behavior would be for Node and what it would be for browsers. In browsers, in web workers, we already have importScripts, which is a synchronous function that loads and executes a script, because workers can afford to block, being off the main thread. So maybe the distinction that the web integration of this makes should not be browser versus Node, but main thread versus everything else, where everything else can probably afford to block. +

DLM: Okay. And that’s the queue. We’re almost at the extended time box. Guy, would you like to ask for consensus for Stage 2? +

GB: Yeah, sure, if I can just briefly caveat the request for Stage 2 and say that there are no guarantees that this proposal moves forward into ECMA-262, and we are not looking to come back for Stage 2.7 in a hurry here. We’re looking to gather feedback from the ecosystem and demonstrate to the ecosystem that there’s a commitment from TC39 to continue to see progression of this discussion. So any Stage 2.7 follow-up would be based on strong implementer feedback, it would be based on working through the semantic concerns raised, and with clear implementation intent, so that approval for Stage 2 is by no means getting us on the final straight for 2.7, but merely demonstrating a commitment to the proposal discussion at TC39. Okay, so with that, I would like to ask for Stage 2. +

DLM: I believe we heard some support earlier, but if anyone would like to reiterate that support now, that would be helpful for the notes. LVU is a plus one. And DMM as well supports Stage 2.
Does anyone have any concerns about Stage 2? CZW is also a plus one. And I guess we should ask for reviewers. I don’t know if we do that now or do it offline? +

GB: Yes, please. Reviewers would be great, if you could let me know now or separately. +

DLM: There’s also a plus one from ZB, who doesn’t have access to TCQ at the moment. +

GB: Otherwise I’ll nominate a reviewer. +

DLM: Any volunteers? NRO is volunteering. Do we need more than one? I can’t remember. +

GB: It would be nice to have more than one, but we can follow up on it as well. +

DLM: Yeah, I guess, yeah, we probably shouldn’t take more time with this. Congratulations, thank you. +

### Speaker's Summary of Key Points +

* `import sync` is feasible and solves an ecosystem gap.
* Any Stage 2.7 advancement will not be rushed, but will be based on clear implementer support, resolving semantic questions around deadlocks and other phases, and strong motivating use cases.
* While other solutions may yet be found, continuing to develop this proposal demonstrates a commitment to ensuring standard solutions to the problem of synchronous module evaluation and registry lookups. +

### Conclusion +

* Stage 2 obtained, with Stage 2.7 criteria as outlined +

## Error option `limit` for Stage 1 +

Presenter: Ruben Bridgewater (RBR) +

* [proposal](https://github.com/BridgeAR/error-limit-option)
* [slides]() +

RBR: All right.
So, like, since we recently had the stack proposal, I sought to also look more into improving errors, and there is also the ongoing effort, for example, around `Error.captureStackTrace`, about whether that should be an API across all the platforms or not. And since I don’t believe so, I looked at which APIs exist in V8 that are specific to V8 at the moment, and where we could instead standardize on something that provides more of how users are effectively using these APIs, to have a better experience overall, because, like, otherwise removal is also not really an option. +

RBR: And one of these is an alternative for `Error.stackTraceLimit`, which limits stack frames to an upper bound. Now, I learned in Japan that we actually don’t always have a concrete number in all of the engines, so what a stack frame effectively stands for is something that we would still have to define more precisely, but I hope that’s not for Stage 1, but more for a later stage where we can explore more of the problem space in detail. +

RBR: So my proposal is effectively about adding, to the options bag which we have anyway with the error cause property, a limit for an individual error. Because most of the time when stackTraceLimit is used—which would look like, for example, here—it’s at a global level, where any following error would now have that upper bound. And, like, this can be error prone: in case there’s an error thrown in between, for example, and you just wanted to change the limit for an individual error, it’s now just globally set forever, because it’s not reset anymore. So, like, with this, the reset problem is gone. And in most cases where I have seen this API being used, it is always about a specific error. Sometimes it’s used for removing all stack frames, which is a relatively common use case, so that the performance cost of calculating all the stack frames is never incurred.
And also, like, sometimes only the first 1, 2, 3 frames, because you need, like—in what file was the error originally created, but nothing else. So these are common cases, and that’s, like, for limiting it. But sometimes you also want to have a long stack trace, where it’s better for debuggability, and you can also increase the limit. And from an implementer’s perspective, it’s often much better having it on a specific error where you really care about “I need to have all the information when this particular one happens”, versus having it on a global scale. Does that replace `Error.stackTraceLimit` completely? No, because of that partial use case of sometimes still wishing to change the global limit, especially for longer stack traces. But for the performance case, it’s actually pretty much solved completely, and that is, as far as I can tell from code that I’ve seen, more than 90% of all use cases of this API. +

RBR: So, like, I believe this very simple and very narrow suggestion solves most of what users actually care about when using it, and it has a nice usability benefit by just adding that simple option, because that is something everyone would understand. Not everyone knows `Error.stackTraceLimit` when they are working in a browser or Node or a different environment where V8 is being used. And that’s pretty much it. I mean, in this case, I believe the proposal is so simple that I don’t know how many questions there are or if there’s anything else for me to go into in more detail. Please feel free to ask. +

DLM: Okay, first up in the queue we have NRO. +

NRO: Yeah. So I support this proposal. I think it is useful. Especially when working in Node, I guess, because it’s the only place where I can do it, it’s very common for me to set a temporary limit for some error and then reset the limit. There are two main ways in which I use this V8 API.
And one is just setting the global limit to Infinity when I’m trying to debug something, and it’s great that I can globally override that as I’m debugging, too. Libraries that make the trace too short are annoying, but they don’t need this proposal to make it short. They already do it today: they just do string manipulation, or get the error message and log it to stderr without logging the trace. So this proposal is probably not going to make things worse; I just want to point out that we should not encourage these patterns. And like I said, this proposal is useful on its own. +

RBR: One thing I also didn’t mention is, like, the precedence order, which I also discussed with OFR in Japan, and in this case, the local limit would always win over the global limit. +

NRO: \[inaudible] would be like the higher one. I don’t know yet, but this is not something to decide for Stage 1. +

RBR: Yes, correct. +

NRO: And I also have another topic here: the scope of this proposal. You mentioned, like, defining exactly what it means to have N stack frames. I would actually recommend against doing that, unfortunately. Defining what is actually there has been historically incredibly difficult for us. There is a proposal that tries to focus on that. And as much as I dislike this, what I would recommend is: we define that there is this property in this options object, and the constructors read this property, and maybe cast it to a number, and then do nothing with it. Whether anything further happens with it is implementation-defined, because the contents of the stack traces are implementation-defined. We already do something similar for the microwait proposal, where there’s some semantics that takes a number and does nothing with that number, because it’s meant to be a hint or something. I would recommend doing the same thing here. +

RBR: Yeah, in this case, I already put in here that we define the stack frame—I didn’t phrase it right, I see that now. A stack frame is implementation-defined.
That’s how it should read. Because it is so difficult, for me it’s totally fine that it’s up to the implementers to say what a frame is or is not. I think that’s okay. Of course, it would be good to get some feedback from implementers about it in this case. Maybe DLM, OFR, I don’t know. +

DLM: Next is SHS. +

SHS: Yeah, I just wanted to ask: the kind of old-style version, where you set the error stack trace limit temporarily, do some things, and then set it back, would cover when the VM is actually constructing the errors and throwing them, whereas if you’re doing this, you only get user-constructed errors covered? +

RBR: Yes. +

SHS: So I just—you know, is there discussion about that, or is that something that would be a loss if we are to switch to this kind of pattern? +

RBR: In this case, it’s just a different thing, right? Because, like, when you implement something, you sometimes know you definitely only care about a specific frame, and that is maybe, like, one or two methods away. Or maybe you don’t care at all about some, or, you know, users should always see the whole stack. And that’s why this is used for programmatic errors in a library, instead of for programmer errors that one could run into, which would fall back to the global limit. +

SHS: All right. +

DLM: I’m next on the queue, so my question is about the global error stack trace limit that’s currently in V8. I believe it’s also in JavaScriptCore, and I guess, short question: is it realistic to believe that V8 could ever stop shipping this? And then a related concern is, if it’s already in V8 and JSC, it might be a matter of time before we have to add it because of some web compatibility reasons. Yeah, I guess I’d like to hear: is it possible that this could be a replacement for that, and if not, should we maybe standardize the global version and say “please don’t use this”, so at least it’s part of the language?
+

RBR: I’m not V8, but to answer your question from my perspective, I don’t believe we can remove it. I do believe this will replace roughly 90% of all the usage of this API at the moment. So it will significantly drop, and the people who do use it will have a way better user experience. Do I believe we need to standardize the global one as well? Maybe. But on the other hand, what I thought about is: let’s start with this one, addressing the big bunch of the actual use cases of how this API is currently used in the wild, and then sit down—and maybe the solution is, yes, we do add that global API as well in addition, but maybe we find another one for the global case as well. +

DLM: Sure. Okay, thank you. Next up is MM. +

MM: Hi. I realize that everything I’m about to say about this proposal actually applies to both this proposal and your next one, and I just wanted to mention that; I’ll postpone all of my questions to the Q&A on the next one, but they’ll apply to both proposals. That’s it. +

DLM: Okay. KM’s on the queue. +

KM: Yeah, I guess this was actually to your comment, DLM. Maybe there is something you were thinking of, but it doesn’t seem obvious to me how this proposal would change compatibility for `Error.stackTraceLimit`. Because it doesn’t seem like it would increase the usage of that API: if people wanted to use that API, they’re probably already using that API. And, like, adopting this would either be in addition to that or, like, replacing that. And in either case, it doesn’t seem obvious that it would change anything. Maybe there’s some other case you’re thinking of that I’m not thinking of. +

DLM: No, it just occurred to me that we’re in a situation with `Error.captureStackTrace` where we prefer people not to use it, and this one could end up being in a similar situation where we prefer people not to use it and we have compatibility problems.
That being said, I’m happy to wait and hope this won’t happen and we don’t have to, like, implement both. And I guess, you know, if we implement one, the implementation of the other one is trivial anyway, so it’s not that big of a deal. I just wanted to raise it. +

KM: I just wanted to make sure I wasn’t missing something. +

DLM: No, thanks. Yeah, OFR. +

OFR: Yeah, just relaying something I picked up. One reaction to this proposal was: shouldn’t the global limit take precedence? I don’t want my libraries to hide errors from me. So, yeah, just something to think about. Maybe it’s not as clear cut which way around is preferable. +

RBR: How would we handle that? Because, like, at the moment, there is an implicit default, which is 10 in V8. And, like, it couldn’t be that only once the global limit is changed for the first time, it would overrule the local one. +

OFR: I have no clue. I’m just relaying, like, a reaction to this proposal, that some people would probably prefer libraries not being able to hide error stack frames from them by default. That’s all. +

RBR: And for what errors, you know? Because I believe when this is used, it’s a very particular use case where an implementer would make a conscious decision, and they are doing that effectively today already, just with an API that is not very well suited for the job. +

OFR: Yeah, I mean, it can obviously encourage libraries to set that limit on their errors, and then you would not see the rest of the stack traces, and that might be something that some users of these libraries would find undesirable. That’s all of my comments. +

DLM: Next is KM.
+

KM: Yeah, I guess, like, to second that point: you could imagine some error that a library author never intended to escape their code, but then it accidentally does escape, and they set the limit to zero, and now you have some debug infrastructure—I don’t know, pick your favorite error reporting tool for your website—and it just says, oh, you got some error, but it has no stack trace, and you’re trying to figure out what error this is. And so I could imagine that being inconvenient for some people. But that doesn’t necessarily mean we shouldn’t do this API. +

RBR: And, like, again, in this case, the question for me is: what does it change compared to today, when users use the global version? The outcome is identical. It’s more, like, they cannot make the mistake anymore of not resetting the limit, which would be a problem, because as soon as it is zero and it isn’t reset to the former state, then all errors afterwards would have zero frames. That’s a big problem. And, like, I believe that’s one of the bigger problems there. I would mostly reflect how this API is currently used, and that means we let the local limit win over the global limit no matter what. +

DLM: Okay, next on the queue. I’m just curious, since you mentioned several times—and it also came up in our internal review—that it’s really annoying when people hide parts of the stack trace: what are the legitimate uses for having this, and why are people using this, if you have any insight? +

RBR: Definitely for performance reasons; that’s a very common reason in Node. And also, like, in my company, we use it, for example, for debugger aspects where we don’t need full stack traces. And calculating the stack frames is expensive—it always is. I don’t know about all the different error implementations in different engines. But I know in V8 it’s expensive.
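For reference, the global save/set/restore dance being discussed looks roughly like this in V8-based environments today (`makeShallowError` is a hypothetical helper name; the per-error `limit` option in the trailing comment is the proposed API, not something any engine implements yet):

```javascript
// Today's V8-specific pattern: mutate the global limit, create the error,
// then restore the old value. Forgetting the restore affects every later error.
function makeShallowError(message) {
  const saved = Error.stackTraceLimit; // V8 (and JSC) global knob
  Error.stackTraceLimit = 0;           // 0 frames: skips the stack-capture cost
  try {
    return new Error(message);         // stack is just the "Error: ..." header
  } finally {
    Error.stackTraceLimit = saved;     // the reset that is easy to forget
  }
}

// Proposed per-error alternative (hypothetical, not implemented anywhere):
//   new Error('boom', { limit: 0 });
```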
By setting, for example, the stack frame limit to 0, you roughly halve the overall CPU cost of generating the error, or lower it even further. It’s definitely a cost perspective. Sometimes, you also just know for certain that the frames on top are not relevant for the user, because only a limited number of frames would be of interest to the user in that case. Because errors are not only used for actual error cases, but sometimes for transporting other information. And so we really have multiple cases where they are currently limited as such. +

DLM: Yeah. Okay. Next in the queue is NRO. +

NRO: My use case here is actually the opposite: to make the stack traces longer. Sometimes I know I am deep in a library, and the default just shows the library-internal frames and not the user frames. So if we need to throw an error that the user needs to, like, act on—not just an internal bug I want to look at—I want to show where you first call into my library. +

RBR: Yes. And in this case, why the longer one? Because doing that for all errors is actually very, very expensive. So the user might complain if their application becomes slower—especially when, from a library perspective, you change something that affects the user’s code, that’s a no-no—and therefore you are able to do it for your own library errors at least. +

DLM: Next on the queue is LVU with “The performance use case seems to argue more for a global knob than a per-error setting? EOM” +

RBR: I don’t see it that way, because changing the limit to a lower value is something that I will only do very cautiously, for individual cases. And that’s also how it’s used: definitely, when I look it up, 90% of the time it is used to set the limit, create the error, and reset the limit. And usually, you save the limit in a variable. So it’s about limiting or increasing the limit for a specific error and not globally, because it would change the application’s errors too strongly otherwise. +

DLM: Okay.
Next up is MM. +

MM: I see that this and the next one are both going for Stage 1, which I don’t see any problem with. But I don’t want to separately discuss advancement. I would like to have a discussion after the next proposal that covers advancement of this one as well as advancement of the other, and I still—I appreciate the fact that they are two separate proposals, and one might advance and the other one might not. But I think we should postpone the advancement discussion until then. +

RBR: That does make it slightly more complicated. Does anyone else feel the same way? I don’t see why they necessarily need to be considered together. +

MM: Because a lot of the issues that will come up in the discussion apply to both proposals, and I think that we can’t really have an informed discussion otherwise. Whether this one advances is not orthogonal to whether the other one advances. One could advance and the other not. The question about whether each should advance should take into account whether the other one advances, and also what the content of the other one is. +

DLM: Yeah. That’s fine by me. I will just remember to call for consensus separately at that point. I mean, my only other question is, if there are no blocking concerns for this one, then perhaps we could—but yeah. That’s fine. We will do them both at the end. +

MM: I have concerns, but still prefer postponing. +

DLM: Okay. DMM? +

DMM: So I just wanted to say that we have optimized our engine to capture stack traces quickly because they are used in so many tools, in terms of profiling and things like that, and we are lazy about formatting. We are effectively capturing up to the global stack limit in every case because we need to use that later. Having an explicit way for the user to provide it would be great. So I support this proposal in general. +

RBR: I mean, coming back to the former question, we can do them together. I don’t really see the connection between the two proposals yet.
Because yes, one thing potentially has an impact on the other, a little bit. But the APIs can exist, and they provide benefits completely distinctly. And can also—I don’t know. I don’t see the real connection. But that’s okay for me. +

MM: The connection is mostly that the discussion—most of the issues to be discussed in examining advancement will probably apply to both, and in any case, the advancement of one should be informed by the discussion of both. +

RBR: Okay. +

RBR: Depending on what others say, I would be fine with it. +

DLM: I’m sorry. We have KM on the queue. +

KM: I think most web engines also don’t eagerly generate the formatted string. They will record the bare minimum needed to create the string later. And then, yeah, various optimizations on top of that, but they try to be efficient, because lots and lots of exceptions get thrown. It’s a reason to make sure we don’t lose the optimization—it’s the best kind of optimization. +

DLM: Okay. So yeah. I guess with MM’s request, if it’s okay with you, RBR, what we will do is move on to your next presentation, then we will open the queue again for that, and at the end, I will call for consensus for each proposal. +

RBR: Okay. So about this proposal, first of all, the name, it’s— +

DLM: Sorry, RBR. There’s a point of order: Philip needs a break from the note-taking. Can we have another note-taker? +

CLA: I can take it. +

DLM: Thank you. +

DLM: Sorry about that, RBR. Go ahead. +

### Speaker's Summary of Key Points +

* A per-error `limit` option in the error options bag would cover the large majority of current `Error.stackTraceLimit` usage—skipping stack capture for performance, keeping only the first few frames, or lengthening a specific error’s trace—without the footgun of forgetting to reset the global value.
* Open questions remain about precedence between a local and the global limit, and about whether the global API would also need standardizing; what counts as a stack frame would remain implementation-defined. +

### Conclusion +

* At MM’s request, the call for Stage 1 consensus was deferred until after the presentation of the related `framesAbove` proposal. +

## Error option `framesAbove` for Stage 1 +

Presenter: Ruben Bridgewater (RBR) +

* [proposal](https://github.com/BridgeAR/error-frames-above)
* [slides]() +

RBR: Okay. So, like, with this proposal, first of all, the name is not set in stone.
It’s completely open for debate from our perspective, because the right name for this one is, from my perspective, a little bit more challenging than for `limit`, which is quite intuitive for me personally. What I want to tackle here is this: I would like to see this as a complete alternative to standardizing `Error.captureStackTrace`, V8’s API, which we have a proposal for. The question is, what is this API actually used for? For what use cases do people use `Error.captureStackTrace` at the moment? That’s what I tried to think about, so I could provide an API that is, first, more intuitive than the API people have right now, and second, more powerful—reflecting more of the user’s intent of what they are trying to achieve. Because what does `Error.captureStackTrace` do? You pass it an object; it doesn’t have to be an error. For that object, we recalculate the stack trace. And I say recalculate: if it is an error, we effectively calculate the frames twice. If it is a plain object, then it’s only calculated once. And why do people use it with an object? Because they only care about the stack frames, but they want to change them. They don’t want all the stack frames—for example, they have a helper function which is just a validation, and they don’t want to show that, because it makes the stack frames a little bit more verbose and less helpful for the user. So instead, they remove the upper stack frames and keep all the lower ones.

RBR: And that’s pretty much 90% of the usage of this API; in the other cases, people sometimes use this API for no good reason, effectively, because they don’t gain anything from using it. But what does this proposal do as an alternative? First of all, we want to express it on the error, similar to the options bag, to say, okay:
We don’t want any of the stack frames that come from either this specific referenced method, or anything above it. And so they are hidden—which is what this API is normally used for. It does that in an efficient way, because we never calculate the stack frames twice; it’s always just once. And in this case, we wouldn’t have to calculate the upper frames at all—I don’t know if that’s possible from an engine perspective or not; maybe there’s an optimization possible. In addition, the current API is just cutting them off.

RBR: So let’s imagine we have a stack trace limit of ten. Now we come to the overlap with the other API—I have an example for this, I am not sure. Say we set the limit to a low number, let’s say 2, and now we use that API and the top two frames are cut away. Now the stack frames will be gone—zero stack frames—because the hidden ones are effectively counted toward the limit. From the user’s perspective, that’s not great, because now the frames that they really care about are gone. So in this proposal, I am actually suggesting starting the count from the method, so that the hidden frames are not counted. If the method does not match, nothing is hidden, which is the same as `Error.captureStackTrace` would handle it. This addresses all the effective use cases for `Error.captureStackTrace`; I do not know of any proper use case that this would not cover. As such, I am welcoming questions.

DLM: Okay. The queue is currently empty. Here is MM.

MM: Hi. So first of all, having constrained the discussion where I did, I just wanted to say that at the moment I have no objection to either proposal going to Stage 1, and no objection to both proposals going to Stage 1. Having said that, I have some questions and some points.
The most important one is that errors and error stacks are very, very tricky from a security point of view, and we have spent countless TG3 sessions where discussion of error stacks and error stack visibility has dominated the session. So—bringing this to TG3 does not need to happen before Stage 1, but I would encourage you, whether we go to Stage 1 today or not, to bring this to TG3 very soon, and I expect that it will prompt extensive discussions about how it interacts with other stack security issues.

MM: The particular one that I do want to ask about, which I think has straightforward answers for both, is the interaction with the error-stacks proposal—specifically, the proposal that the `stack` property be an accessor inherited from `Error.prototype`, which some engines already implement and others don’t. Assuming that proposal happens, I would expect the semantics of this to be that the two accessors can be independently applied to an error object, and that the stack trace they report will be according to these proposals. So, the accessors themselves are the embodiment of the behavior of producing rendered stacks, and that behavior would obey this proposal, whether invoked through just saying `.stack` or not. And furthermore, if early code replaces the accessors with something else, then again the behavior is only manifested by the original accessors, and the replacement would be the new visible behavior of `error.stack`. That all seems natural and compatible with your proposals. I want to confirm that.

RBR: Sorry, the last part—can you repeat that?

MM: Yeah. This all seems compatible with your proposals, and in fact consistent with the spirit as well as the letter of the proposals. So I just wanted to confirm with you that this would be the natural way for these proposals to coexist.

RBR: Yes.

MM: Okay. Great.
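The engine divergence MM alludes to (whether `stack` is an own property of the instance or an accessor inherited from `Error.prototype`) can be probed directly. A minimal sketch; the `stack` property is still unstandardized, so results vary by engine:

```javascript
// Where does `stack` live? V8 installs it as an own property on the
// instance, while the error-stacks proposal would standardize an
// accessor pair on Error.prototype. Probe both locations:
const err = new Error("probe");
const ownDesc = Object.getOwnPropertyDescriptor(err, "stack");
const protoDesc = Object.getOwnPropertyDescriptor(Error.prototype, "stack");

// Every major engine exposes a string one way or the other.
console.log(typeof err.stack, Boolean(ownDesc), Boolean(protoDesc));
```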
MM: So, you mentioned that framesAbove covers the use case for `Error.captureStackTrace`. I agree. And what stage is `Error.captureStackTrace` in?

RBR: I believe Stage 2. DLM?

DLM: Yes. That’s Stage 2.

MM: Okay. I would like to bring up separately the question of whether we should kill the `Error.captureStackTrace` proposal in favour of this. I understand the web compatibility issues for `Error.captureStackTrace`, but I want to have the discussion—again, the discussion does not bear on whether these two proposals should go to Stage 1.

RBR: And that is my point: I would like to see this as an alternative to `Error.captureStackTrace`. I did speak with DLM briefly on it. What do you say about it? Maybe you can say it.

DLM: I am on the queue later to discuss that. We might as well keep going.

MM: One thing about `Error.captureStackTrace`—I am not sure if it’s part of the proposal or not. The V8 implementation of `Error.captureStackTrace` allows it to be used on non-error objects. What does the proposal say?

RBR: This one explicitly is not for non-errors. Sorry?

MM: I got that. This proposal only applies to errors, because it applies on construction.

RBR: Yes.

MM: You also mentioned error constructors that take an options bag. Do we have an error constructor that does not take an options bag?

RBR: No.

MM: Okay. Good.

RBR: Okay. So in any case—

KG: We don’t. Engines do.

MM: Oh. Please expand.

KG: I believe at least Firefox lets you put a line number there, and a column number. That’s the reason we made `cause` an options bag—or one of the reasons—so it’s distinguishable from passing numerics, which engines do use. It’s not specified. But…

MM: The line and column that Firefox implements are passed as positional arguments?

KG: I am not sure whether it’s Firefox, or whether it’s line and column. I am 99% sure they are positional arguments, and I think it’s line and column, yes.
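The distinction KG and MM are circling can be shown concretely. A sketch; the positional `fileName`/`lineNumber` form is a non-standard, historical SpiderMonkey extension and appears only in a comment:

```javascript
// Standard (ES2022): the second argument is an options bag, so `cause`
// is distinguishable from engines' positional numeric extras.
const wrapped = new Error("request failed", { cause: new TypeError("bad input") });
console.log(wrapped.cause instanceof TypeError);

// Non-standard SpiderMonkey extension (historical, not in the spec):
//   new Error(message, fileName, lineNumber)
// Per the spec, such extra positional arguments are simply ignored.
```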
MM: It’s tricky to specify the options bag in a way that allows, but does not insist on, prior numeric arguments.

KG: We already did—the options bag already existed.

MM: Okay.

MM: And it’s specified in such a way as to be compatible with Firefox implementing it in the future as something that goes after those numeric arguments?

KG: No. You just don’t get to use those arguments.

MM: Okay. In any case, I don’t think this affects this proposal going forward.

RBR: I think that’s everything I need to cover here. Everything else I can cover in the TG3 meeting.

MM: So I approve. I support both of these going to Stage 1.

DLM: Okay. Reply from CZW.

CZW: Just a quick reply on the error constructor options bag: I think Firefox was the only one that takes positional arguments—file name and line number—per the compatibility table. So yeah. That’s it.

RBR: Thank you for checking.

DLM: KM?

KM: Yeah. Something else to say: does DOMException need to be covered here? Does DOMException include the stack? I don’t recall. It’s not a Stage 1 blocker.

JHD: I believe DOMException inherits from Error, has a stack, supports a cause, and thus would support this stuff as well. The integration PR would probably be a Stage 2.7 or Stage 3 requirement.

KM: Okay. Worth noting. My question, or topic—this came up in the previous proposal—is: do we know when and where engines differ in their traces? And there’s a reply on the previous part; we can maybe cover that first. I don’t know if we want to do that first.

CZW: Yeah. DOMException right now does not take a `cause` in an options bag. There’s an open pull request for that, but it was never merged into the web spec. So, no, DOMException does not take it and does not take an options bag.

KM: Sounds like we were able to ship `cause` without that, so maybe that’s fine for this one, for now at least.

KM: Okay.
Back to the first point: do we have a list of where engines differ? I know, for example, that JSC changed its behavior around async functions, where async functions used to appear twice in your stack trace and now appear only once—a bugfix we had. It might be worth noting those, because those are places where there might be compat risks in people changing the stack traces. In this particular case, I don’t know of any compat issues, and I haven’t heard too many complaints. It has the same theoretical problem, but it might be worth our time to figure out where engines differ in those traces, just for posterity if nothing else.

RBR: I don't believe it’s directly related to the proposal, because I still think the engines should decide for themselves what a frame is, for now. However, I absolutely support putting a list together—it’s very, very valuable. Maybe we can just open an issue for figuring that out collaboratively. Is that okay? Does that answer the question? Is that sufficient?

KM: Yeah. That’s totally fine. I was bringing up something to consider. Certainly not a blocker in any way. I would be happy to collate the ones for JSC.

RBR: Yeah. Thank you for that point about DOMException; I was not aware of that. I am not sure if it should be covered by the proposals, because it only applies where the options bag is already present—and that’s something I could make explicit. If wished for, I believe DOMException should be adjusted accordingly, but that’s outside of the scope of the proposal.

DLM: Yeah.

PFC: I can foresee some well-meaning developer publishing an article on Medium saying, 'if you publish a library on NPM, it’s good practice to hide your internal stack frames' and that getting accepted as a best practice.
But if I compare that with my own debugging experience—when I use a library and get an exception from it and I don’t understand why, I use the stack trace to go into the source code of that library in my node_modules directory and figure out where in the library the exception is thrown and what caused it. I think that if this facility exists, people are going to use it even when it’s not necessary—use it because it’s there. That will have a negative impact on people’s debugging experience. And even in an open source library, if an exception occurs, maybe it’s a bug in the library, and people can go in and fix that and contribute back. But if we get this general view that it’s a good practice to hide your internal library frames, I might even say this could be harmful to the open source ecosystem. I don’t think this is a blocker for Stage 1, because a lot of things still have to be worked out. But I would really ask you to consider, in the following stages, how we avoid this problem of people using this feature just because it’s there, and disrupting these debugging and open source use cases where it’s actually better to have a full stack trace.

RBR: Actually, these proposals don’t add new functionality. All of this is a fact of the reality in which we are already operating. What they do is provide a less error-prone API that is aligned with the intent of the users who use these facilities. For example, with framesAbove, the reduced debuggability that you are speaking about is hopefully actually improved with this API compared to the existing `Error.captureStackTrace`, because that one will throw frames away—it removes frames—while in this case, frames are not simply removed. It shifts which frames you get, hopefully toward ones that you care more about. And so that’s the difference.
And `Error.captureStackTrace` is error prone in the sense that people are not aware of the overhead it comes with if you use it on errors, because of the duplicate stack trace computation. That’s why they use it on objects—there they don’t have the default one, and you create one which looks a bit odd. This all exists today. I don’t really see any problem in that regard. Of course, we can say you should be aware of how to use these APIs, but that is probably true of most APIs: people should know how and when to use them, and why.

DLM: KM is on the queue.

KM: I guess—yeah, this is the functionality part. For actual debugging sessions—I am not 100% sure on this—but in WebKit’s Web Inspector, when you debug a thrown exception, or any thrown value at all, I think the engine will capture a stack trace of where it was thrown from, independently of the one in the error object itself. So you can still see where it was actually thrown, independent of the `.stack` property, I think. I am not 100% sure though.

JHD: Runtime or in the devtools?

KM: In the devtools.

JHD: What you can see in the devtools is irrelevant. It’s what other JavaScript can see that we are talking about, right?

KM: I think this was more focused on the idea that in your debugging session the frames would not actually be gone, which mitigates it. Yes, in your reporting tools you would not have that information, which you may want.

DLM: Okay. PFC?

PFC: Even in the absence of `Error.captureStackTrace`, the possibility to do this exists. You can take the stack property, delete whatever you want from that string, and throw a new error with it. But I think you know and I know that that is very different, right? A convenient feature that allows you to do this, versus a big unwieldy piece of code with lots of string manipulation that is the only way you can do it.
I think if this convenient feature exists, people are going to claim it’s a best practice to use it. And that’s what I am concerned about.

DLM: Okay. I am next on the queue. I would like to point out the use of `Error.captureStackTrace` on non-errors: there are cases where the thing whose frames we are censoring is not an error—we don’t want it to show up in the stack trace because it’s not relevant. I don’t think that use is directly covered by attaching an option to the Error constructor itself.

RBR: As I tried to outline, the reason why it’s currently mostly used on non-errors is the cost involved with errors. Yes, I agree the API is esoteric—not a lot of people know about it—but when it is used, people mostly play around with it and figure out that the overhead of using it on errors is actually quite significant. And that’s why it is used on non-errors, even though they care about getting the stack trace as a string: they want to do something with it afterwards, or store it plainly. So while `Error.captureStackTrace` is currently used on non-errors, that usage would be replaced by this API, by working on the Error constructor.

DLM: Next in the queue?

MM: Yeah. I am going to offer a tentative historical note on `Error.captureStackTrace`—it originated in V8. If someone has better information, please correct me. But I believe the original motivation for `Error.captureStackTrace` was that before classes, there was no way to faithfully create the equivalent of a new error class that inherited the stack behavior. So `Error.captureStackTrace` was added so that in the pre-class JavaScript world, in ES5, you could more faithfully create a new constructor that acted like a subclass of Error and had a stack. I don’t think it has any implications for what we are doing here, but it may bear on whether and how the `Error.captureStackTrace` proposal proceeds. In any case, it sounds like nobody has any corrections, so that was probably the original motivation.
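The ES5-era pattern MM describes looked roughly like this. A sketch; `Error.captureStackTrace` is V8's API and not yet in the standard, so the call is guarded:

```javascript
// Pre-`class` "subclass" of Error: the prototype chain is wired up by
// hand, and Error.captureStackTrace gives the instance a `stack` while
// hiding MyError itself (and frames above it) from the trace.
function MyError(message) {
  this.message = message;
  if (typeof Error.captureStackTrace === "function") {
    Error.captureStackTrace(this, MyError);
  }
}
MyError.prototype = Object.create(Error.prototype);
MyError.prototype.name = "MyError";
MyError.prototype.constructor = MyError;

const e = new MyError("boom");
console.log(e instanceof MyError, e instanceof Error, e.message);
```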
DLM: Okay. I am next on the queue, and this is about the `Error.captureStackTrace` proposal. Unlike the previous proposal, this is something SpiderMonkey, JavaScriptCore, and V8 have all shipped. It seems unlikely that anyone will unship it. It makes sense to continue with that proposal even if we say: please don’t ever use this, there’s something better—use the proposal we are talking about now. I don’t think we can remove it. I think it makes sense to standardize the behavior, because there are some behavior differences; this is something we do because it exists on the web and we run into compatibility problems on the web. I am happy to hear any feedback on that. I did discuss this with OFR and KM before, and we came to the conclusion that we should standardize the behavior even if we don’t want people to use it in the future.

RBR: I understand the compatibility risk very well. My hope—and I believe this is not too unlikely—is that this API is currently used mostly by people who are very active in open source and have knowledge of the engines, and those people are often also very active in reworking code to the new standard.

RBR: So my hope is that when we ship this, the usage of `Error.captureStackTrace` drops significantly quite soon—because of the better behavior for the people currently using it, not just the better usability—so that we can effectively remove it. Therefore my wish would be that we postpone the progression of the `Error.captureStackTrace` proposal, ship this, see how it goes for two years, and then decide again.

DLM: Okay. MM is on the queue. You can go first.

MM: Yeah. So just a quick question: the existing things we put in an options bag for error constructors also end up as properties on the error instance, like `cause`. I just want to confirm that there’s no expectation that limit or framesAbove would be reflected as properties on the error object.

RBR: They should not.

MM: Okay. Thanks.
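The distinction MM is confirming is observable with `cause` today. A sketch; the `limit` option shown in the comment is hypothetical proposal syntax:

```javascript
// `cause` passed in the options bag becomes an own data property
// of the instance:
const withCause = new Error("boom", { cause: "disk full" });
const hasCause = Object.hasOwn(withCause, "cause");

// Omitting it leaves no property behind:
const plain = new Error("boom", {});
const hasNoCause = !Object.hasOwn(plain, "cause");
console.log(hasCause, hasNoCause);

// Under the proposals (hypothetical), new Error("boom", { limit: 5 })
// would influence stack capture but create no `limit` property.
```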
RBR: I will make a note to make it explicit.

DLM: Okay. Just to wrap up, that seems okay to me. We can see how the proposal goes. There’s no rush to standardize `Error.captureStackTrace`, at least on my side.

DLM: The queue is empty? We should give it a minute or two in case someone else wants to jump in. And if not, it makes sense to do consensus for this item first, since it’s what we most recently discussed, and then go back to the previous item, if that’s okay with you.

DLM: The queue is still empty. We should ask for consensus for Stage 1.

MM: I already stated that I support.

DLM: Thank you. And support from DMM on the queue. Any other voices of support, or any concerns about this advancing to Stage 1?

MM: Oh, is there a problem statement for Stage 1? It’s a question that applies to both proposals.

MM: In this case, I would not insist on it. It is our known practice.

RBR: In my overview I give that, pretty much. Wait.

MM: Is it easy for you to show that?

RBR: Yeah. So we have this: match the capability of longstanding host-specific facilities while providing a standardized cross-platform option available on all ECMAScript Error subclasses—next to an easier-to-use API and better stack limit handling.

MM: That satisfies me.

DLM: JHD is on the queue with support. I haven’t heard any objections. I think it’s safe to call this Stage 1. Congratulations, RBR.

MM: Thank you.

DLM: Okay. And with that, we can go back to the discussion of the option `limit` proposal, and take a look at MM’s concern here as well.

DLM: Sounds like a motivating statement to me.

RBR: Yeah. I can read it out again. This proposal introduces the new Error options property `limit`: the maximum number of stack frames to capture. But about the problem statement, conceptually… this one.
It provides a standardized cross-engine way to control stack depth per error instance and avoid manipulating a global knob that affects unrelated errors. It’s a very short one; I can expand on that.

DLM: Short is good, I think. Yeah. For this proposal, then… let’s call for consensus and ask for any people that would like to voice support or concerns.

MM: Support.

DLM: Support from DMM, CZW, and MM, I believe.

DLM: And JHD as well.

DLM: Are there any concerns about this proposal?

MM: Just to be explicit: no concerns with going to Stage 1. I have many concerns for after it’s in Stage 1.

DLM: Any Stage 1 concerns? Thanks for clarifying. Okay. Looks like we have a +1 from SFC. Congratulations on Stage 1.

### Speaker's Summary of Key Points

* List
* of
* things

### Conclusion

* List
* of
* things

DLM: I guess, just don’t forget to enter conclusions and summaries in the notes, please. And with that, I believe that is our agenda for the first day. The other topics tomorrow do not move up, and we don’t have time for them even if we could. So I guess that is that. We have never actually ended a day early before. So thanks, everyone. And I guess—

JHD: Real quick—I didn’t mention it earlier. Please review the GitHub teams for your employer or Ecma member organization, and file issues for any corrections that need to be made. Thank you.

DLM: Thanks, JHD.

DLM: If there’s nothing else, then yeah, let’s call it a day. Thanks, everyone.
diff --git a/meetings/2026-01/january-21.md b/meetings/2026-01/january-21.md
new file mode 100644
index 0000000..43567fd
--- /dev/null
+++ b/meetings/2026-01/january-21.md
@@ -0,0 +1,1237 @@

# 112th TC39 Meeting

Day Two—21 January 2026

**Attendees:**

| Name | Abbreviation | Organization |
|-------------------|--------------|--------------------|
| Waldemar Horwat | WH | Invited Expert |
| Duncan MacGregor | DMM | ServiceNow Inc |
| Keith Miller | KM | Apple Inc |
| Philip Chimento | PFC | Igalia |
| Ben Allen | BAN | Igalia |
| Nicolò Ribaudo | NRO | Igalia |
| Richard Gibson | RGN | Agoric |
| Caio Lima | CLA | Igalia |
| Ron Buckton | RBN | F5 |
| Peter Klecha | PKA | Bloomberg |
| Steve Hicks | SHS | Google |
| Istvan Sebestyen | IS | Ecma |
| Jack Works | JWK | Sujitech |
| Dmitry Makhnev | DJM | JetBrains |
| Josh Goldberg | JKG | Invited Expert |
| Olivier Flückiger | OFR | Google |
| Jonas Haukenes | JHS | Uni. of Bergen |
| Lea Verou | LVU | OpenJS |
| Aki Braun | AKI | Ecma International |
| Chris de Almeida | CDA | IBM |
| Chip Morningstar | CM | Consensys |
| Dan Minor | DLM | Mozilla |
| Jordan Harband | JHD | Socket |
| John Hax | JHX | Invited Expert |
| Justin Ridgewell | JRL | Google |
| Michael Ficarra | MF | F5 |
| Mark S. Miller | MM | Agoric |
| Ruben Bridgewater | RBR | Invited Expert |
| Ujjwal Sharma | USA | Igalia |

## Composable value-backed accessors for Stage 1

Presenter: Lea Verou (LVU)

* [proposal](https://github.com/LeaVerou/proposal-composable-accessors)
* [slides](https://projects.verou.me/proposal-composable-value-accessors/slides/)

USA: With that, let’s move to the first topic that we have today, which is composable value-backed accessors. LVU is here on the call. Are you ready to present?

LVU: Sure, let me just share my screen.

USA: Awesome.

LVU: Mm-hmm. Can you see my slides?

USA: We do now.

LVU: All right.
Should I start, or are people still getting ready to take notes and stuff?

USA: I think it should be okay.

LVU: All right. So let’s first talk about the problem space—what this is trying to solve. As a recap: in the November 2025 plenary, I brought a Stage 0 proposal for class fields introspection; I’m sure many of you remember it. It was about exposing public class fields through a data structure so that classes could be introspected—the idea being that you could already introspect accessors and methods, but not fields, even though they are actually often part of the public API.

LVU: There was no consensus for Stage 1 for that, because it was argued that fields are internal implementation details of the constructor, which makes sense—that is how they’re implemented—and also that it would violate abstraction, because even though they are public class fields in name, they are often used for things that are less public, such as implementing workarounds because JavaScript doesn’t have protected, and things like that. However, in the last bit of that discussion, there was general consensus in the room that we do need a way for classes to declare their actual public data properties explicitly, in a way that can be introspected. The lack of consensus was just about whether this needs to be explicit on the part of the class author, or whether any class field could be introspected. So this is trying to address that problem.

LVU: And I’ve been working with MF and JHD on fleshing out the first-class protocols proposal, which I’m now convinced is a much better solution for class composition than either mixins or most other things. However, we do have consensus that class fields should not be able to satisfy protocol requirements, since that would expose class fields to the outside world, which, based on the previous plenary, we don’t want to do.
And also, for different reasons, protocols should not be able to provide class fields: if they could, you would need to have initializers, and if it’s the implementing class running them, that leaks when instances are being created—we don’t want that—and if it’s anything else, that’s a serious inconsistency: we shouldn’t have the same syntax mean different things. So we decided that class fields cannot satisfy, or be provided by, first-class protocols. However, that means right now protocols can only be satisfied through accessors and methods, and can only provide accessors and methods, which, with the current status quo, can be cumbersome.

LVU: And there’s the question: what is the right primitive? If we allow classes to declare the data properties that are actually part of their public API, what should that be? I don’t think we need a new primitive for that. Accessors are the perfect fit—that is what host environments often use for their public data properties: accessors that set and read a private slot. Accessors are already part of the class shape, they’re already introspectable, and they’re often behind a lot of public data properties anyway. Often you start with something that just reads and writes data, and then eventually the needs change: maybe you need to validate, maybe you need to transform the values before you store them, maybe you need some side effect when the values are changed. There are all sorts of use cases where you start with a data property and eventually end up with an accessor. All we need is to basically make—can you actually see my pointer? You can’t, right?

CDA: Yes.

LVU: Oh, you can see my pointer?

CDA: Yes, you’re doing an imaginary circle with your pointer.

LVU: Okay, so we basically need to make something like this have similar DX to a public class field—and I know what many of you are thinking right now: have you heard about the grouped and auto accessors proposal? Yes, you have.
I’ll get to it soon. Please bear with me. So once we start talking about how we can improve the DX of certain common accessor cases, the natural question is: what else can we fix around accessors? Can we fix other common use cases of accessors? And a little bit of background from a human-factors perspective. We all know good APIs make simple things easy and complex things possible. If you put this on a chart—a 2D plane of use-case complexity versus effort—you basically want a dot somewhere in the lower left, because simple things should be easy, and you also want a point somewhere on the far right, because complex things should be possible—the lower the better, but it could also be higher. However, there are also a lot of use cases between these two points, and how we make complex things possible matters a lot. And I would argue that most complex things are actually additive: they’re basically the simple thing plus a little bit of extra something. And there are two core paths to making complex things possible. A very common pattern you see in APIs is a path for making simple things easy that lets you accomplish the simplest of cases very easily—but if you want to go beyond that, you have to entirely recreate the simple thing as well. You have to rebuild the entire simple thing before you can add anything to it, which introduces a usability cliff, because now a small increase in use-case complexity gets you a disproportionate increase in effort. If you use the HTML video element, you’re familiar with one example of this. There’s the controls attribute that gives you a nice mobile-friendly toolbar that works very well for what that toolbar includes. If you go beyond that—let’s say, add a button to it—now you have to build your own toolbar from scratch.
And then there’s also the approach of providing simple things that are extensible—through progressive defaults, for example—which allows functionality to be layered on top of the simple thing. So adding incremental value requires only proportional effort, not recreating anything. I would say one example of that is `Intl.DateTimeFormat` and related APIs: you can get a locale-aware date very easily, and you can also provide a lot of additional options to customize the output, and customize it very deeply.

LVU: So accessors use the first model right now. Even though most use cases are actually backed by a real data property one way or another—sometimes it’s a private property, sometimes it’s some other property, sometimes two properties nested in some deep object somewhere—usually there is actually a real data property behind them. I’m struggling to think of many use cases I’ve seen that don’t have any real data property at all. However, the current accessor syntax requires all the plumbing to be rebuilt from scratch before additional functionality can be layered over it. And you could argue it’s not a lot of plumbing, but in certain domains you have a lot of properties—in the DOM, for example, in web components, where you’re making elements that subclass native elements, it’s not unheard of to have, like, 30 data properties. I can give you examples. So it adds up without conveniences.

LVU: So one example is data validation. Suppose you have a property and you want to throw when it’s not a positive number—a very common thing. You go from this, which is very simple and easy and readable, to this entire thing. What you started with was all signal: every single part of the class syntax was meaningful—the name, the assignment, and the value. And now you get something that is about 50% boilerplate, and you have to hunt down the actual meaningful bits.
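The before/after LVU is describing (the slides themselves are not reproduced in the notes) looks roughly like this. A sketch with hypothetical names:

```javascript
// Before: all signal, a plain public class field.
class SimpleBox {
  size = 1;
}

// After: one validation rule forces rebuilding all the plumbing, a
// private backing field plus a getter and a setter.
class ValidatedBox {
  #size = 1;
  get size() {
    return this.#size;
  }
  set size(value) {
    if (typeof value !== "number" || value <= 0) {
      throw new RangeError("size must be a positive number");
    }
    this.#size = value;
  }
}

const box = new ValidatedBox();
box.size = 5;
console.log(box.size, new SimpleBox().size);
```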
A second problem statement is: can we improve the signal-to-noise ratio of these common use cases?
+
+LVU: And let’s get back to the grouped and auto accessors proposal, because this is quite relevant, and in some ways it is also complementary. So the grouped and auto accessors proposal lets you turn this simple case into something like this. Which at first glance looks great. It’s all signal. And it’s not exactly like this, because the backing private property is not the `#` name shown, and it desugars to something like this. And let’s go back to the data validation case where we wanted to add a little bit more logic. Now the main help we get is that we can use grouped accessors to not repeat the property name, and we can decorate the accessors as a whole, which is very useful, don’t get me wrong. But it does not address the issue that incremental value should require only incremental effort. It still has that cliff.
+
+LVU: I would also question whether—this might be a little bikesheddy, but I would question whether accessor is the right framing as a concept. So for the simple case, when you don’t have any logic and you’re just effectively declaring a data property that actually becomes part of the class shape, I would argue that, from the point of view of the mental model of the author writing in the language, the accessors are essentially implementation details. The user intent at that moment is not “I want an accessor”. Accessors are a means to an end. Accessors are something that gets our data property to become part of the class shape. But they’re not the actual intent, and usually it’s better practice to design UIs and APIs around the intent. Also, from what I’ve observed interfacing with authors, it seems that a big chunk of JavaScript authors are not even familiar with accessor as a term. And—just for illustrative purposes—I asked a few days ago, I said: without searching, would you be able to explain what an accessor is in JS?
And about 7 out of 10 JavaScript authors replied no. And you could argue there’s a lot of snowball sampling bias, but this also validates my own experience talking to JavaScript developers: this is kind of an obscure term to them. And completely anecdotally—and maybe don’t include this part in the minutes—but \[omitted]. So our perspective might be a little skewed by how common the term is here, because everybody in this group knows what an accessor is. And even for JavaScript authors who do know, the current mental model is: does it run logic on reads and writes? Then I need an accessor. Does it not? Then I need a data property. So this kind of matches that mental model. And yes, I am also suggesting we generate an accessor behind the scenes. I’m not suggesting we introduce a new primitive, don’t get me wrong. There’s a difference between whether it is up front and center in the syntax and whether it’s an implementation detail. For example, we did not design class syntax around functions, even though it is actually based on functions and prototypes behind the scenes. And I would suggest that this should also be the case here: the syntax should be around what the user is actually trying to accomplish.
+
+LVU: So there are two problem statements here, which I know is unusual. The first problem statement, which actually motivated this, is: we need a higher signal-to-noise-ratio way to define these data properties that are part of the class’s public API, that become part of the class shape. And JHD gave a really nice example in the thread yesterday: RegExp objects have a `flags` property, and even though it’s essentially a data property, you can actually introspect it on the class shape. And it does run silent validation logic as well. And the second problem statement is that additive accessor use cases are so common that they deserve better DX than the general accessor syntax.
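JHD’s `RegExp` example is checkable in any current engine: `flags` reads like a plain data property, but it is specified as a getter-only accessor on the prototype, so it is introspectable on the class shape, and assignments to it fail (silently, in sloppy mode):

```javascript
const desc = Object.getOwnPropertyDescriptor(RegExp.prototype, "flags");
console.log(typeof desc.get); // "function" — it's an accessor
console.log(desc.set);        // undefined — no setter, so writes are rejected

const re = /abc/gi;
console.log(re.flags); // "gi" — yet it reads like ordinary data
```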
+
+LVU: And you might ask, why should we solve them together? Why not just have two proposals, each of them solving one of these problems? I think there are good reasons to solve these problems together. First, as I mentioned, public data properties often evolve into accessors. In version 1 you’re defining a data property; in version 2 this often becomes an accessor. So you’re going to hit this very quickly anyway. And I know that solving them together does constrain the solution space, which at first seems like a drawback—why not explore both problems separately and come up with solutions that are not possible if we solve them together? But I think it’s a productive constraining of the solution space, because if we do solve them separately and introduce separate capabilities, that could add more clutter to the language. Also, for something like this to work for the metaprogramming use cases I discussed in the previous meeting, it needs to be universal in the ecosystem. There is more value for the ecosystem as a whole than the incremental value for each individual author defining their data properties this way instead of the current practice of using class fields. Like, we need to motivate them to stop using class fields for these things. How can we motivate them enough? Especially since people defining APIs are often not thinking of the metaprogramming use cases. And I think something that lets them more easily add value down the line might be a good additional value add, and it might increase motivation. This is a hypothesis, I could be wrong.
+
+LVU: And this problem statement is basically the core of this proposal, since this is for Stage 1. So it’s all about the problem statement. There is a syntax exploration. It is important to note that any syntax at this point is meant to be illustrative—how could something like this possibly look? It could end up being completely different. It’s just to seed the discussion.
So by looking at use cases, it basically seems that there are two core components to these use cases if you break them down conceptually. There’s a base: what is the actual data property that is storing the value? And in a lot of these cases, the actual data property is irrelevant. It’s only created because you have to create it today, and it doesn’t actually matter—you never access it outside the accessor. And the only reason you need an accessor today is, well, either making the property part of the class shape, in which case you could stop there, or layering a little bit of additional functionality on top—data validation, for example. Should the write even happen? In some APIs the writes are rejected silently. Even in JavaScript APIs, if you set RegExp `flags` to some invalid value, it gets silently rejected. In DOM APIs this is also common. Or it could also fail loudly. There’s data normalization: transforming the value before it is stored—for example, accepting both strings containing numbers and actual numbers, but always storing a number. Side effects, before or after the write happens. And then there’s also another class of use cases that may or may not be out of scope: when the actual value is stored in an existing property—whether that’s a private property, a property of some other object, or another public data property—and we just want to layer stuff on top of it. Like, we may want to layer data transformation on top of it, or access control: have a property that can only be read but not written from the outside, for example. And the most discerning of you may notice that these have a different shape, because I don’t think these make much sense when you have an internal property as the base. What’s the point of transforming it on reads if you can’t access the underlying data property separately? And not all of these need to be in scope.
Especially the layers, which are presented as examples of what types of common functionality we might want to be able to layer on top. The main thing is this separation: we have the base that is interfacing with an actual data property, and then you have these layers of functionality on top of it. And I’m sure there’s a better thing to call them than layers, but I’m going to call them layers for now.
+
+LVU: And the good thing about this type of approach is that these components are composable. They can ship independently; every single bit of this can take shape and be developed independently, possibly even as a separate proposal. The base can desugar to a regular accessor. If we were starting over, we could say that this is a case where you can have a value property that is also an accessor, but that ship has sailed. So probably the most reasonable design would be for the base to desugar to a regular accessor. And for the layers there are many ways to implement them; they could be descriptors or built-in decorators. I think this is common enough that it should be something built in rather than user-land, so other code can depend on it.
+
+LVU: Some other interesting prior art—this is not a terribly common concept across languages, but there is some prior art. Swift has property observers, called `willSet` and `didSet`, which let you run side effects before/after a property is set. They essentially let you accomplish some of these use cases by overwriting the value, which I think is a little inelegant. But there’s some prior art.
+
+LVU: And then, providing some syntax for the internal property—for the base case of having an internal property where the data property itself doesn’t actually matter—it seems that one way to do it might be a keyword, like maybe `data` or maybe `property`. And for a lot of the issues with the first problem statement, this alone could suffice at first: for the first-class protocol use cases, for the class introspection use cases.
And the value add is not just the elimination of the boilerplate, but also of the cognitive overhead of having to name another property and pick the right convention: do we use a private property, do we use an underscore property because maybe subclasses need access to it too, do we use a symbol? It just eliminates all of this cognitive overhead. I do think, though, that wherever the actual value is stored in this case should probably be some internal slot that is not accessible or observable in any way outside the accessor. It should not matter whether it is implemented with a private property or not, or maybe it’s implemented with an internal slot, or maybe it’s implemented with a WeakMap somewhere. That should not be observable. It should be possible to even change this down the line. Right now we don’t have private properties for objects, so the auto accessors proposal doesn’t support objects. If this becomes an implementation detail, we could support objects from the get-go. Which I think is important, because objects are often used to hold future class members. There are helpers for class composition that depend on reading objects’ property descriptors, and it would not be great to break this.
+
+LVU: And we could have a similar base syntax for when the existing property is some other property—essentially a binding; it could be a property chain. And this unlocks a lot of use cases around object delegation. Again, if any of you have written web components with ElementInternals, there are dozens of properties that you often need to expose to the outside world that are basically just literally taking the internals object’s properties and exposing them—for form-associated elements, for example. Or even first-class protocols.
Even though we’re discussing adding sugar for this, in their current form, if you want to expose the provided members of a first-class protocol, you need to write glue code to say “I’m going to expose this symbol as this public string property in my class”, and something like this could make it easier. And then for the layered functionality, again, there are so many options for how that could be done. But a big bifurcation, a big fork, seems to be: do we introduce new syntax, or do we have built-in decorators? And yes, so far decorators have been mainly discussed as a userland thing, and it might be good to have a design principle on whether we want host environments to provide built-in decorators—and I believe the consensus is that we do. We just don’t have any yet.
+
+LVU: So built-in decorators could be an option. And I do anticipate that a lot of the pushback would be “but we can just use decorators for this”. And indeed, that would make the scope a lot smaller. All we need to do is provide a function, a decorator, and that’s it. Whereas any syntax change is more substantial; it slows down adoption because you can’t polyfill it as easily, you need to transpile it. But I do think there is value in a dedicated syntax. Even little things: if you use decorators for this, you have to pass the entire function to the decorator, which puts the actually important bits—the property name, the property’s initial value—after all of the decorators. Which is fine if the decorators are one-liners, but not if it’s some more complex logic; and yes, you can abstract the complex logic away, but we know how people write code. And also, one reason I like the approach of keeping them separate is that it is lossless. It means we can actually maintain the distinction between “this is my setter” and “this is the code for, say, validation or transformation”, or whatever layers we decide are actually important enough.
They could even be decorated separately, in the same way get and set can be decorated separately. Whereas the only way to do it with a decorator is just wrapping the existing setter—and now you’ve lost the reference to your setter function; you can’t compare it with anything. Also, there is no immediately obvious imperative API, which I believe is by design for decorators. If it’s a new method definition keyword and new descriptor keys, then you just have `Object.defineProperty` and it just works. Similarly, object literal support comes out of the box, and it could compose with any member. Whereas if it’s a decorator, it needs to be applied to accessors and cannot be applied to data properties. Although that needs some explanation, which will come in the next few slides.
+
+LVU: So one other potential direction is that regular data properties could be auto-upgraded the first time you use one of these. And I’ve used `validate` in these examples—this is just an example. There could be a different one for side effects, like `finally` or whatever; there could be another one for normalization. It depends on what we decide is important enough. For many of these, we could decide: actually, this is out of scope, do it with classic accessors. And, for example, we could define it so that you don’t have to specify that this is a property at all. If you have something like this, it is automatically upgraded, which also allows you to not define the property at all if you don’t care about the initial value.
+
+LVU: And like I said earlier, I think the grouped accessors proposal is nicely complementary with an approach like this—which is, again, only one approach; remember, any syntax is just illustrative. If we do go with something like this, then the grouped accessors proposal is fantastic for making it a single conceptual unit, eliminating repetition of the property name. Basically the same reason grouped accessors are valuable today, when we just have get and set.
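As a concrete (and purely hypothetical) illustration of the auto-upgrade idea: a `validate` layer might desugar to an ordinary accessor over a hidden slot, with the validation step kept distinct from the generated setter. The syntax in the comment and all names below are assumptions for illustration, not proposed spec text:

```javascript
// Hypothetical source (illustrative only, not proposed syntax):
//
//   class Temperature {
//     celsius = 0;
//     validate celsius(value) {
//       if (!Number.isFinite(value)) throw new RangeError("…");
//     }
//   }
//
// One conceivable hand-written desugaring; the real backing store would be
// an unobservable internal slot, approximated here with a private field:
const validateCelsius = (value) => {
  if (!Number.isFinite(value)) {
    throw new RangeError("celsius must be a finite number");
  }
};

class Temperature {
  #celsius = 0;
  get celsius() {
    return this.#celsius;
  }
  set celsius(value) {
    validateCelsius(value); // the layer runs before the base write
    this.#celsius = value;
  }
}
```

Keeping `validateCelsius` addressable apart from the setter that calls it mirrors LVU’s “lossless” point: the validation logic remains a distinct piece rather than being fused into a wrapped setter.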
So to recap the problem statement: we need classes to be able to explicitly define which properties are part of their public API, even if they don’t run logic. And I think additive accessor use cases are so common that they deserve better DX. And I think there are reasons to solve these problems together; if you disagree, I guess this could be split into two proposals. So with that, can I have Stage 1? I have not looked at the queue.
+
+CDA: Yeah, we do have some stuff on the queue. Let’s go to RBN.
+
+RBN: So earlier in the slides, you were talking about using this proposal to handle validation, normalization—the slide that had all of the puzzle pieces, essentially. If you could go back to that. The composable accessor components. Yeah.
+
+LVU: Yes.
+
+RBN: So pretty much all of these are intended to be solved by decorators. One important bit: when we were first going through the decorators proposal, as it went through its various iterations and stages, we had a lot of feedback from implementers about an earlier approach to decorators which would reify the initializer and allow you to wrap it. And we had feedback about the experimental decorators in TypeScript, where people were overwriting fields with a getter/setter using `Object.defineProperty`, and these were all footguns we wanted to avoid. So implementers had a specific requirement that there be some type of marker on a data property that would imply a syntactic transformation to an accessor, so that decorators that intercept for things like data validation, normalization, side effects for observation and notification, and access control would all go through an accessor—because an accessor, what would be a normal getter/setter, implies that there is logic that occurs.
So one of the reasons we introduced the accessor keyword, and brought it over from the grouped and auto accessors proposal, is that it explicitly defines this syntactic transformation from a normal data property, which should not have any logic associated with it at all, to something that has some type of logic associated with it. So `accessor` plus a property name in a class on its own has some, but very limited, use. It basically just affects what you see when you do a defineProperty or a getOwnPropertyDescriptor, and allows you to do some things with inheritance. But on its own, it’s not entirely usable without decorators, and it was such a necessary part of decorators, per implementer requirements, which is why it was moved from the grouped proposal to the decorators proposal. And all of the things that were described, even the Swift `willSet`/`didSet`, are achievable with decorators—and the implementation of a decorator can hide the fact that it’s wrapping the set, or make those things happen. So I think that accessors and decorators really do solve 90% of the use cases in this proposal. The one thing they don’t cover, from the description and the thing you show a little later in the slides, is the use case around property forwarding, which I think is a completely separate discussion that we should have about whether or not that’s something we want to consider including.
+
+LVU: Could I reply?
+
+RBN: Yes.
+
+LVU: Okay. So first off, I’m not arguing for no syntactic switch; in all the potential syntaxes I listed, there was a syntactic switch. The syntactic switch does not have to be called `accessor`, but it was. Even if you upgrade data properties, the syntactic switch then becomes `validate`, `transform`, whatever you have. In the same way that you can create an accessor just by including a get without having to include a set—if these become method definition keywords, they’re essentially treated the same as get and set for that purpose.
Also, I’m not quite sure about you saying that decorators suffice for this, because decorators are also an option within this proposal: one potential solution does involve decorators that are just provided by the host environment. It does not have to be a syntactic switch. I do personally believe there are certain advantages to having a syntactic switch, which I listed here \[shows slide 27/32]. But it is totally fair game to just provide built-in decorators as a solution.
+
+LVU: So—but, yeah, I think decorators are an excellent escape hatch. They do make very complex things possible; they give a lot of power to JavaScript authors that was previously only available to implementations. But they also do have drawbacks, like the fact that you have to wrap functions and you lose the initial references, for example. Basically all the points here. But it may end up that the solution to this is that we need to provide a bunch of decorators.
+
+RBN: I do have a separate issue on this, but I also don’t really agree with a lot of the bullet points you have here. I think that they are incorrect.
+
+LVU: Okay.
+
+RBN: I can come to that later, because there are other items on the queue.
+
+LVU: Okay.
+
+CDA: RGN.
+
+RGN: The point I was about to make is that, compared with decorators, this seems to add too much syntax for a much narrower use. But if I’m understanding correctly, you just said that one possible trajectory of this is becoming just built-in decorators?
+
+LVU: Yes.
+
+RGN: Okay. Well, to the extent that it is about introducing syntax, I would be opposed. It’s exhausting a whole lot of budget for something that is solvable with decorators, which comparatively provide more flexibility with a smaller footprint. But if this takes the trajectory of defining built-in decorators, then that point is addressed.
+
+CDA: DLM.
+
+DLM: I just want to agree with RGN.
I think we should be cautious about introducing new syntax, and when we do consider new syntax, I’m of the opinion that we should also be unlocking new capabilities in the language rather than providing syntactic sugar.
+
+LVU: Can I reply to that quickly?
+
+CDA: Yeah.
+
+LVU: If we do go with the syntax path, that was an explicit goal: that it is not just sugar. It would remain separate, as a separate descriptor, so that other libraries, for example, could inspect what the validation logic of this descriptor is, and that would be separate from its setter. I completely agree that if we do go with the syntax path, it’s completely pointless to have it just be syntax sugar. It’s basically the bifurcation: either we go with syntax and then the distinction is maintained, or we go with built-in decorators.
+
+RGN: I’m opposed to the syntax path. And I don’t want to increase the complexity of property descriptors.
+
+LVU: Okay.
+
+CDA: RGN, you’re next on the queue.
+
+RGN: To the next point, it also seems to me like an anti-goal to obscure the fact that user code runs during property gets and sets. Focusing on the syntax-oriented path, a `property` keyword seems misleading, because what actually happens during property accesses when user code runs is rather different from what happens when it’s just a data property, and that’s important in a lot of cases, particularly security-relevant ones. And misleading authors is not something that I want to do. Accessors should be explicit in a way that highlights the fact that something different is going on.
+
+LVU: Can I reply?
+
+RGN: Please.
+
+LVU: So, a few things. First, that works if authors actually understand what an accessor is, which it seems many don’t. Second, the imperative API we have to define accessors is also named around “property”—we don’t call it defineAccessor. And the third point: that was mainly about the case where you don’t run logic.
I think it confuses the mental model when you don’t have logic and you still have to declare the accessor.
+
+RGN: You broke up for me. I didn’t catch the response.
+
+LVU: The point I was making around accessor was about the very simple case, where you don’t run additional code and you’re literally just getting and setting a data property.
+
+RGN: With something like the property keyword we’re looking at here?
+
+LVU: Yes.
+
+RGN: That seems like an analog—actually, it seems identical to the `accessor` proposal.
+
+LVU: For the very simple case, yes, there’s a small overlap that is basically that simple case. The rest of it is quite different. But also, as I explained, I think accessor is confusing syntax for that simple case—but that could just be addressed by changing the keyword, right? And there are also other implementation details: right now, the auto accessors proposal is implemented with a private field, which limits it to anything that can have a private field. I don’t think it should be observable from the outside what that private slot is. Whether it’s implemented through a private field or an internal slot should not be observable to authors. If they don’t want to access it as a separate property, it should not matter how it’s implemented, and then the underlying implementation could even change down the line as new capabilities emerge. But I think it’s important to support objects from the get-go, and right now we can’t do that if it depends on private fields.
+
+RGN: Do you think that supporting objects from the get-go applies more to this proposal than it does to decorators?
+
+LVU: Sorry, could you say that again?
+
+RGN: The position that it’s important to support objects that are not associated with a class—does that apply more so to this proposal than it does to decorators?
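The object-literal limitation is easy to demonstrate: classic get/set syntax works on plain objects today, but the backing store has to be a publicly visible property (an underscore-convention name in this sketch), because private `#` names are only legal inside class bodies—the constraint LVU argues an internal-slot-based design would remove. The object and its properties here are made up for illustration:

```javascript
const settings = {
  _volume: 5, // the backing store leaks into the object's public shape
  get volume() {
    return this._volume;
  },
  set volume(v) {
    if (typeof v !== "number" || v < 0 || v > 10) {
      throw new RangeError("volume must be between 0 and 10");
    }
    this._volume = v;
  },
};

settings.volume = 7;
console.log(settings.volume);  // 7
console.log(settings._volume); // 7 — the "private" slot is fully observable
```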
+
+LVU: I think ideally objects should also support decorators, but this is not a proposal around decorators. But I think it is confusing that you can have two types of accessors: one of them works everywhere and the other one only works for classes. But, again, that can be fixed in the auto accessors proposal.
+
+RGN: Right. That’s my position also. Okay, as for the other points, I see there are topics for them in TCQ and I will yield to replies.
+
+CDA: KM has a reply.
+
+KM: Yes, all on the same kind of point: accessors are definitely slower outside of the optimizing JITs, and for most web pages it’s, I want to say, 2 to 3x slower than a data property, because you have to actually make a function call, and if you’re not in the optimizing tiers you’re not going to inline it. And I think, from Speedometer—I know this for Safari; I don’t know if it’s true for other engines, but I assume it’s probably roughly the same—by lines of code, almost no code makes it to the optimizing compilers, and by time, it’s about 50/50. So, because I would expect this to be spread all over your code base, almost none of the code base will benefit from that optimization, the optimization to inline the accessor. And it’s a syntax that hides the cost from the user, which in general I’m not usually in favor of unless there’s a strong extenuating circumstance—like a benefit, which it is unclear to me that this provides.
+
+LVU: I had a conversation with an implementer in the previous meeting—I forget who it was; if they are here, please speak—about whether the cost of using an accessor that just proxies a data property is significant. And they said that, for at least their implementation—and I’m not sure which it was—once it’s JIT compiled, it’s basically the same. But also, once there’s a declarative primitive for this, then engines can optimize even further.
And that is actually yet another reason to have a declarative primitive when your underlying data property is another existing property. And also, accessors are used anyway today if you want this; it’s just a lot more hassle. Although you could argue that the hassle is a good thing.
+
+KM: I guess that’s kind of what I’m arguing: the hassle makes you recognize you are doing a thing that is slower. Also, JIT means different things in different engines, right? For JavaScriptCore, I’ll give an example: there are four tiers—an interpreter, a non-optimizing JIT, an optimizing JIT, and the fully optimizing JIT. And only the optimizing JITs will do any inlining, right, because that’s a speculative thing; you have to guess, because every call in JavaScript is a virtual call, and you have to guess: am I actually calling this thing? And you’re not going to see that at the JIT that is not optimizing—which is what I mean by JIT, to be precise.
+
+LVU: So if something like that goes through, I wonder if a potential optimization might be one that basically treats it as a data property. And, I don’t know, we’re going into the weeds. But I’m not sure it’s the right call to compare the performance of completely ad hoc accessors today with what optimized thing we could have once we know that this is not just any accessor—it is an accessor that just writes and reads to this private thing. That enables a lot more optimization.
+
+KM: It very often happens in this committee that people say, well, you know, in the abstract you can create an optimization that solves this problem and makes any and all costs go away. But those optimizations have complexity that is multiplicative with other complexity in your engine, and you get exponential growth in your complexity.
And one of the hardest parts of working on a JavaScript compiler is this intense interplay between many different dynamic optimizations that you have to apply in order to get the best possible performance. But that cost is all hidden from the end users of JavaScript. I’m not in favor of creating lots of idioms that require lots of complexity to be performant without sufficient benefit to compensate for it, right?
+
+LVU: Sorry, I’m aware, but also it seems that we did have consensus that classes need to have some way to define these. And it seems that some way to define accessors might be the best path forward. So it seems to me that we’re going to have that either way, and the question is, like, we’re going to have to do some optimization around whatever we end up having anyway, right?
+
+KM: I’m not sure I follow. I mean, maybe I’m misremembering the conversation, but I don’t recall this consensus. But perhaps I’m just misremembering from the Tokyo meeting.
+
+LVU: Do we have consensus that classes should be able to declare properties that are actually public? I think we should probably start from that. But there is also a queue.
+
+CDA: So DLM was on the queue to agree with KM, end of message. But that was a little bit ago. DLM, do you want to just briefly say what specifically you were agreeing with?
+
+DLM: Yes, thank you. I think my agreement is: yes, we share the performance concerns that an accessor is going to be slower than a property except at the very highest JIT tiers, and it seems unlikely that much code that’s actually using this will hit those JIT tiers. Most likely we’ll have a performance imbalance.
+
+CDA: There’s a reply from RBN.
+
+RBN: This goes to something you were saying a few minutes ago about the keyword indicator to opt into this type of syntax. We did have some discussion about keywords back when we were looking for a solution for the decorators proposal.
Even though you showed that poll about the accessor keyword today, I would say that a lot of the folks who use JavaScript may not necessarily be using decorators; they may not have encountered this keyword. They may not know the internals of the specification that describe what accessors are, which is where the term lived before the accessor keyword. But we made the decision to use accessor as the keyword because it was an accurate description that didn’t use ambiguous terms. Some of the things you showed in some of the other slides and here is something like `property`. Property was something we discussed with both grouped and auto accessors and with the decorators proposal, but the problem with property is that it’s ambiguous, especially when you look at the specification, because you have data properties and you have these getter and setter accessor properties, which are both called properties. Pretty much everything that’s on an object is a property. So property is a poor term because it’s not clear. And `data` is in many cases unnecessary, because a normal class field or a normal object literal assignment is a data property, and data properties have no logic associated with them, for reasons that have already been discussed—performance costs, hidden costs of dynamic evaluation, all these things being something you don’t want with data properties. You want those to be fast and to have no unnecessary indirection. We haven’t used any of those keywords, and accessor is kind of the clear thing here. And also, I think we’re so far along with decorators that I’d be kind of apprehensive about trying to change the accessor keyword to something else. Not that we couldn’t, but it would have to overcome a significant hurdle at this point—we’d have to show sufficient reason for changing it—so I wouldn’t say it’s necessarily worth it at this particular moment.
+
+LVU: Can I reply?
So discussing whether the word should be property or data or accessor is exactly the type of bikeshedding that I said we should try to avoid. I do have my opinions, and I shared them because I think it might be useful. It is not the right point, I think, to be discussing what the keyword should be.

RBN: I only bring it up because you are the one introducing the discussion about whether or not the keyword should change. My point is we already have a keyword that does these things.

LVU: But also, no implementations have shipped, as far as I know, outside of TypeScript and transpilers. It has not shipped in browsers. Has it shipped in runtimes?

RBN: I don’t believe it’s shipped.

LVU: I don’t believe it’s shipped in Node, at least; I don’t know about other runtimes.

RBN: I do know there are active implementations that are currently ongoing, and I’m not clear on whether the accessor keyword itself has been implemented anywhere. I just know that decorators have not been fully implemented.

CDA: Okay. So just a quick point of order, because this was asked in the Matrix chat about the time box. Technically this time box has about ten minutes left, but we have the entire afternoon session open, so we can move the last topics of the day down. We’re happy to continue discussion on this topic as long as it remains productive, which so far it has been. We will go next to RBN.

RBN: So if we could go back to that bulleted list of the comparisons between decorators and the syntax you’re proposing. One of the things mentioned here is object literal support out of the box versus speculative future extension. There are two places you might consider this to be a speculative future extension, and essentially, they are no more speculative than what is being proposed. One is that decorators themselves can decorate public fields. There are public field decorators. That is part of the current Stage 3 proposal for decorators.
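For context, the Stage 3 field-decorator protocol RBN refers to can be sketched in plain JavaScript by invoking the decorator function by hand, since the `@dec` syntax has not shipped in engines. The `double` decorator and the manual expansion are illustrative, and the context object is simplified relative to the real Stage 3 API:

```javascript
// A field decorator receives `undefined` as its first argument and may only
// return a function that maps the field's initial value.
function double(value, context) {
  if (context.kind !== "field") throw new TypeError("field decorator only");
  return function (initialValue) {
    return initialValue * 2; // runs with `this` bound to the new instance
  };
}

// Approximately what `class C { @double x = 21 }` expands to:
class C {
  x;
  constructor() {
    const mapInit = double(undefined, { kind: "field", name: "x" });
    this.x = mapInit.call(this, 21);
  }
}

console.log(new C().x); // 42
```

Note that the returned function can only reshape the initial value; the decorator never gets to replace the field itself.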
They cannot provide mutations to turn a field into a getter/setter. And that is by design. That was very explicit: earlier designs allowed you to switch this, and implementers said do not do this. The second part is the accessor keyword itself. While not specified as part of the decorators proposal, because decorators focus only on classes, the accessor keyword in the grouped and auto accessors proposal does support object literals. That is actually in the example you see as soon as you open the README, and it is part of that proposal. I wouldn’t call this a speculative future extension. This is already proposed.

LVU: Just first off, you said “the syntax you’re proposing”. I’m not proposing any syntax, and using decorators is a totally valid option.

RBN: On the right, you say it’s a bad thing because it’s speculative. On the left, it’s supported out of the box. And I would say the feature you’re talking about would be supported out of the box on either side.

LVU: If auto accessors and decorators ship in browsers, they would support this out of the box; my understanding was that the object literal part would come later because it’s less specified.

RBN: The auto accessors proposal, separate from decorators, is intended to support object decoration. And we have a Stage 1 proposal for functions and object literals that would allow it to work. They are as relevant as out-of-the-box support on either side of this discussion.

LVU: Okay. But, again, you’re basically—you’re saying this option is better than this option. Neither option—like—

RBN: I’m stating that you’re trying to illustrate that it’s a negative, that it’s not supported. But I would say it is supported on both sides.

LVU: Okay.

CDA: There’s a reply from OFR that says agreed, this most probably has the same cost as a normal accessor. Nicolo?
NRO: Yeah, just as a user, I would expect this to have the same cost as getters and setters, because I look at the syntax and it looks much more like getter/setter syntax, so I wouldn’t find it surprising if it’s slower than plain properties.

CDA: RBN, you’re on the queue next.

RBN: So this goes to some of the other bullet points you have here. You list here, under readability, that having the auxiliary bits first is bad. I wouldn’t necessarily say it’s a pro or a con. The reason I say this is that the decorators proposal has, as part of its design, that decorators are evaluated top to bottom, so it supports function composition: composing F and G is F of G of your thing. So it is read outside in. In that case, it is actually intentional and by design that the auxiliary bits, the things you’re going to do to augment the thing that you’re decorating, do come first. So, again, I don’t necessarily think this is a negative. I think this is actually a by-design feature. Especially if you were to try to marry together decorators and this feature, you would have some things that are described above the declaration and some things below the declaration, and that could cause some really messy confusion when it comes to evaluation order, which is the next thing I want to discuss.

RBN: On “it’s lossy”: yes, if they are part of the descriptor, you can access them. That might seem problematic, but there are a lot of folks in the committee that don’t want anything like this on a descriptor, just like initializers on the descriptor. If you use some type of reflection on an object, like `Object.getOwnPropertyDescriptor`, to get the get and set that will apply against the object, you don’t want to lose the things around them, to let someone completely overwrite your validation logic and transformation logic.
I want things to continue to go through that outside-in replacement approach, so that if someone directly invokes the setter against your object, it still goes through the validation logic to get to the inner point. So having them as separate things is not necessarily an improvement.

CDA: RBN, you are also next.

RBN: That was the last thing I wanted to mention. One of my concerns with having this be done through syntax, with these being additional properties on a descriptor for validation and whatnot, is that you would have to have a well-defined order that says validation comes before transformation, which comes before this other thing, and that logic has to be very specific. Decorators have a well-defined evaluation order, which is top to bottom: if you want validation to occur on the outside, before transformation occurs, because these are the allowed inputs, you can do that by specifying the order in which the decorators apply. With defined syntax, that order can’t really be changed. The order has to be well defined if these will be keys on a descriptor, and that could possibly be confusing for users. Using the example on the left, someone might write validate then transform, but if somebody writes transform then validate, and they are keys on a descriptor, that written order is not going to matter. Whatever order they appear in might not matter, and that could possibly be confusing for users.

CDA: Point of order: DMM needs to step down from the notes, which means we need another volunteer for the next 57 minutes. Can we get one person to help with the notes, please?

RBR: I can help.

CDA: RBR. Thank you so much, RBR. We’re good. Please continue.

LVU: Okay. So first off, if we do actually go with both a validation convenience and a transformation convenience, which is TBD, I don’t think anybody would possibly argue that transformation would happen first.
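The ordering being discussed can be sketched as a plain setter today: validation decides whether to accept the value, then transformation decides what to store. The `Temperature` class and its rounding rule are illustrative, not from the proposal:

```javascript
class Temperature {
  #celsius = 0;
  get celsius() { return this.#celsius; }
  set celsius(value) {
    // 1. Validate first: do I want this value at all?
    if (typeof value !== "number" || Number.isNaN(value)) {
      throw new TypeError("celsius must be a number");
    }
    // 2. Then transform: what do I actually store?
    this.#celsius = Math.round(value * 10) / 10;
  }
}

const t = new Temperature();
t.celsius = 21.456;
console.log(t.celsius); // 21.5
```

The convenience under discussion would let these two steps be declared separately instead of hand-written in one setter body, which is where the fixed-order question comes from.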
Like, it’s normal to say: do I want the value or not? I accept it; now, what do I store? But also, decorators have an explicit evaluation order because they basically push a lot of that onto the user. Right? Which is both a good thing and a bad thing, depending on the case. But it seems a lot of the discussion and a lot of the arguments from RBN are basically around this table, and it makes me wish—maybe I should not have included the table, because like I said, I think both of them are valid options. I did include the table because I had opinions, but it doesn’t mean that built-in decorators aren’t an acceptable solution to the problem statement. I would somewhat lean towards syntax, but I also see the argument that syntax is a much bigger scope, and I think that is a big drawback. That is the problem with pros and cons tables: you have the one line here that makes it look like you have one downside and five upsides. You could argue these are not upsides, but even if we did accept that they are upsides, I think this is a fairly substantial downside, as was already argued. Even though personally I would lean towards that despite the tradeoff, it’s not that providing decorators is not an acceptable solution, and it makes me wish that maybe the slide should have been like this: we have these two options, and which one we go with is TBD. Because that is essentially the point we are at right now. We are not discussing whether we should do this or that, or what the keywords should be. It’s basically: do we want to solve these problems? Do we think the problems are worth solving, and is there any gap in the language related to the problems, or are they all resolved by existing proposals?

CDA: NRO?

NRO: Yeah. So there is a separate proposal called decorator metadata, also at Stage 3, and the goal is to allow decorators to add metadata to the individual properties of the class that you can query from the outside.
So the—

LVU: The descriptors? Sorry.

NRO: No. There is like an array or an object that contains the metadata; I don’t remember exactly the shape. But you can query the metadata from user code. This solution could also work for problem 1, other than problem 2, because we would have decorators that define the metadata, and the property access would just be a data property access, but there’s this separate place, a well-known symbol on the class, that holds the metadata that the class has.

CDA: PFC?

PFC: I would like to express a lot of sympathy about the first problem statement here. This is something that I’ve run into repeatedly in one of my side projects trying to get JavaScript objects to play well with other object-oriented paradigms. It’s quite surprising for people who are used to prototype inheritance that class fields are instance-only properties. I support problem statement 1 very enthusiastically, and in that capacity, I would support Stage 1 for this proposal. I am less convinced by problem statement 2, or that it’s necessary to solve them together. But, you know, good developer experience is a worthy goal. I have to say, I would prefer that these problems be solved with the tools that we have, including proposals on the table such as accessors and decorators. In supporting Stage 1, I don’t want to endorse any syntax solution or even any particular solution, because Stage 1 is just not the time for that. But I would support Stage 1.

CDA: RBN?

RBN: So in general, I don’t support Stage 1, mostly because most of the things that are being discussed in the problem statement and throughout the slides in the proposal repo are covered by existing proposals. That said, I do think there are two parts of the proposal that might be worth further discussion. First is the concept of property forwarding, which was very briefly discussed or shown.
While I do think that’s an interesting take, I am not sure that that is something we would actually want. There’s a lot of complexity that makes it almost impossible to be realistic around things like defining how we are accessing these things. How does evaluation work? Are we trying to reify a reference, for example? I am not 100% certain I support that for Stage 1, but I think we might want to have some further discussion about it. The second thing that I think is interesting is this concept of built-in decorators. We have had discussions about this many, many times in plenary. For a while, folks looked at decorators as a way to avoid new keywords: the more we expand decorators, the fewer new keywords we need to introduce in JavaScript, because we can take some of the capabilities we might otherwise add as keywords for a class, object literal, or function expression and say we might be able to do this as a decorator, even if it’s a built-in decorator rather than a keyword. A perfect example from the early discussions when we introduced decorators was around `AsyncFunction`: we could have had an `@async` decorator that turned a generator into an async function, which is how early transformations worked. There’s a lot of potential for built-in decorators. We didn’t put them into the main proposal, because decorators are such a large capability that we wanted to get that in first. There are already a lot of user-land decorators that do things we think cover a lot of use cases. But there are some things we know they can’t do. User-land decorators based on the current Stage 3 proposal can’t do things like modify enumerable, writable, configurable. And you still couldn’t do that with the Stage 1 and experimental decorators used in other languages. This is something we could have built in that can be valuable. There is a discussion about built-in decorators that should come.
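The attribute changes RBN says user-land Stage 3 decorators cannot express look like this with today's tools. A hypothetical built-in decorator (call it `@nonEnumerable`, an invented name for illustration) would subsume the explicit `defineProperty` call:

```javascript
class Point {
  x = 1; // class fields are created enumerable, writable, configurable
  constructor() {
    // What a hypothetical built-in @nonEnumerable on `x` might do; today it
    // takes an explicit defineProperty, and a user-land Stage 3 decorator
    // cannot change the attribute at all.
    Object.defineProperty(this, "x", { enumerable: false });
  }
}

const p = new Point();
console.log(p.x);            // 1 -- still readable and writable
console.log(Object.keys(p)); // [] -- hidden from enumeration
```

Because a built-in decorator is implemented by the engine, it could flip these attributes directly, which is the capability gap being described.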
So whether it’s taking that bit of this proposal, and that is the bit that advances to Stage 1, or breaking this up into two proposals: one for the potential forwarding, which I don’t think is something that would advance; in discussions with others, there’s been a lot of concern that this is not something that will be viable going forward. But a discussion on built-in decorators definitely would be appropriate for Stage 1, although what that is, is up for debate.

LVU: Can I reply?

CDA: Yeah.

RBN: Please do.

LVU: My understanding is that it’s not unusual to either broaden or narrow the problem statement once something is at Stage 1. And I am not sure the problem statement would even change if we were to scope down to aliases and built-in decorators, because the explicit solution is not part of the problem statement. But it does seem pretty reasonable to go with the decorator path. It seems there is strong consensus that we want to solve this with decorators rather than syntax. I think that is totally in scope. And as I mentioned at the beginning, this is an initial exploration. It is totally expected that it might be broken down into separate proposals down the line. I mean, I am new to the process of this committee, but it seems to me that that is totally within scope for Stage 1. Another thing, though, a bit orthogonal: you mentioned decorators cannot change enumerable and configurable. I wonder if they should. Rather than introducing new things, I wonder if that has been discussed. Is that a bug in decorators? Or is it intentional that they don’t do that?

RBN: It’s an effect of earlier requirements from implementers that decorators are not working with descriptors.

LVU: Yeah.
And so it seems to me that, based on the discussion I have heard so far, a path forward might be: we explore what improvements might make sense for the auto accessors proposal for the simple cases, and then this could become an exploration of aliases and built-in decorators to cover the other use cases. That is a good thing, because if you are not introducing new syntax and it’s basically built-in decorators, it means you can make more liberal decisions about what is covered. Like, I had this slide here with all of these; I did not expect that all of these would result in new syntax. But if we are providing built-in decorators, they totally could be covered. So that’s another advantage of going with the decorator route: it can be more fluid, more rich. It could have more capabilities.

RBN: Let me restate my concern. My statement is that I don’t support Stage 1 as is. I think there’s too much in this proposal that will not advance. I do think if you wanted to bring some of the discussion you had around value-backed accessors to the grouped and auto accessors proposal, that is reasonable. Alias accessors deserve more discussion, but not Stage 1; we don’t know what that means based on the limited bit in the slides. So I would not currently support advancing this to Stage 1. The only thing I consider worth advancing to Stage 1 is a discussion of these things you are looking at, composable setters, as potential built-in decorators: whether that’s something that might be reasonable and what the scope of that might be. I think that might be reasonable to discuss for Stage 1. But I don’t think the rest of the proposal is something that I would support advancing at this time.

CDA: There’s a reply. There was a reply; it’s gone now. So I guess there’s not a reply. SHS is next.

SHS: I guess I would support the second problem statement in particular for Stage 1. This is because it goes in a different direction from the existing decorators and accessors proposals.
It doesn’t deal with the question of any built-in accessors. There is value particularly in built-in decorators, because that would be quite useful for tooling: TypeScript would handle them in a known way, in a way it wouldn’t if they were user-land decorators. That’s probably where I would see this going as Stage 1. Again, the problem statement of having well-known, additive ways to modify accessors has value, and I support Stage 1 for it.

CDA: CM?

CM: Yeah. So I am kind of uncomfortable raising this because I was the one who blocked forward motion last time. When you put up your slide about no bikeshedding, I think that was exactly right. And the whole discussion people got into about accessors versus decorators, the whole discussion about the runtime cost of accessors—those are legitimate things to be concerned about later down the road. But at this stage, I think focusing on the problem statement is in fact the correct thing to do. Number 3 in the problem statement, I think, looking at the stuff you are considering here holistically, is right. But I just don’t buy the proposition in statement number 1. This looks like something that unpacks into a tremendous amount of complexity, and I don’t really buy the problem that it’s trying to solve. I am open to being persuaded, but this feels like a marginal problem for which a big solution is proposed. And I need to be sold on the problem statement first, and, at this point, I am unconvinced.

CDA: WH?

WH: From a Stage 1 perspective, I’m unclear as to what the problem statement here is that we’re not already exploring. It seems to me this is asking to explore a problem area which we are already exploring. So I am not sure what to do here.

CDA: KM?

KM: Yeah. I guess I somewhat second what other people have said. The second use case I am not totally convinced of. The first one, I am open to exploring; there are other proposals exploring this as well.
It would be good to work with those. I am neutral on the topic, maybe mildly negative neutral, but certainly not enough to block any kind of proposal on it. And mostly seconding being unconvinced on the second problem.

CDA: RBR?

RBR: Yeah. So with the proposal, I believe it is going to be quite a complex implementation to achieve all that—maybe; I should not say that, actually, as I am not an implementer. I don’t see that we are solving a problem for a wide range of developers who really need something like that, where we have a huge gain from adding this to the language. They are capable of achieving the same thing with different notation at the moment, and when they need something like that, I believe that’s good: it’s an explicit way of doing it. If there were a huge need for it, we would have a huge crowd asking for improvements in that area, and I don’t see that at all. So I don’t believe the second part is, as such, correct. And for the first one, I don’t see a justification either: the downsides of having that in the language, where we already need to have other parts fast, et cetera, and the work that would have to be involved in maintaining it—I don’t believe there is a good justification for adding this or doing this.

CDA: Reply from SHS.

SHS: Yeah. I think we have already seen that built-in decorators can basically solve these problems. And I guess I disagree with RBR’s assertion that there’s a lot of complexity here. That is a low-complexity solution to this. It’s still worth exploring the problem space.

CDA: Philip?

PFC: I disagree with the assertion that a huge crowd needs to be clamoring for this solution. We make changes to the language all the time that satisfy niche cases, but they have good reasons. And I am also not entirely convinced that you can do everything now, in the language as it is, that this proposal would cover.
I need to look into it, but I may be able to provide some examples of things that you just can’t do with introspection, and the real-world case with the GNOME stuff I was talking about earlier. I can’t provide it on the spot now, but I may be able to in the run-up to another plenary.

CDA: All right. MF with +1 to SHS’s comments about complexity and PFC’s about demand.

LVU: I agree that we do solve niche problems all the time, but I was surprised to hear this called a niche problem. Is the pushback basically that the percentage of accessor use cases that are additive is niche? That accessors as a whole are niche? That classes having public data properties is niche? I was wondering where you don’t see the high demand—which part of it is niche, if that makes sense. I didn’t express it very well.

PFC: Are you asking me or RBR?

LVU: No. RBR. Sorry.

RBR: So in this case, I don’t believe that we can express anything really new, and the way I can write the code today is totally sufficient to achieve the goal. I don’t understand where we need to write it differently. Where do we write a different program when we have that in the future? I don’t see that.

LVU: Is this an argument about capabilities versus DX improvements? Because we add DX improvements all the time, and not every new feature is new syntax.

RBR: I sometimes question syntax. I question [inaudible] a lot.

LVU: And the point about complexity was already addressed, yes?

RBR: About the complexity: the decorators are in fact not shipped.

CDA: SHS?

SHS: Yeah. To RBR’s point, I think where I see an actual real benefit here is that tooling benefits from having a well-known solution.
This was the first point about the problem statements anyway: if there’s a well-known way to declare the fields, that could be a benefit here. The fact that you have a well-known way to modify an accessor to make it a validator or whatever else means the tooling could handle this: change the types to treat it appropriately, apply transformers appropriately, all of that, in a way you don’t see if you just use the existing syntax. So I think that is why this is not a niche thing. It would actually be quite useful to have a well-known solution that everyone will use and be on board with.

CDA: KM?

KM: Whenever we talk about how tooling would benefit: there was a proposal that this committee didn’t particularly endorse, JSSugar/JS0, which proposed sugar that engines don’t support but that tooling can all agree upon and transpile down to the JS0 execution layer of JavaScript. And that seems like it would solve the tooling problem without necessarily impacting the execution complexity.

CDA: Sorry. Did you mean to go on? Did we lose KM?

KM: Sorry. That was the end of my…

CDA: Okay. Sorry. It sounded like you might have had another thought. Reply from SHS.

SHS: Yeah. I think you’re supposing the problem statement guarantees or requires that the solution complexity is impacted. There are solutions that don’t, you know, significantly change the complexity. If we stay away from keywords and stick with decorators, I think it’s quite reasonable. And so I still say this is a problem space worth exploring.

KM: Sure. Maybe I misunderstood what you were saying before. I don’t think that just because something benefits tooling is a great argument—obviously, if the solution does not involve execution impact, that’s different in some sense.
There’s plenty of proposals that come in where tooling could benefit from a standardized solution, and the standardized solutions have implementation complexity that implementers are concerned about. I guess I would ask the committee, if that were the case, to reconsider a JSSugar/JS0 world.

LVU: My understanding—sorry.

SHS: So JS0 and JSSugar are worth continuing to discuss, in my opinion. My point about the tooling was more just in response to RBR kind of saying this is a niche thing. I am saying that the tooling benefit is, I think, a good supporting argument, maybe not the only argument.

LVU: Yeah. The main value add was DX and having a standard solution to reach for without having to implement it in every project. My understanding was that a standard solution also generally benefits tooling, because you know what you are dealing with. Tooling wasn’t the only value.

CDA: All right. I am next on the queue. I just want to reply to some of the comments that there are already proposals at Stage 1 or beyond exploring this problem space. Different proposals can explore the same problem space and overlap; some overlap very much, and in some cases I’ve seen three different proposals for the same thing, while others overlap less. I think it’s good feedback that there are existing proposals. And we saw a recent example with first-class protocols, with LVU getting involved in that. But just because we are exploring the problem space already elsewhere doesn’t mean we should block advancement to Stage 1. I don’t think that’s a good justification.

CDA: That is it for the queue.

CDA: So I guess the question is, LVU, given all of the feedback you have received, would you still like to ask for Stage 1 formally?

LVU: In some ways, that depends on what exactly the process allows and what it doesn’t. It seems very clear to me that the committee generally does not see the rationale for changing accessors.
So it seems that is probably better solved in the auto accessors proposal. There’s some interest in exploring aliases, or what built-in decorators could do for the additive use cases, and I think that is totally fine. Like you said, that was my understanding as well: it’s totally normal for proposals to be in that place. Some get split, and parts get joined with others. That’s normal for Stage 1, I thought. But again, I am new to this process, and other committees are very different. As long as it’s okay to narrow down some of the solution space and move part of the work to collaborating on another proposal, as long as this kind of flexibility is acceptable, yes, I would. And it seems to me that a few people said they would support Stage 1, and some people said they couldn’t support Stage 1 but would not actively object to it.

CDA: Right. Yeah. I agree. And we should be very clear about what the committee is signalling when they say things, and make sure there is no ambiguity in this. MF is on the queue: please just clearly state a revised problem statement before asking for Stage 1.

LVU: It seems to me that the main opposition was not about the problem statement itself. Some people said they were unconvinced about parts of it; some were unconvinced by 1, and some by 2 and not 1. Most of the pushback was about whether we should introduce new syntax, which is a question for Stage 2 and is totally fine, and about the fact that a lot of 1 can be handled by the auto accessors proposal, which is also totally fine. So MF, are there any particular changes you would like to see to the problem statement so it resonates with you? I am totally open to revising it. I am just not sure how any of the opposition is actually about anything in the problem statement; it seems to me it was mainly about the potential solutions.

MF: I have no issue with the problem statement. I am not asking for a revision.
I am saying that if there is a revision, before we ask for Stage 1, it should be clearly stated, if it’s anything different from what’s shown on the screen.

CDA: If you want, we do have another topic. If you want to, maybe give it some time and think about it; we have the lunch break, and we can come back to it. At that point the problem statement could be as it is right now, or you might have thought of a slight revision, and we could call for consensus then. But it’s totally up to you. We could do it now as well, based on what you have here.

LVU: Would revising the proposed solutions count as revising the problem statement? That is the main thing that needs revising—

JHD: The process doesn’t give consensus for solutions at this point. If the solutions need revising, that is a pre-Stage 2 thing, not a pre-Stage 1 thing.

LVU: That’s what I thought. I mean, I was planning to not include any solutions in the slides, but I looked at the process checklist again and it seemed to ask for a general shape or something; it had something about the general shape of the solution, so, okay, I had to include something. And then we ended up focusing on that for the discussion.

CDA: I will jump back to the queue. First is KM.

KM: Maybe I am misunderstanding the process, but I think having these two problem statements as part of the same proposal sort of presupposes that you are solving both problem statements in the same proposal. And I think that is in some sense some of the holdup, because that ties the solution space down. If they were separate statements in separate proposals that at some point we decided to combine into one—I guess I don’t know the right process for this—it might be different.
Because with both problem statements part of the same proposal, the solution space we will look through includes solutions to both problems at the same time.

LVU: It doesn’t mean solving them with the same solution. It just means being aware of what solutions would be introduced for each of them and seeing it holistically. Solving both: one of the results might be that we introduce a primitive that addresses 1, a primitive that addresses 2, and another primitive that addresses 1 and 2. We don’t need to find the one primitive that addresses 1 and 2 together. I mean, they kind of do relate, and they kind of don’t: they relate through human factors, effectively, not through technology, in the sense that the more you improve accessors, the more likely people are to use them, and the more likely they are to use them to define their class properties that way. It’s a human-factors argument. It doesn’t mean it has to be the same solution for both, but we need to solve them closely together and have awareness of both. Does that make sense?

CDA: Yeah. You are saying to think of this holistically, at least when exploring the area, which I think is fair. I was on the queue just to say that we have plenty of precedent for larger proposals getting split into multiple ones, or sometimes proposals being consolidated together. So there’s plenty of precedent for that. SHS’s comment in the queue is also to the same effect. And then there is a reply from RBN.

RBN: Yeah. I’d also like to agree that it feels like we’re talking about discrete concerns. While it makes sense to solve them together, it is kind of built into the process that we should be considering cross-cutting concerns between proposals and how they work together. We have a number of proposals where we have done this type of split. For example, the extractor proposal was split off from the pattern matching proposal since it applies to more than just pattern matching.
That said, as I said when I was speaking earlier, it feels like in one case—one of the things you are wanting to address is the ability to compose setters. And it feels like when you look at value-backed accessors, you can make the composable setters work in certain scenarios that, for a case like value-backed accessors—. And I don’t think that if you had one proposal that was, say, alias accessors, I would advance that, because I am not convinced of that. You can break this down into composable setters—saying that you want to have some simplified ways of doing some of these mechanics, for composition of capabilities for setters, as an independent proposal—and I can see that advancing to Stage 1, and figuring out what that problem space looks like and what solution space makes sense, independently. I am not convinced about alias accessors; we don’t have enough to say for Stage 1. There’s not enough in the slides to really be comfortable advancing that. I could see value-backed accessors not advance, since it’s 100% covered by auto-accessors. Alias should be split off and possibly discussed as a separate Stage 1 proposal, with more focus on what that does, to see whether that should advance. I would not support all three advancing—the combination of these three things advancing as a problem statement—but just that specific compositional piece as a problem statement for a Stage 1 proposal, I think I would be perfectly fine with that advancing.
+
+LVU: Is it part of the process that we can resolve that the proposal goes to Stage 1, conditional on splitting the two components into separate proposals? Or is it about splitting it first and then discussing whether it goes to Stage 1? I am totally fine to split it into these two proposals and move the value-backed part into auto-accessors, possibly even before the afternoon session. But at some point there was someone in the queue saying, let’s see if there’s any objection first. 
I would be wary of splitting, only to find that it still—that it still doesn’t go to Stage 1.
+
+RBN: If I can interject, the easy way would be to ask for Stage 1 for specific things—like, discrete things. And whatever advances to Stage 1 we can—
+
+LVU: That makes sense.
+
+RBN: Taking this proposal repo and trimming it down, or splitting off separate proposals just for that, makes sense.
+
+CDA: Let me jump in here. Yes, the answer to the question is yes. There is precedent for this. We have had proposals come in—I think of one recently from JSH that was: here are all the things in here. And people liked some and didn’t like others. But it was very clear—the most important thing is to be clear. You brought this proposal here; folks don’t like some of the things in it but are willing to advance these particular things to Stage 1. As long as that was clear, and repositories were quickly updated and created for that, we have advanced on that basis. But it needs to be unambiguous.
+
+LVU: As long as we resolve, this part goes to Stage 1 and this part doesn’t—these need to be split. Like, then it doesn’t get moved. Right?
+
+CDA: Yes. Yeah. Exactly. All about being clear and unambiguous about what we are doing. There is a—CM is on the queue.
+
+CM: Yeah. I mean, I wanted to say, I am not sure that this is a thing that can be fixed by rephrasing the problem statement. My issue was that I am not convinced that the problem is real enough to warrant our spending time on it, and that’s really what Stage 1 says. It says the committee has decided that we—the committee—want to spend resources exploring this problem. And I haven’t been convinced of that. And it is possible that reframing the statement itself will help make it more convincing. But my problem is more than just—I am not buying into the problem. Whether it’s part 1 or part 2, or together, or separate? I am less concerned about that. I am more concerned about the fact that I just don't see the problem. And I am open to persuasion. 
But I don’t see it yet.
+
+LVU: Would you object to it?
+
+CM: Yes. I feel like we spent an hour and 45 minutes of a one-hour timebox and I think we already spent too much time on this. If the arguments were reframed in a different way, if the arguments in favour were stronger—and I think other people have different perceptions about the need here—I would find that helpful. But I am—as I say, I don’t see it yet. And until there’s a there there, I feel like—this feels like—it has a flavor of a solution looking for a problem, and I am not seeing the problem. As I say, I am open to being convinced, but I am not convinced yet.
+
+LVU: So if I understand it right, your argument is that the current status quo of accessors and their DX is totally fine?
+
+CM: Pretty much. I mean, I hate accessors anyway. And that may be a bias that I am bringing to the table. But then we get into the whole discussion about philosophy of software engineering and I don’t think that’s helpful. But I am just not seeing the problem. And maybe you just need to make the problem more visible in some way. But it all feels very abstract and theoretical to me, and adding a bunch of complexity—and I am not talking about implementation complexity. I am talking about the burden on the user’s mental model. And I don’t see a payoff here. Sorry. That’s just how I am reacting to this.
+
+LVU: I mean, if you generally dislike accessors, that does sound like a bias.
+
+CM: Yeah. But as I say, you can make an argument for this, but I don’t feel like I have heard it yet.
+
+LVU: Do you think this adds too much additional complexity even if it is solved through decorators?
+
+CM: I think the decorator approach has the virtue of being something that if you are not interested in it, you can just ignore it. I think the idea of a standard set of decorators that are built in, itself, completely divorced from the accessor question, is a really interesting idea. 
But as I say, you are trying to avoid getting into the—you know, the decorators-or-not-decorators question. But I like that direction better. But mostly, it’s because it gives you a place to stand to screen out the complexity that is not relevant to one’s own world. And I am concerned about the user model that gets more complicated. I think decorators have enough complexity that they already have some issues in that regard, but they do a nice job of separating the complexity on the inside from the complexity on the outside. The idea is to hide the complexity on the inside, and I like that direction of things. But I don’t want to argue for design here.
+
+LVU: So you said this complicates the user’s mental model. If this is using decorators, how does it complicate the user’s mental model? It is piggybacking on the model.
+
+CM: That’s why I like the decorator model better.
+
+LVU: How do you feel about the auto-accessor and grouped accessor proposals? If you don’t think there’s something to solve in the DX of accessors, wouldn’t that apply to those as well? Or any other accessor proposal?
+
+CM: Perhaps. Perhaps. But those already have a big community of people who are well down the road of trying to figure stuff out. And it might have been that at the beginning, if I had been—you know—a part of that conversation, you know, I might well have raised an objection. But I don’t—you know, I don’t—I don’t see—as I say, I keep coming back to the same question. I don’t buy the problem statement yet. I am just saying, sell me the problem. And I don’t think we are going to do it in the 10 minutes we have left today.
+
+CDA: So I put myself on, and removed myself from, the queue about this. Others have expressed interest in the problem statement and do think that the committee should spend time on it. 
So because of that, I find it a little bit awkward—I don’t mean to put you on the spot, CM, and I apologize—to say those people shouldn’t spend time on it because you are unconvinced.
+
+CM: Okay. That is a fair point. And I guess the fact that we have extended this conversation beyond the timebox is evidence that there is interest. But there is the time of the particular committee members who are interested in the problem, which with most proposals is the champion group—the champions and whoever else is interested in engaging with whatever the proposal is—and then there is the time of the committee as a whole in the plenary; and it’s the latter that I am concerned about. If people are interested in a specific subproblem, and we have many such subgroups in our community of folks working on this, you know, they are certainly more than welcome to dig however deeply they feel motivated to dig into whatever the particular problem is that they are interested in solving. I am concerned about the attention and time of the committee as a whole, in the plenary. And if this was, you know—
+
+CDA: I get it, but, like, very few people are really interested in every single thing that comes up on the agenda—
+
+CM: That is true.
+
+CDA: So you know, we could make that argument about anything. But that doesn’t mean we should not advance a proposal to Stage 1 because “I don’t think we should spend time on it”. I don’t want to cite an example because I don’t want to make people feel like they can’t bring a topic to plenary, but that argument is difficult to defend.
+
+CM: I am just very nervous about piling on complexity, and so I would like to see things be well-motivated, and I am just not there on this yet.
+
+CDA: I understand. All right. JHD?
+
+JHD: Yeah. So I was going to say, historically, it’s near impossible to block Stage 1, because the bar is—this is something we will never be interested in talking about again. 
It’s perfectly fine to say, this will never advance beyond Stage 1 until I am convinced of the problem statement, or until a solution that doesn’t involve X or Y or Z is found, or whatever. We have many Stage 1 proposals that are effectively jailed there. But the Stage 1 signal is that the committee is willing to continue talking about it, if new things manifest to talk about. So I think that the—like, the precedent for our process is that this should, I think, get to Stage 1. And the feedback should be taken and recorded that there are multiple delegates that, A, don’t want syntax, and B, are unconvinced of some or all of the problem statements. And all of those need to be addressed for any potential advancement to Stage 2. And it’s also worth noting CM’s feedback that it would not be a good use of committee time until some of those things have been resolved outside of plenary. I think that’s perfectly reasonable and we have done that many times.
+
+CDA: I agree with JHD’s comments.
+
+CDA: There’s nothing else on the queue. We only have a couple of minutes left before the lunch break. I suggest that we pause for now and take the break. People can have lunch, and my suggestion to LVU is, maybe if you want to, make some clarifications in your problem statement, or not, or potentially what—splitting things up would look like. Whatever the shape of this direction looks fruitful to you, and then folks can have a little soak time and think about this during the break. Then we can come back and we can ask for consensus and see how it goes.
+
+LVU: Sounds good. Should we do that immediately after lunch, or should we go to the Stage 3 proposal review issue, since it is already an hour past its time slot and I feel bad pushing it further? Maybe we can do that after lunch and come back to composable accessors.
+
+CDA: I think that’s fine. Is PKA right here now? Peter, does that sound good to you?
+
+PKA: Yeah. Either one is fine with me.
+
+CDA: Okay. Sure. 
That sounds like what we will do. All right. We will see everybody back here in about an hour and 2 minutes. Thank you.
+
+LVU: Thank you.
+
+### Speaker's Summary of Key Points
+
+* Followup to class fields introspection from the Nov 2025 meeting: if arbitrary fields should not be introspectable, how can classes opt in to declaratively expressing their public data properties in a way that is?
+* Related problem statement: Improve DX for common “additive” accessor patterns (validation, normalization, etc.).
+* Many delegates argued that grouped and auto-accessors cover the first problem statement and did not see the issue with the current `accessor` framing of basic auto-accessors.
+* Strong resistance to new syntax; preference for exploring built-in decorators instead.
+* Some concerns about performance, mental model clarity (don’t hide user code on get/set), and descriptor complexity.
+* Broad agreement that scope may need to be split or narrowed before advancement.
+
+### Conclusion
+
+Conversation continued later in the day, see below for conclusion.
+
+## Stage 3 Proposal Review (Stage 2/2.7 time permitting)
+
+Presenter: Peter Klecha (PKA)
+
+* [proposal](https://github.com/tc39/proposals#stage-3)
+
+PKA: Thank you, MF and JHS, for taking notes. So I’m PKA, from Bloomberg, and this is the TC39 Stage 3 proposal review. There’s only one eligible 2.7 proposal, so we’ll do the 2.7 review for sure, and if time permits, we’ll also do the Stage 2 review. The goals here are to hear some updates about each proposal every so often. Ideally, for proposals where we need to identify next steps, we can do that: encouraging implementation work, identifying issues to fix in the proposal, adding champions. This is totally committee-driven. I don’t have anything to present about these proposals. I’m going to talk about proposals, and I would like to hear champions, if they’re present—or other interested parties if they’re not—give updates if they can about the proposal. 
If the update is, you know, “this proposal is active, work is ongoing”, that’s totally fine. One sentence is great. We can move right on. In the past, though, we have unblocked proposals, and we have identified new champions for proposals. Sometimes people get a little nervous about this. We do sometimes—this process sometimes does result in proposals being withdrawn, but this is not an adversarial process. Nobody’s proposal is going to get withdrawn without their enthusiastic consent. So please don’t worry about that if you are a proposal champion.
+
+PKA: Okay, so some Stage 3 proposals we have heard from recently and, therefore, don’t need to discuss, because the committee has heard from these recently: Temporal, Intl Era/Month Code, Decorators, Explicit Resource Management, import defer, non-extensible applies to private, joint iteration, immutable `ArrayBuffers`, as well as those that already advanced to Stage 4.
+
+PKA: And also Stage 2.7 proposals we have heard from recently: ShadowRealm, iterator chunking, import bytes, await dictionary, and iterator join. So let’s just dive in.
+
+### Legacy RegExp features in JavaScript
+
+PKA: Our first proposal is an old one, legacy RegExp features in JavaScript; champions listed are Mark Miller and Claude Pache. This proposal has not been presented since the last time it was discussed in one of these reviews in July of ‘23, and was last actually presented, I think, in May 2017, although that information might be out of date. What was said at the time was:
+
+“Implementors have lost interest in implementing this proposal, and possibly the champion group has as well. CM was asked to reach out to champions' group.”
+
+So if we have CM, I’m wondering if—
+
+CM: This was in July of 2023. So I certainly don’t remember any of this. I suspect that at the time, what that was, is I’ve been working regularly with MM, and I was just—he wasn’t present, and so I was to call this to his attention. I probably did, but I don’t remember. 
I think MM might be here. And you can ask him.
+
+PKA: MM, do we have MM?
+
+CDA: MM is here.
+
+MM: I’m here. Can you hear me?
+
+(multiple): Yes.
+
+MM: Okay, yeah, so I thought I saw something from one of the browser makers, maybe Firefox, about trying to phase out these legacy—these are the weird static things that get updated every time you do a RegExp match. It’s always recommended against using these, and simply—whether you use them or not, the fact that they’re there means that every RegExp match is slower in engines than it could be, because every RegExp match has to record the results of the match in such a way that you can fetch the results of the latest match from the static properties. If—so what stage did this get to?
+
+PKA: Stage 3.
+
+MM: Stage 3. Yeah, I don’t want to withdraw this. I would like to see what can be done to phase these things out. But I mean, frankly, it’s not a priority for me either since, you know, the hardened JavaScript shim, the SES shim, is already able to do this by replacing the constructor. But I imagine that the speed improvement available if this is phased out would be of interest to implementers of high-speed engines.
+
+PKA: So I know, having reviewed the notes from the previous review, that there was pretty strong signal from the implementers that they were not interested in going forward with this. I wonder if any implementers are currently present who can either recall those sentiments or maybe just have a new take on this that they’d like to offer.
+
+DLM: I’m on the queue with that. So, yes, we weren’t that interested in this originally, but we had a volunteer contribute an implementation for this as part of the AVI coding program, so that is something that has now landed, and we haven’t shipped it yet, but I imagine we will ship it eventually.
+
+MM: Oh, so that’s great. And what you’re—what you implemented and might ship eventually corresponds to this proposal?
+
+DLM: Yes, yeah. 
Yeah, I didn’t review it, so I don’t know if we’re able to do 100% of what was in the proposal or not. I expect we were, but—
+
+MM: Okay. And I imagine the main reason for resistance was compat risk. If Firefox succeeds at shipping this, that establishes that the cross-browser web does not depend on this to the degree that we should consider ourselves stuck. So, yeah, given that news from Firefox, I’m definitely keeping this on the table.
+
+PKA: Okay, great news.
+
+CDA: MM, you’re on the queue again. Did you have another question?
+
+MM: No, no, that was it.
+
+CDA: Okay, OFR.
+
+OFR: Yeah, so I think I added a counter for this just recently, but now I can’t find it. So we might have data about whether this is still used. Still looking for it.
+
+MM: So in the absence of seeing a counter, do you have any memory of what your impression was?
+
+OFR: No, I don’t at all.
+
+MM: Okay. Presumably if the counter is high, that would cause Firefox not to ship it as well.
+
+CDA: All right. That’s it for the queue.
+
+PKA: Thank you, guys. I think that suffices for an update. Thanks a lot, Mark, and others.
+
+### Dynamic Code Brand Checks (pt. 1)
+
+PKA: So moving on, we have dynamic code brand checks; we haven’t heard about this since April ‘24. Champions include NRO, MSL and KOT. Any updates from the champions?
+
+MM: Could one of the champions remind us of what this is, just a brief summary?
+
+CDA: So Nicolo was here, and I’m not sure if NRO's here right now or available.
+
+MM: Okay, could somebody who knows—
+
+CDA: NRO is on mute. But he might be AWOL—AFK, not AWOL.
+
+MM: Does anybody know enough to give a brief summary?
+
+CDA: I’m quickly pulling it up.
+
+PKA: NRO says he can speak in 30 seconds.
+
+CDA: The TLDR is: allow hosts to create code-like objects, and extend the host hook that lets hosts decide whether strings can be compiled. That’s another overload, I guess.
+
+MM: I got it. Right, right. Right.
+
+CDA: Motivation is: “eval is evil”. 
+
+MM: Yeah, this one has interesting security properties, and it actually does enable some new security patterns. I’ll wait to hear from NRO.
+
+MM: Should we go ahead with DLM while we wait for NRO?
+
+DLM: And we shipped this in Firefox 135 as part of our trusted types implementation, and I thought all browsers had now enabled trusted types, so I think this might actually be ready for Stage 4. I’m interested to hear from the champions.
+
+PKA: We can circle back to this. Nicolo needs a little more time. But that’s a nice update from DLM.
+
+### Atomics.pause
+
+PKA: The champion of this most recently was SYG. I’m not actually sure of SYG's status in the committee. It was last presented in October 2024. Does anybody know about the status of this?
+
+CDA: Do we have anyone from the Google delegation here right now?
+
+MM: OFR is here.
+
+SHS: As am I.
+
+OFR: I don’t have—I don’t know what the status is.
+
+KM: I don’t know if we—it could also be that we at Apple have taken over championship of this. Let me run it by SYG. But I don’t think this is dead.
+
+KM: It’s shipped everywhere and basically needs to be brought back for Stage 4, which sounds like something we could do.
+
+DLM: Yeah. It’s everywhere except in some of the server runtimes, but, yeah, all the major browsers have it.
+
+PKA: Great. So I look forward to seeing Stage 4 advancement on the agenda for next meeting for this. That’s awesome.
+
+### Source Phase Imports & ESM Phase Imports
+
+PKA: Next we have Stage 3 source phase imports and also Stage 2.7 ESM phase imports. Both of these are championed by Guy Bedford and Luca. I imagine that the status of these is that they are very much active and a part of the sort of bottleneck of import- and module-related proposals, but if Guy or Luca has a comment they’d like to add.
+
+CDA: Neither are here.
+
+PKA: Or if anybody from the committee would like to comment.
+
+MM: Is KKL here?
+
+CDA: No. 
+
+MM: I’ll—so there is, as PKA mentioned, this larger Module Harmony effort, and this—these were definitely consistent parts of that; all of the issues with regard to the larger Module Harmony effort had been worked out and looked good. So, yeah, I would keep both of these.
+
+PKA: Yeah, definitely. Just, again, to reiterate, there’s no world in which we’re withdrawing proposals that are at this stage without input from champions, unless they’re really old.
+
+DLM: Quick update: we’ve started work on source phase imports, and we’re planning on ESM phase imports for later this year. Source phase imports is important for us, and we’re also looking at WASM integration.
+
+PKA: Awesome. Great to hear. Okay, so that’s—those are all of our Stage 3 and 2.7 proposals that we haven’t heard from recently. It sounds like they’re largely all hurtling forward at excellent speed, even though we just haven’t seen that in the committee. Seems like we have plenty of time to talk about Stage 2 proposals, which might not quite have the same status. It’s good for us to talk about them.
+
+PKA: Proposals that we have heard from recently: we heard about deferred re-exports. We heard about
+
+### function.sent
+
+PKA: I’m wondering if we needed to circle back to that at this meeting, because I think it received a conditional resolution—and the conditions weren’t explicitly met.
+
+MM: We just need to ask JHX.
+
+CDA: I can provide the update there, because this happened in the delegates Matrix channel, where JHX has been chatting yesterday and today quite a bit. JHX does not want to withdraw the proposal. I don’t recall the other comments made about it. Oh, and also, the information about when it was last discussed was out of date. It had been discussed at some point in plenary before, I think, whatever JHD had said. But—so, yeah, the TLDR is it’s not being withdrawn, and I think you can reset the days since we’ve heard from the proposal back to zero on that one.
+
+PKA: Okay. 
I do see—I think I see JHX in the meeting.
+
+CDA: Oh, JHX is here today? Yeah. JHX, if you want to chime in, feel free to, if you are there. Unmute yourself, but otherwise…
+
+JHX: (in chat) I don't have a mic.
+
+CDA: Yeah. Okay, well, feel free to—if you have any comments about `function.sent` that you didn’t already make in the delegates channel yesterday, feel free to add them here or there.
+
+PKA: So then, yeah, then we’ve got a number of other proposals we have heard from. This—I should say last year, 2025. So let’s now see some proposals that we have heard from less recently, in 2024. So first there’s
+
+### Iterator.range
+
+PKA: This is a proposal to add a range helper method to the standard library. Is JWK here to comment on it?
+
+CDA: No.
+
+PKA: I thought I saw him earlier. That’s too bad. Does anybody from the committee want to make a comment? Okay.
+
+MF: There has been some recent discussion in the issue tracker. I think JWK plans to continue with this pretty soon.
+
+PKA: Okay, that’s great.
+
+DLM: Just to chime in quickly on `Iterator.range`: we had an intern looking at this last year, so we have a, you know, semi-complete implementation in SpiderMonkey just waiting for open issues to be ironed out. So that will be something that we would be interested in seeing advance soon, if JWK has the time to work on it.
+
+PKA: Awesome. That’s great.
+
+### Discard (void) binding & Extractors & Structs & throw expressions
+
+PKA: We have discard (void) binding; so this is like if you want to—I think it’s most useful in a `using` case where you want to use `using` on something, but you don’t actually want to make a binding, so you can put `void` instead of another binding name. It can also be generalized to other cases like function parameters. RBN, are you able to give a comment?
+
+RBN: Yeah, so this and the other proposals of mine that are on this list—I just haven’t had time to look at them last year with the job change. 
I still plan to work on and advance discard bindings, and I can also speak to structs. I know SYG hasn’t had time to be involved with that proposal; I haven’t in the last few months. There were a number of discussions going on for structs last year, and the work for things like Atomics.pause, and the work that MM was pushing for the ability to prevent mutation—sorry, preventing installing private fields using the super constructor return hack—all those things were actually very strongly related to the structs proposal, and with SYG not being part of it, I have not had time to get back to it. I plan to get back to it this year, and I need someone from implementors to be involved. A lot of the forward momentum was strongly the result of SYG’s direct involvement as a member of the V8 development team, so we would need someone there to help on that side as well to continue that. And then extractors—I still plan to work on and advance that. We’re having some discussions in the pattern matching champions group; we still need to reschedule our meeting, because we now have some meeting conflicts, and get everybody back up to speed on that, because the proposals are very strongly tied together. Throw expressions is something I need to get back to. The last time it left off was a discussion about whether it could be an `error.throw` method, and I have concerns and I haven’t had the time in the last year to write down my concerns and come back to it.
+
+PKA: Okay, great. Thank you for all those updates.
+
+CDA: There’s a question from MM.
+
+MM: Yeah, I have a question for RBN. So as the structs discussion proceeded, the idea of shared structs versus non-shared structs—the non-shared structs have a much easier—would have a much easier time of advancing. Are they separate proposals, or is structs still one proposal?
+
+RBN: Structs is still one proposal. 
We might need to talk offline, because I’m not 100% clear on what you meant by non-shared having an easier time advancing, only because from my perspective, the shared version is the more important version of the proposal. But this is—I think we should have this discussion more offline as well to figure out where that’s at.
+
+MM: Okay.
+
+CDA: That’s it for the queue.
+
+PKA: Great. Thanks. RBN, is it fair to say that you are a champion of structs?
+
+RBN: I was, I believe, considered a co-champion for the structs proposal for the majority of it, so I’m most likely currently the sole champion, and looking for a co-champion.
+
+PKA: Are any implementers ready to just throw their hat right in the ring right now? Probably not, but just thought I’d ask. Okay, we’ll let implementers, especially maybe the V8 team, meditate on that. We also have—I should also say, are there any questions from the committee for RBN about any of his proposals?
+
+CDA: You’re including extractors and throw expressions?
+
+PKA: Yes.
+
+CDA: I do have questions for RBN about the RegExp.
+
+PKA: I think we’re going to make it to there.
+
+CDA: Okay. All right, I’ll wait. I’ll be patient.
+
+PKA: Yeah, okay. So then I’ll turn to
+
+### Propagate active ScriptOrModule with JobCallback Record
+
+PKA: from CZW. I don’t see him at his desk behind me. Okay, so that’s a proposal relating to web compatibility with something about promises. I’m forgetting off the top of my head how that works. Does anybody on the committee have a thought about this proposal?
+
+CDA: I’m struggling to recall what that proposal is.
+
+PKA: I can—I think I have it here somewhere. Yeah, here it is. \[reading from proposal repo] “avoid revealing internal slot `[[PromiseState]]` with `Promise.then` … to the promise handlers by host hook requirement.”
+
+NRO: Yes, I do have opinions here.
+
+PKA: Oh, yeah. 
+
+NRO: The proposal \[INAUDIBLE] currently, and we say something and then HTML violates what we say, so we should still keep that discussion open.
+
+PKA: Got it. Excellent. Thank you, NRO.
+
+NRO: And I mean, it’s low priority for everybody, but it would be great for our spec at some point to solve it.
+
+PKA: Great. Our last proposal on this page is
+
+### isTemplateObject
+
+PKA: I think JHD is the sole remaining champion. JHD, are you able to comment?
+
+CDA: JHD, we can’t hear you. You are muted.
+
+JHD: Sorry about that. My connection—I couldn’t unmute on the other device. So I was going—I joined this proposal to help DE with it. DE is apparently no longer—or not currently—involved with TC39. The—I’m still—if there’s interest in this proposal, I’d really like a co-champion to help me with it. I don’t particularly care about trusted types, which I understand is the motivation for it, but I am interested in doing—doing things in the language so that web browsers don’t have to do terrible things on the web in order to achieve their goals. And I want them to be able to achieve their goals without doing terrible things. So that’s sort of where I’m at: I’m happy to help, but I don’t want to drive this myself. Is there anyone who is interested in seeing this advance?
+
+CDA: I’m interested in seeing it advance. I don’t know that I’m interested in co-championing.
+
+JHD: Fair. It’s good to have that on the record, too, if there are folks interested in seeing it advance but who, you know, don’t want to be volun-told to be a champion.
+
+CDA: Nothing on the queue, so we’re going to take that as not yet. Maybe somebody who’s not present today would be interested.
+
+### Dynamic Code Brand Checks (pt. 2)
+
+PKA: I notice now that we do have NRO. I’m wondering if we can cycle back to—
+
+CDA: Nicolo added a comment in delegate chat.
+
+NRO: I can say it out loud to get it in the notes. This proposal is about trusted types; it’s the changes we need to enable trusted types on the web. 
Trusted types are, I think, almost implemented in all browsers. And I believe it’s working, but I’d say that it’s not ready for Stage 4 yet. Probably it’s not unflagged in all browsers yet, or something like that.
+
+PKA: Great. And then I think we also—was that the only one? Okay, maybe that was the only one. Okay, great. Okay, so now moving to some further Stage 2 proposals presented a little longer ago. We have
+
+### Module Declarations & Module Expressions
+
+PKA: which again I think I can say are, like, fully active and just bottlenecking a little bit in the module space. I don’t know, NRO, if you want to add an additional comment there?
+
+NRO: Yes, still interested in those, but they are kind of on pause for me until the current module proposals I’m working on are at a later stage.
+
+PKA: Any questions or comments from the committee about those two proposals?
+
+CDA: Nothing in the queue.
+
+PKA: Next we have
+
+### `JSON.parse` immutable
+
+PKA: I think I can summarize this one as well. This is sort of a sister proposal to records and tuples, which was withdrawn but has been replaced by composites, which ACE is working on, so I imagine this proposal is just kind of in a holding pattern waiting on composites as well. I don’t think ACE’s here. NRO, do you have an additional comment on this?
+
+NRO: No. To be honest, I forgot I was working on this proposal. Maybe ACE has more, as he is the one working on composites.
+
+PKA: Cool. Any questions—I mean, I’m not sure we have anybody to answer questions, but any comments from the committee?
+
+MF: I actually think this shouldn’t be blocked on composites. The goal of composites is different. Records and tuples was an immutable data structure for holding data that you want to get at later. And composites is about creating a structure that can be compared to another similar structure, which, you know, is, like, unlikely to be coming from JSON data. 
So at this point, I don’t think that those proposals are related, even though composites superseded records and tuples.

PKA: I’ll ask ACE to weigh in on this again. Next we have

### Symbol predicates

PKA: JHD, I believe the last proposal review mentioned needing to think through the argument, or come up with a convincing use case. JHD, do you have a new comment on this?

JHD: Sorry, I just stepped out. What was the question?

PKA: Symbol predicates.

JHD: Oh, I’m highly interested in continuing to advance this proposal. The main sticking point, if I recall (because I had to page it back in), was that there are two predicates in the proposal. One of them nobody has a problem with, and for the other one, SYG, who at the time was representing V8, expressed—and I think he wasn’t the only implementer with this concern, just the voice of it—a desire for more compelling motivations and use cases. While I could certainly split them up, it felt like it would be a better package if I could come back with a better argument and move the two together. So, as soon as I can, I’m going to try to come back with either better arguments or split-up proposals. But I would still like both predicates to proceed.

PKA: Great. Any questions or comments for JHD?

CDA: Nothing on the queue.

PKA: Great. And last in this section is

### String.dedent

PKA: This is a proposal for a standard library method to dedent text, that is, to remove the common leading indentation from a string that presumably consists of code, for better DX when representing code in string form in JavaScript. This was discussed at the last proposal review:

JRL: About String.dedent: It was championed by PayPal, who is no longer a member. There are no current blockers, I just have not written the test262 tests to get this to 2.7.
DE: You don't need test262 tests for Stage 2.7, you need those for Stage 3. So let’s propose that for 2.7.

JRL: I can do that for next meeting.

PKA: That didn’t happen. I don’t think I’ve heard JRL say he’s no longer a champion of this proposal, and I’m just wondering—since we don’t have Justin here, I don’t think—is anybody interested in maybe joining Justin as a champion on this proposal? It sounds like it’s sort of advanceable.

CDA: There’s a reply from LVU. LVU says: strong support, this is also super useful for syntax highlighters. And then there is Nicolò on the queue.

NRO: In discussions much more recent than 2024, I think the summary was that we talked about a blocker being how TypeScript transpiles template literals, which has some performance effect on this proposal. Maybe that blocker doesn’t exist anymore and this can advance. And maybe somebody else can pick this up. This is just from discussion with other people a few months ago.

CDA: That’s it for the queue.

PKA: Thanks for the update, NRO. Is anybody interested in helping out with this proposal? If so, reach out to JRL.

PKA: Moving on, we have our eldest proposals in Stage 2. The first one here is

### Dynamic Import Host Adjustment

PKA: whose champion is KOT. And NRO, I think you had an update on this one.

NRO: Yeah, this was the precursor of the dynamic code brand checks proposal, I believe. I think this has been inactive for years at this point, since 2022 or so. But the people working on this proposal were the same people working on the brand checks proposal that’s now implemented everywhere, so we can probably withdraw this one.

PKA: Great. KOT isn’t here, and we can’t withdraw a proposal without the champion present. Do you know, NRO, if that’s his view as well?

NRO: I do not even know who this person is.

SHS: I can reach out to him.
He’s at Google.

PKA: Okay, great. Thank you, SHS. So hopefully, if we can get KOT's confirmation on that, then we can withdraw this proposal. Next we have

### RegExp buffer boundaries

RBN: Yeah, I plan to come back to this one as well as some of my other RegExp features. I spent more time trying to get the modifiers proposal through to Stage 4, and then a bunch of priorities changed, but this and a few of my other RegExp proposals I’m still interested in spending some time on. Hopefully I’ll have some time to do that this year.

PKA: Great. Glad to hear it. Any questions or comments for RBN from the committee?

CDA: Yeah, I’m on the queue. Please land this as soon as possible. I know you do so many things and everything, but we really need this. The JavaScript ecosystem continues to be plagued by crappy regular expressions that cause security issues, and this is a security feature that would really help us out big time. So, you know, I obviously can’t tell you what to do, but, yes, this would be really great to land sooner rather than later. And also, if you are welcoming any help with advancing it, we can also ask the committee if there are any folks interested in helping out, because I know that you want to have time to do all the things.

RBN: I am curious, when you talk about worrying about security vulnerabilities, I’m assuming all the various RegExp CVEs, whether buffer boundaries is a higher priority than something like atomic operators, which tends to be a better solution for a fairly large chunk of the CVEs that I’ve seen.

CDA: Yeah. I mean—

RBN: I mean, I can see both.

CDA: I haven’t put any thought into that, but I don’t assume they’re mutually exclusive. Sorry, you said atomic operators. This is related to what proposal?

RBN: It’s a Stage 1 proposal right now.
The RegExp atomic operators proposal, it’s something that is supported in most other RegExp engines. It allows you to put a trailing—I believe it’s a trailing question mark—not a trailing question mark. I’ll have to go back to look at the proposal. It’s a syntax that allows you to specify that an operator is atomic, which means it either matches or fails, but doesn’t backtrack. And it’s—

CDA: Okay.

RBN: Sorry—many regular expression CVEs are related to a RegExp that spends a significant chunk of time scanning something, then scanning N number of spaces, hitting the end, failing to match the end trigger, and then going back, advancing, and trying again. Atomic operators allow you to prevent that. The proposal repo has an example of such a CVE, I believe, from the last time it was presented. So, yeah, I plan to look at both of those as well as X mode and some other things.

CDA: Yeah, right. No, definitely: the catastrophic backtracking, the non-linear regular expressions, are the category. I will take a look at this proposal. I think this might be a good subject for us to talk about in TG3 as well. And if there’s a clear “let’s land this one before that one”, if that’s helpful feedback to get, we can try and answer that question. But I haven’t given that much thought because I have not really looked at this other proposal. But I will do so. That’s it for the queue.

PKA: Great. Next we have

### Destructure Private Fields

PKA: At the last proposal review, JRL indicated he is no longer championing this proposal. It was kind of blocked: DE had a concern about this proposal potentially infringing on the syntax for a possible object literal private field proposal, and DE was interested in bringing that to committee in order to show that they are not incompatible. Of course, DE is no longer on the committee.
So I would just say, if anybody’s interested in stepping forward to champion this, it’s certainly not a requirement that you abide by, you know, DE’s interests there. Is anybody interested in stepping forward to champion this proposal? \[long pause]

PKA: Does anybody kind of like this proposal, even if they’re not interested in championing it? \[long pause]

PKA: Should we withdraw this proposal?

CDA: There is nobody on the queue.

MM: I would prefer to see this withdrawn. This is Mark Miller.

PKA: We have—and also on the queue, JWK.

CDA: Pardon?

PKA: I saw on the queue JWK saying “I like it” and then it disappeared. JWK maybe doesn’t have audio.

CDA: Got you. Disappeared from the queue. I’ve not looked at this proposal in a while. Oh, you said it’s without a champion.

PKA: That’s correct.

CDA: But the proposal has JRL's name on it.

PKA: It does. Proposal repos are often out of date, unfortunately. At the last proposal review, JRL said he is not working on it.

CDA: I see. Okay, well—

PKA: This is an open question for the committee.

CDA: If Mark wants to put it on the agenda for next plenary to withdraw, that could be the next step there.

MM: I don’t care that much. And given that JWK likes it, I’m not going to push.

CDA: Oh, okay.

PKA: Okay. Hopefully we’ll find a way to move forward with this proposal in some way or another. Next up we have

### Pipeline Operator (pt. 1)

PKA: I have some champions listed here. I don’t have high faith in this being the correct list of names. On the repo itself it says the list is incomplete, so I’m not sure what to do with that information. Is anybody here able to speak to this proposal? RBN, maybe?

RBN: So, yes and no. As far as where the proposal stands right now, I’d have to get more discussion from TAB.
I haven’t been directly involved with the proposal in a bit, and the last place I believe we left off, or what’s been causing the longest delay, has been around dealing with a topic token. The reason I don’t really want to speak to the state of the proposal is that since it got to Stage 2 with the current design, I’ve been more of a conscientious objector, because I still don’t agree with the use of Hack-style pipes. Mostly I’m involved as a co-champion right now to talk about my concerns and continue to make sure that we’re at least going down the right path. So I can’t really speak to much more on the proposal beyond that. We’d have to get TAB or JSC or someone involved to also speak to their side of things.

JHD: I can add some color, if that helps. Essentially, there are a few people that don’t like what RBN talked about, the pipe style. There are a few people that have been historically unconvinced that this use case is worth syntax, and that is a battle that would have to be fought to advance farther. Additionally, the specific choice not of the operator token necessarily, but of the placeholder token, is a bikeshed that has yet to be painted. And there are a lot of folks who also want the ability to name it, and other folks who oppose the ability to name it, and so on. It’s still, I think, for me, a very important use case. I really hope it advances. Although I prefer the current style, I don’t much care which style. But there are a lot of obstacles, I think, before it could advance.

PKA: JHD, when you mentioned those disagreements, are you saying within the champion’s group?

RBN: Possibly, but certainly within the committee.
PKA: Okay, but a disagreement in the committee doesn’t prevent it from being presented to the committee, and a disagreement in the champion’s group doesn’t—

JHD: Technically that’s true, but strategically it’s unwise to bring a presentation and propose something when you’ve already heard in plenary that it’s not acceptable, until you’ve side-channeled with all those people and come to some sort of détente that makes it worth spending more plenary time. If they came back and did an update, it would probably be rehashing the same arguments from last time, and there’s not really much value in that.

PKA: I guess what I'm saying is, if there’s a disagreement in the champion group about how to proceed, wouldn’t it be useful to come get feedback from the committee, and maybe find out if there’s an option that’s not acceptable to the committee?

JHD: Certainly, and from my outside understanding, I’m not aware of any disagreements within the champion group that do not also exist in the wider plenary group.

RBN: Yes, that would be a correct statement. One of the main concerns right now in the proposal as it stands, as I mentioned, is the use of the topic token, or placeholder token. That’s something that’s also been discussed in plenary, and there has not been a solution to this. My secondary concern is that I still believe that F#-style pipes are better than Hack style, as they do not have the issues we’re discussing, though they have a whole different set of issues. And we haven’t been able to make much forward progress on the Hack-style version of this proposal, due to the topic token being a major concern and due to the fact that there really hasn’t been much discussion between champions in quite a while on this. A lot of it’s been discussions in the issue tracker on what topic token to use, and no consensus yet. And there’s no advancement because there hasn’t been consensus within the group on moving forward.

PKA: Do you know who the champions are?
RBN: As far as I know: myself, TAB, and JSC. TAB commented in the Matrix chat back in October or so, but we don’t have a final solution yet.

CDA: Question from the queue: is the champion group even meeting lately?

RBN: No, we have not had a meeting in quite a while.

CDA: All right. That’s it for the queue.

PKA: Okay. I think it would be worth thinking about what the committee should do in this situation, where a proposal seems kind of irrevocably stuck, and I’m not saying it should be withdrawn. There are a lot of people interested in this proposal who really want it, and it seems unfortunate that it is quite stuck.

JHD: There’s also not much of a cost in just letting it sit there. In other words, if the signal that we want to send to the wider world is that this is still something that people are interested in and that has champions, then I think sitting at whatever stage it’s at is the correct signal. If the issue is that no one wants to give up the ghost and no one wants to do anything to advance it, then that would be a different story. I’m not sure if that applies here or not.

CDA: Just a quick—RBN’s on the queue with a point of order that he needs to step away for a few minutes. RBN, would you prefer we pause on this topic and come back to it when you return?

RBN: Yeah. Hopefully I won’t be too long. I apologize.

CDA: Let me capture the queue as it stands and move on to the next topic, and we will return when Ron is back.

PKA: Next up we have

### Function implementation hiding

PKA: The last update we heard about this from MF was that it was blocked on TG3. MF, are you able to comment?

MF: Yeah. I did not prepare anything on this, but my recollection is that this was stalled for kind of two reasons.
One is that in order to actually specify something like this, we need the spec to have a representation of the stack frames that are to be elided, which was being done as infrastructure as part of the error prototype stack getter proposal. So it was kind of waiting for that. But there was also somewhat of a pushback from the committee when it was last presented, and I don’t recall the exact nature of that feedback, but I do remember it wasn’t the case that 100% of everyone was on board and it was only waiting for that infrastructure. So it may also need some convincing of some people who had issues with it. But that’s my best recollection, off the top of my head.

JHD: And I put myself on the queue to talk about stacks. I’ve already telegraphed this, I think, but just to be clear again: as soon as I finish the HTML integration for the stack accessors, then I can bring it back and ask for the next stage advancement. I’m hoping to do that at the next meeting; I have to find time between now and then. Assuming that it does advance, the next thing I plan to do is come back with the broader stack proposal, which specifies structure, but not contents, and which will not require any browser to do anything to change their stack traces. That’s by design. And see if the current makeup of the committee is willing to advance that without me having to boil the ocean of specifying the contents. I suspect that that proposal will unblock that aspect of Function implementation hiding and potentially a number of other error-related proposals. As before, for the broader proposal, if anyone plans on blocking, please tell me in advance so I can talk to you and figure out if it’s even worth the time bringing it back. Thank you.

CDA: Yes, MM, go ahead.

MM: Yeah, so RBR’s proposals with limit and, I forget what the other one was called.

JHD: Frames above.

MM: Thank you.
They have the same elision properties: they specify which frames are elided without specifying anything about the content. We were willing to consider that for RBR, and I think we should be willing to apply the same philosophy when we’re considering it here. And I also agree with JHD that once the stack accessor proceeds, it’s a natural time to try to create more spec machinery in order for both RBR’s proposal and this one to proceed.

CDA: TG3 would welcome continued discussion on it.

CDA: MF?

MF: Yeah. This was actually something I was thinking about earlier in this meeting. We had the proposals from RBR that were affecting which frames show up, and I was assuming they would be similarly blocked, as this proposal is, in that they couldn’t actually describe which frames to omit without that infrastructure being there. I use the term blocked as in “this cannot make progress”, not blocked as in “somebody is opposed to it”.

CDA: That’s it for the queue.

PKA: Great. The last proposal for the day besides Pipeline is

### Collection normalization

PKA: This was championed by BFS, who is not in the committee anymore, as I understand it. This is a championless proposal.

MM: Can somebody give a brief summary? I don’t remember this at all.

PKA: Here, I can try to bring up the…

JHD: I can. I am in the car, so I hope it comes through. Essentially, it’s, like, hooks that let you alter the way that values are checked for presence. And even though it’s currently championless and I am not at the moment stepping up, I would very much like to not withdraw this proposal. It’s something I am considering championing in the near future, because I very much want to see it happen.

MM: Okay. Good. Thank you. And yes, I have positive feelings for this as well.

PKA: Okay. Great. Hopefully—

SHS: Is this covered by composites?
PKA: We don’t have ACE here, so—I do remember this being mentioned in the context of composites.

JHD: There’s conceptual overlap, but composites is unlikely to subsume it, and both can coexist.

MM: I would rather one proceed than both. They are too close to each other.

JHD: That’s also fine, if that’s how it plays out.

MM: Yeah. I don’t have a strong opinion about which one should win, but I don’t want two mechanisms.

PKA: Thanks for that feedback. Hopefully we will gain clarity as composites moves forward, or we get a champion for collection normalization.

CDA: Let’s return to the

### Pipeline Operator (pt. 2)

CDA: SHS was on the queue, asking: is there anything we can do to help kick-start this discussion again?

PKA: I wanted to respond to JHD. I don’t know if I'm supposed to put myself on the queue as the presenter.

CDA: No. That’s fine. Sorry.

PKA: Okay. So I just wanted to say: JHD said something to the effect of there being no cost to letting this sit at Stage 2, and I wanted to say the cost would be that people really want to see this feature, and the question is, is the current situation causing us to not get a feature that we might otherwise get? Maybe that’s—

JHD: If a proposal is stuck and then that proposal is withdrawn, people are likely to try to bring a new proposal, and then we have to rehash the same discussions and reasons why it is stuck, and it won’t make progress either. So it seems better to me to conceptually keep the issue open rather than close it and deal with 50 duplicates.

PKA: I am not suggesting closing it. I think your comment was in response to me sort of musing about whether there is something the committee can do?

JHD: No, no. I’m sorry. That comment was about there not being a cost to leaving it active. I am very much on board with bringing things up for review like you are doing right now. That’s always great.

PKA: Sure. Sure.
CDA: So yeah. Then we have SHS asking about anything we can do to kick-start discussions. That’s directed to RBN.

RBN: Right now, I think the big thing will be trying to get a new meeting scheduled and trying to get whoever is still planning to be involved as a champion on board. We have been having similar issues with the pattern matching proposal, where we need to start rescheduling meetings because a lot of folks haven’t been able to make it and have conflicts now, so we need to work that out. But just getting people together to try to figure out where things stand would be helpful. It’s getting the folks in the room. I still have some issues with the direction of the proposal; those are more, I guess, for discussion offline. So we will have to see where that can go.

SHS: Is there any interest in new people joining the discussion, either champions or just participants? We certainly have some interest in seeing this go forward as well.

RBN: I certainly think it would be useful to have more participants to help kick-start things. It’s also a matter of—you know, when this went to Stage 2, JSC was a member of TC39, and then he was no longer a member for quite a while. He recently became a member again, but during that period, there just wasn’t much motion or traction. So I think we are again trying to get people back in the room to have that discussion. And I know I haven’t been as focused on this because of changing roles and organizations within the last year. So a lot of this has been back and forth trying to spend more time on this.

CDA: All right. Then we have DLM on the queue.

DLM: I wanted to say that, based on past discussions, this is unlikely to be something we would support in the future. I mean, obviously it depends on what is presented to the committee when it’s coming back. I am cautious about spending time on this.
I am not confident this would advance.

RBN: Would you be open to an offline discussion about what those concerns are? We have talked about some of them in the past, but it’s been a few years. Maybe they align with some of my concerns and there might be a way forward.

DLM: I think Jonas was not part of TC39 and SpiderMonkey the last time it was discussed. We are skeptical about adding syntax, in terms of the cost relative to the capability it provides. So I think that is the main concern to be heard. Also, the fact that there’s a bit of a split in the champion group itself is an indication to us that no one is going to be completely happy with this, no matter which way it goes.

RBN: Yeah, open to it. But I would have to dig into the context a bit. I was not part of the discussions the previous time this came up.

SHS: Yeah. I think I can speak to some of that. So please loop me in.

RBN: If you haven’t joined it, there is a pipeline champions room on Matrix, in the TC39 group or whatever it’s called, that we can add you to as well.

CDA: Yeah. It is the space, the TC39 space.

RBN: Space. Yeah. I don’t use Matrix often enough to use the vernacular.

CDA: And that is time for this topic. Thank you. I think this was very productive; we got some good updates. Appreciate your time.
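To make the contrast in the discussion above concrete: the nesting problem the pipeline operator targets can be approximated today with a userland `pipe` helper. The `|>` forms in the comments are proposal sketches, not valid syntax, and the `%` spelling of the topic token is one of the unsettled bikesheds.

```javascript
const double = (x) => x * 2;
const increment = (x) => x + 1;
const toBase = (x, base) => x.toString(base);

// Status quo: nested calls read inside-out.
const a = toBase(increment(double(5)), 2); // "1011"

// Userland approximation of F#-style piping: each step is a unary function.
const pipe = (value, ...fns) => fns.reduce((acc, fn) => fn(acc), value);
const b = pipe(5, double, increment, (x) => toBase(x, 2)); // "1011"

// Proposal sketches (NOT valid JavaScript today):
//   Hack style: 5 |> double(%) |> increment(%) |> toBase(%, 2)
//     (% stands in for the topic/placeholder token, whose spelling is unsettled)
//   F# style:   5 |> double |> increment |> (x => toBase(x, 2))
console.log(a === b); // true
```

Hack style makes every step an expression with an explicit topic reference; F# style makes every step a unary function, which is why the last step above needs an arrow wrapper.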
### Speaker's Summary of Key Points

Stage 3/2.7:

Proposals The Committee Has Heard From Recently:

* **3** Temporal
* **3** Intl Era/Month Code
* **3** Decorators
* **3** Explicit Resource Management
* **3** Import defer
* **3** Non-extensible applies to private
* **3** Joint Iteration
* **3** Immutable `ArrayBuffers`
* **2.7** `ShadowRealm`
* **2.7** iterator chunking
* **2.7** import bytes
* **2.7** await dictionary
* **2.7** iterator join

Proposals that may be ready to go to Stage 4 soon:

* **3** Legacy RegExp Features in JavaScript (implemented in SM, possibly pending data from V8 counters)
* **3** Dynamic Code Brand Checks (widely implemented)
* **3** `Atomics.pause` (widely implemented)

Proposals whose champions were not present, but are presumptively very active:

* **3** Source Phase Imports (SM implementation has begun, SM very interested in shipping)
* **2.7** ESM Phase Imports (SM work to begin soon)

Stage 2:

Proposals The Committee Has Heard From Recently:

* Deferred re-exports
* Function.sent
* Async iterator helpers
* Error stack accessor
* Async context
* Seeded PRNG
* Math.clamp
* Native Promise predicate
* Error.captureStackTrace
* Import text
* Object.keysLength

Proposals we expect to hear from soon:

* Symbol Predicates (JHD will present new motivation or split the proposal soon)

Proposals whose champions confirmed they are backlogged but active:

* Discard bindings
* Extractors (RBN needs to confer with pattern matching champions)
* Throw expressions
* Module declarations
* Module expressions
* RegExp Buffer Boundaries

Proposals possibly in need of champions:

* Structs (RBN is a champion but needs an implementor co-champion)
* isTemplateObject (JHD would like a co-champion)
* `String.dedent` (possibly, Hemanth HM and JRL are listed)
* Destructure private fields (no current champion)
* Collection normalization (no current champion)
* Pipeline
Operator (proposal is somewhat stuck; additional champions may help move the proposal forward)

Proposals we were not able to hear an update on, but are presumptively active:

* Iterator.range
* Propagate active ScriptOrModule with JobCallback Record (CZW confirms asynchronously that this is related to AsyncContext and will likely advance alongside it)

Other proposals discussed:

* Dynamic import host adjustment has been superseded by dynamic code brand checks (stage 3), and should likely be withdrawn.
* `JSON.parseImmutable` may or may not be blocked on the progress of Composites (stage 1)
* Function implementation hiding may or may not be blocked on committee concerns and may be related to other ongoing proposals in the Error space

### Conclusion

Action items:

* PKA will check with ACE on the status of JSON.parseImmutable.
* RBN will take a new look at RegExp Buffer Boundaries in light of CDA’s belief that it may be a very important security feature.
* SHS will reach out to KOT about the withdrawability of dynamic import host adjustment.
* KM will check with SYG about `Atomics.pause`, and possibly bring it for Stage 4 next meeting.

## Composable value-backed accessors for Stage 1 (cont.)

Presenter: Lea Verou (LVU)

* [proposal](https://github.com/LeaVerou/proposal-composable-accessors)
* [slides](https://projects.verou.me/proposal-composable-value-accessors/slides/)

CDA: That brings us to our continuation of composable value-backed accessors for Stage 1. LVU, are you—

LVU: Yeah. I am here. Can you all hear me well?

CDA: Yes.

LVU: Okay. Let me share my screen. Okay. Can you see my slides?

CDA: Yes.

LVU: All right. So… I was thinking about the changes I could make to this proposal; these were the original problem statements. And to recap the previous discussion, correct me if I am capturing anything wrong here: it seems there is strong consensus against solving composable accessors via syntax.
And consensus that auto accessors are largely sufficient for public class data properties, which was the first part. There was consensus that solving composable accessors through built-in decorators is worth exploring. Tooling was brought up as an additional benefit of having standardized functionality for this. And there was some mild interest in exploring alias accessors for delegation or forwarding use cases. Let me know if any of this is incorrect.

LVU: Okay. So I thought about it some more. And even though originally, when I presented this, I was of the opinion that syntax would be a better solution, I actually now think that built-in decorators would be a better solution. So I revised this table of pros and cons that I presented earlier. Mainly: yes, the issue with reliability can be mitigated by not using super long functions in there; just use references. One of my main issues was that it’s lossy: it wraps the original setter and you lose it. That can be solved separately. Perhaps the original setter can be preserved somewhere; perhaps we can have some kind of method to preserve original references when wrapping functions. That would be independently useful. There are solutions, and I don’t think it’s a blocker. And even for the imperative API, it’s planned to have decorators in object literals, and that also mitigates it.

LVU: Additionally, there are advantages of using the built-in decorators in addition to the small amount of syntax: it is easier to implement, and it introduces a very low-fi way to test the waters. We can add more of them because it’s cheaper. And if we find out that it is actually used all over the place, nothing prevents us from exploring syntax later. It’s a much better first step to start with decorators than to ship syntax up front. And there are plans to extend decorators to other syntactic constructs. I think I saw something somewhere about having decorators on function arguments. Imagine that! There are so many possibilities.
And it adds more motivation for implementers to support decorators, which is a nice side effect. I like the idea of doing it with decorators.

LVU: One interesting comment that NRO posted in the TC39 delegates chat during the break was, he said: I am looking at my accessor usage analysis, and these are estimated numbers, not measured: 75% “property forwarding”, 15% lazy initial computation, 10% validation, 5% other. And scale that down, because it actually adds up to 105. Assuming this is representative (it’s one data point), it does validate that property forwarding is indeed a very, very common use case, more common than all the others, it seems. And lazy initial computation is a big one I missed earlier. So I think that was quite useful.

LVU: So I am thinking, assuming that we have consensus for at least a part of it to go to Stage 1: before moving it to the TC39 org on GitHub, remove value-backed accessors, since it seems we have consensus that auto accessors cover this. Perhaps RBN and I will work together on that, but that’s a separate thing. And then focus the proposal on the composable accessors and split it into two proposals. One: composable accessors via built-in decorators, where we have consensus, so we can just scope it down up front. And then: alias accessors, which is separate, and I will discuss why.

LVU: First, composable accessors through built-in decorators. The previous problem statement: there are large classes of accessor use cases with strong commonalities that deserve better DX and tooling support. And the proposal would explore which of them have a good enough impact/effort ratio to expose as built-in decorators. It’s not about specific decorators; not to validate the `lazy` decorator specifically. Part of the exploration is which ones we need to add. And also, what namespace they live in, or what the signatures would be; that would be TBD. I am not actually sure if the decorators proposal does allow multiple arguments. But anyway… that’s in the weeds. That is one component of it.
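As a purely illustrative sketch of this built-in-decorator direction: here is what a hypothetical `lazy` accessor decorator could do under the hood in today's JavaScript. The name `lazy`, its caching strategy, and the manual wiring are all assumptions, not a settled design.

```javascript
// Hypothetical: what a built-in `lazy` accessor decorator might do.
// The wrapped getter computes once, then caches by shadowing the
// prototype accessor with an own data property on the instance.
function lazy(name, getter) {
  return function () {
    const value = getter.call(this);
    Object.defineProperty(this, name, { value }); // cache on the instance
    return value;
  };
}

let computations = 0;
class Config {
  get parsed() {
    computations++;
    return { theme: "dark" };
  }
}

// With the decorators proposal, the wiring below might instead be spelled:
//   class Config { @lazy get parsed() { ... } }
const originalGet = Object.getOwnPropertyDescriptor(Config.prototype, "parsed").get;
Object.defineProperty(Config.prototype, "parsed", {
  get: lazy("parsed", originalGet),
  configurable: true,
});

const c = new Config();
c.parsed;
c.parsed;
console.log(computations); // 1: the getter body ran only once
```

Note how the original getter survives only because `originalGet` is saved explicitly, which is exactly the "lossy wrapping" concern raised about decorator-based composition.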
I guess I can ask: do we have consensus for Stage 1 for that part? That would be separate.
+
+CDA: Yeah. So do we have support for Stage 1 for composable accessors via built-in decorators? SHS, did you want to speak?
+
+SHS: Yeah. I support Stage 1 for this.
+
+CDA: RBN?
+
+RBN: I also support Stage 1 for this.
+
+CDA: And there is support from PFC as well. Also from MF. All right. Do we have any objections to Stage 1? Seeing nothing, hearing nothing. All right, congratulations, I believe you have Stage 1.
+
+LVU: Whoo. Well, thank you. All right.
+
+LVU: So now, for part 2, that would be about alias accessors. The name is also TBD; I don't like it tremendously. It does seem that even among those large classes of use cases, accessors that forward to other properties, often deeply nested within subobjects, are a particularly large class. I knew it was big, but I was still surprised by NRO's figure of 75%; that was even bigger than I expected. It makes sense in retrospect, but it was bigger than I expected. It does seem to be particularly prominent, and therefore it seems worth exploring separately, especially since it would be difficult to do this part with decorators with reasonable DX. First off, you need some way to specify a reference to a property, and it's very clear that it should be able to support private members. So it does seem like you might need some kind of syntactic-level thing. In theory you could do it with decorators, but it would be very awkward; actually, I am not sure you could support private members with decorators at the current stage. So this does seem likely to need syntax. It also seems like it could benefit from composing well with the grouped accessors proposal. For example, for many of these you want to expose a getter but keep the setter private, and do things like that. So who knows, maybe it will eventually be merged into there.
That is an open issue in the auto accessors and grouped accessors proposal, and that is essentially what this is. As for the concerns about referencing properties via tokens: they seemed not to be blocking for Stage 1. If we don't resolve them, they might cause issues down the line, but it seems premature to decide we can't move forward because of that at this stage. So this could be an exploration of different syntax, how it could integrate with the auto accessors and grouped accessors proposal, that sort of thing. And I guess I can ask: does this proposal have support for Stage 1? Again, Stage 1 is the exploration.
+
+CDA: Yes. We have—NRO is on the queue.
+
+NRO: Yeah. So I am fine with exploring these. However, I am not convinced yet that it's needed. I am saying this because most of the time when I have this kind of proxy property, I only have the getter, not the setter. We would save maybe ten characters in the getter, removing the `return` keyword, which seems small compared to the cost of the new syntax.
+
+LVU: If you only have the getter, there is less value. When you also have the setter, you have to repeat the property name twice, and avoiding repetition of the property name is part of the motivation of the grouped accessors proposal; people do seem to see value in avoiding the repetition. Even in cases where you only have the getter, making it explicit that this is an alias accessor is more declarative, and tooling could take advantage of it. There could even be certain optimizations over regular accessors. But there's definitely less benefit if you don't have a setter.
+
+LVU: Part of the exploration for syntax could also be: could we have some sort of aggregation to expose multiple of them? Or maybe, if you are exposing multiple properties from the same object with the same names, there could be a shorthand syntax around that, something similar to destructuring. There are a lot of these cases where you want to expose many of them.
From protocols, for example, you want to expose multiple at once. For things like ElementInternals, and that sort of thing, you want to expose a lot of them. The current code can get quite repetitive even if you only do read-only.
+
+CDA: KM?
+
+KM: Yeah. I am not going to block Stage 1 on this, but I do think I would need stronger motivation for anything beyond that. I mean, from the lines of code we are talking about in the first example: if that's a large codebase, that isn't that many relative to the total amount of code. When I have written benchmarks for JavaScript, I don't think I used accessors that often. As for having custom syntax for it: there are cases where you want accessors, but knowing they are there and having them be explicit is kind of my default assumption for this, personally. I could believe that maybe this is, you know, a taste thing. I don't know what data would convince me, but beyond exploration, I would have a lot of concerns about adding this kind of syntax for just aliasing.
+
+CDA: Nothing else on the queue.
+
+CDA: If there are no other comments, LVU, or if you didn't have anything more you wanted to say, do you want to ask for consensus now?
+
+LVU: Sure. I suppose.
+
+CDA: Okay. Do we have support for Stage 1 for alias accessors? Not seeing anything on the queue. We have a +1 from JHX for Stage 1. RBR, go ahead.
+
+RBR: So, pretty much as I said earlier in committee, I believe this is something where the benefit is so, so small from a realistic standpoint, compared to the overhead for developers of having to know and learn more syntax, and one of the great benefits of JavaScript was that the language is not super big in all its different possibilities.
So adding more and more aliased ways to do things burdens users more than it benefits them. And I personally don't think we should discuss this. So, for me, if someone wants to continue discussing it outside of the committee, of course; but I personally would rather not have that here.
+
+CDA: I am on the queue with a reply. You know, you say the benefit is small, and you talk about burdening developers, but I think maybe we are getting a little bit ahead of things. Right now we are seeing the potential shape of something, not what it is. Until we see a concrete solution being proposed, which is a Stage 2 concern, I kind of don't understand jumping the gun and blocking based on what is, perhaps, a misrepresentation of what the actual result would look like.
+
+RBR: So, one thing: I know the rules for Stage 1 are very much "well, we can discuss something". But I would definitely have very strong concerns about Stage 2 for anything that I can imagine coming out of this. Now, of course, maybe there are ways of dealing with it that are so intuitive that we need this; I don't believe that is the case. Getters and setters are super rare and I don't want to introduce more of them. And there are different codebases, that's obviously the case; some might use them more, and then they are also paying the overhead of that. And in this case, why explore it further if we don't believe it is even going to get to Stage 2? Do we have to discuss it in committee time, in this case? I might have a singular perspective on that. I don't know.
+
+LVU: RBR, would you object to an extension of the auto accessors proposal that allows customizing the backing field, or do you object to it being a separate proposal?
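For context, a minimal sketch of what "customizing the backing field" refers to here. This is only an assumption about the rough desugaring; the `accessor` keyword is from the auto-accessors proposal, and the classes below are hypothetical.

```javascript
// Roughly what `accessor x = 0;` gives you today: a getter/setter pair
// over an auto-created private backing field.
class A {
  #x = 0;
  get x() { return this.#x; }
  set x(v) { this.#x = v; }
}

// A "customized backing field": the same accessor shape, but backed by a
// property of a nested private object instead of the accessor's own field.
// This is the forwarding pattern under discussion.
class B {
  #state = { x: 0 };
  get x() { return this.#state.x; }
  set x(v) { this.#state.x = v; }
}

const a = new A();
a.x = 1;
console.log(a.x); // 1

const b = new B();
b.x = 5;
console.log(b.x); // 5
```

An alias-accessor feature would let class `B` declare where the accessor pair stores its value, rather than spelling out both functions.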
+RBR: Say that again, please?
+
+LVU: One of the open discussions in the auto accessors proposal, which I linked in the slides, is customizing the backing field, which is essentially what this does. Whether the syntax is based on the `accessor` keyword or has a different keyword, that is just syntax bikeshedding. One potential direction is to expand the auto accessors and grouped accessors proposal to include that kind of support. Would you oppose that? Is it specifically having a separate feature that you object to, or is it any declarative solution for this that you are opposing?
+
+RBR: I don't believe we need the functionality.
+
+LVU: Do you not think it's common enough, or do you think the boilerplate is too small to matter?
+
+RBR: Both.
+
+LVU: So do you think that NRO, for example, who said 75% of his accessors are alias accessors, is an outlier?
+
+RBR: I would say not all code looks like that. I am happy to look through the codebases that I am working in, and there, getters and setters are not as common. Ideally, they are not used that heavily in the first place; and as for using them as aliases, I know of no codebase where that is the case, and I believe it would be a mistake. That's my personal perspective. And yeah, there can be different views on how to write code; that is just my personal take on how a developer writes code. But I don't see that we should have this. I do like the explicit way a getter has to be defined currently.
+
+LVU: So, I mean, there are codebases that don't use accessors at all; we recently heard that many people don't like accessors at all. But if there were data to convince you that the pattern is common enough, would you still oppose a declarative solution to it?
+
+RBR: Even then, I don't believe the benefit is huge. Because what is the benefit? You can already do it at the moment. We don't really gain anything from this.
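For context, a rough sketch of how forwarding can already be done in today's JavaScript with a small userland helper. The helper name and shape are hypothetical, not a proposed API; note also that such a helper cannot reach private fields, one of the limitations LVU raised earlier.

```javascript
// Hypothetical userland helper: defines a forwarding getter/setter pair
// on a prototype for each listed property name.
function forwardAccessors(proto, getTarget, names) {
  for (const name of names) {
    Object.defineProperty(proto, name, {
      get() { return getTarget(this)[name]; },
      set(v) { getTarget(this)[name] = v; },
      enumerable: false,
      configurable: true,
    });
  }
}

class Wrapper {
  constructor(inner) { this.inner = inner; }
}
// Forward `width` and `height` to the wrapped object. Because `inner` is
// a public property here, this works; a private `#inner` would not be
// reachable from outside the class body.
forwardAccessors(Wrapper.prototype, (self) => self.inner, ["width", "height"]);

const w = new Wrapper({ width: 10, height: 20 });
w.width = 30;
console.log(w.width, w.inner.width); // 30 30
```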
+LVU: I mean, the benefit is basically a product of how often it happens and how big the benefit is per case. It's not just about the size of the benefit per case.
+
+RBR: So we save a few characters. Or what else is there?
+
+LVU: When you have 30 accessors in a class, it kind of adds up.
+
+RBR: I don't believe that justifies it. In this case, I would write a helper method.
+
+CDA: I would like to go to the queue. NRO notes, FYI: "my data is about the relative usage among my getters and setters, not how much I use getters and setters." Next on the queue is PFC.
+
+PFC: I don't think the view that the use of accessors in modern code is a mistake has any consensus within the committee. I certainly don't think it's a mistake to use accessors. I think it's debatable what the process is here, but I don't personally think it's appropriate to block something from Stage 1 because it uses a language feature that you prefer not to use in your code.
+
+RBR: Yeah. It's a fundamental mistake in all cases.
+
+CDA: Okay. I agree with PFC's comment. And as a reply to RBR: regarding it only saving this or that, there is plenty of precedent in this committee for DX improvements that look like this, or similar. So, you know, I don't think it is prudent to choose now as the moment when this is beyond the pale. There is a reply from CM.
+
+CM: Yeah. As the person who spoke up as not liking accessors: this should not be part of the discussion. The people who like them, like them, and we could have that argument in some other context, but I don't think this is the context to have it. To the extent that people really want to use accessors, the question is what form they should take, and that's what this proposal is about. In general, the narrowing of the scope here satisfies most of my concerns about the problem statement.
You know, to the point where I am no longer inclined to stand up as the angry guy blocking things. I am not wild about it, but it seems like at this stage it's fine, and it's pretty clear that people are interested in it.
+
+RBR: All right. I will pull back my objection as such. I am definitely not fond of it; in my perspective, it's not a good idea to add new syntax just for something like that. But that's my perspective.
+
+CDA: Yeah. Your comments are fair, don't get me wrong. I was typing something in the queue, but I wanted to reiterate, we all know this, that if we don't like the proposed solution at Stage 2, then it doesn't get Stage 2. Simple as that. So there's nothing else on the queue at this point. Before that additional discussion, we did call for consensus; we did have support from JHX, I believe. Were there any other voices of explicit support for this? I know some folks are skeptical.
+
+CDA: I support this for Stage 1. PFC also supports this for Stage 1. Do we have any objections? Hearing nothing, seeing nothing. Looks like you have Stage 1.
+
+LVU: Thank you.
+
+CDA: Congratulations.
+
+### Speaker's Summary of Key Points
+
+* There was clear consensus against solving composable accessors via new syntax, and agreement that auto accessors sufficiently address value-backed public data properties.
+* There is strong support for exploring composable accessors through built-in decorators, including potential tooling and DX benefits.
+* I also revised my position and now believe decorators are the better first step: they are cheaper to implement, lower risk, allow experimentation, and preserve the option to introduce syntax later if warranted.
+* Usage data shared during discussion suggests property forwarding is a dominant accessor pattern, reinforcing the value of addressing composability and aliasing use cases.
+* To reflect consensus and reduce scope, I proposed splitting the work:
+  1.
Composable accessors via built-in decorators (now Stage 1). + 2. Alias accessors as a separate exploratory proposal, especially for forwarding use cases that may require syntactic support and interaction with auto/grouped accessors. + +### Conclusion + +* Stage 1 for composable accessors via built-in decorators +* Stage 1 for alias accessors +* The alias-accessor track will proceed as exploration, allowing us to validate motivation, evaluate design space, and determine whether the ergonomics gains justify further advancement.