Brian and Eric talk with Dietrich Ayala about the ever‑expanding Pokédex we call the Web Platform — 15,000 BCD keys, 1,100 Web Features, and a growth rate that makes “catching up” feel like chasing a legendary.
Brian Kardell: All right. Hi, I'm Brian Kardell. I'm a developer advocate at Igalia.
Eric Meyer: And I'm Eric Meyer, also developer advocate at Igalia. And we're going to talk today with a special guest, Dietrich Ayala. Please introduce yourself and let everyone know what it is you're up to these days.
Dietrich Ayala: Thanks for having me. Yes, my name is Dietrich Ayala, and I've been working on, I don't know, open source since the late '90s and browsers since probably the mid 2000s. I'm currently running a small consultancy called webtransitions.org. It's looking at the challenge of how the web platform itself changes and reacts to user needs. Everything from adding new curves to the Web Cryptography API, to adding peer-to-peer protocols. The set of humans that use the web is vast and changing. The set of people that can change the web platform itself is actually quite small. So understanding what the pressures are that keep the web staying the same, or pressures to make a change, is a really interesting question, a really challenging thing to do. So I've been working on that through my jobs at Mozilla for over a decade, working on Firefox and Firefox OS and DevRel and web platform strategy and all kinds of things. And then Protocol Labs for five years, working a lot on bringing content addressing and peer-to-peer and transport-agnostic thinking into the web platform. And then for the last couple of years, working on things like the web features project, the WebDX working group, or community group, at W3C, and some of this web platform transition work. One of the recent projects that I just shared this week or end of last week ... What week is it? Is the Servo Readiness Report, a roll-up of data that puts together Servo's web platform test pass rate and a couple of other data sets around what Servo actually does implement, or what we can estimate that it implements, in the web platform. And then cross-references that with the web features dataset to have an understanding of just how close or far Servo itself is from being a usable daily workhorse web engine. Full disclosure: I've been working with people at Igalia for probably the last almost five years now on various projects of all different stripes.
Eric Meyer: Yeah. And we'll get to the writing of this report in a minute, but I'm curious, you said you have a web consultancy that works on web platform change. So what kinds of jobs or clients do you do for web platform change consulting?
Dietrich Ayala: The web features project was one. It's hard to talk about how to change the web if we don't know what it's made of. And one of the fascinating aspects of BCD, browser compat data, and the web features project is just how much we learned about how much we don't know of what's actually in the web. And so, I mean, we could probably spend, and you probably have spent, several episodes here talking about things like BCD and web features, but that's, I think for me, a very important and core piece of how we think about the web as a whole. So that was funding from Google for a group of people to participate in that working group for the specific purpose of backfilling web features: going back and looking at BCD properties from the early web and putting them into buckets, as it were, curated buckets that you call features. And so we took about 18,000 total BCD keys, about 3,000-plus were deprecated and we kind of just set those aside. And for around 15,000, we condensed, I guess is maybe the right word, those into around 1,100 web features. I've done some other grant work on things like bringing the Ed25519 curve to the Web Cryptography API, and then other things like the work we've done also with Igalia on alternate schemes in engines and browser extensions. And so the recent work: adding the scheme property to the web manifest for extensions and getting that hopefully someday in all three engines, but at the very least interoperable between Chromium and Firefox as a start. And so those are some of the, I guess when I say web platform change, those are some examples. Bringing things like content addressability and peer-to-peer protocols, or at least some type of peer-to-peer functionality, to engines is also important. And then much longer arcs of what it would look like, and what it would mean, and what the path would be to a fourth major engine, an engine that is multi-stakeholder governed as opposed to governed by one single company, be it a US company or not.
Brian Kardell: Yeah. I think that's a really interesting lens to talk through this, not because we have a vested interest in Servo, though we do. It's full of interesting things to think about: what does it mean to be the web, to be a web engine, and what are the use cases and how does that evolve over time? Because you'd like to think that there's like one really nice answer, but there's not. You can make a 'web engine' that does things that the web can't do, and you can make a web engine that can only do a fraction of the things that the web can do, and those can be really, really useful. So Electron, anything built with Electron is using web technology. Node is using web technology. Bun is using web technology. There are all-
Dietrich Ayala: [inaudible 00:06:29] web.
Brian Kardell: ... these mix and matches. And even in the web itself, we don't necessarily have total agreement because we have web views, which are really close, but not really browsers exactly. And then we also have experiments from, like, Google where you have isolated web apps, which currently only work on Chrome OS, but they're like a whole nother special kind of thing and these Fugu APIs of which there are many, like there are so many Fugu APIs. So like, yeah, it's really interesting because a lot of people have said like, 'Maybe we need a baseline. What do you need?' And my answer to that is like, 'Yes, but is it a what or is it like for what?' Is there a single answer for what or are there maybe like a couple of different flavors of that?
Dietrich Ayala: Yeah, I think different flavors is a great way to think about it. The TAG's Ethical Web Principles say there's only one web, but the reality is that that one web doesn't work for a large number of people. It works for the largest common denominator. And so you combine that with the fact that, as you've said, and we've already talked a little bit about, but probably not enough, there is not actually just one web, if you only think about the compat and interop problems that we have today already. Kadir Topal and I put together the first Mozilla Web DNA report, and this was the largest survey on MDN, probably around, I don't know, 2019; I think we launched it right before I left Mozilla. And it was surprising to us: we expected some feedback there around capability gaps, like, 'Oh, this API is only on Chrome and I want it on other browsers, or this API doesn't exist on any browser and I want it.' And instead, what we got back from an incredibly large number of respondents to that survey was compatibility. And so when we say there's only one web: the biggest problem that developers have, and anyone who's been working on the web and webcompat for a long time understands this, is an under-resourced and very challenging one. And things like Interop have gone a really long way toward mitigating some of the pain here that developers have, but there are still differences. And Fugu is a great example. Firefox OS itself was an experiment in adding every capability a smartphone needed as a web API. Now, whether that was folly or not is probably a whole separate episode, or maybe a series of episodes, but we had the FM radio API. That was not done for fun. That was an API that was added because whole markets of tens of millions of people considered it table stakes for them to be able to use a phone, for the phone itself to be usable.
And the pushback, even internally, on the team being asked to implement that API came through, I guess, the Silicon Valley goggles of people who are used to having a web that works for them every day because they have access to money and fast connections. And so thinking about it as different flavors, that's probably a really nice high-level way of saying that the extreme user needs, the user needs on the edges that are not being met by the web we have today, are a really hard problem. And when you have multiple engines, and in each one of those engines implementation decisions are made by a small set of people, it's hard to make that one-web argument. And so I guess a great starting point, I think, is putting aside anything like P2P or content addressing or specific cryptographic curves, or even putting aside anything around whether we should have another engine, and starting from an acknowledgement that the web we have today is actually varied and it doesn't necessarily work for everyone. And those two things put together are, for me, a better place to start conversations about how to change the web and what should change. The interesting aspect of the web features project and BCD is that it actually gave us some tools for talking about this. So when you say like, 'Oh, well, a web without X, Y, or Z, or ...' One of the interesting experiments I did recently was asking: what would the web be if we took HTTP out of it, from the perspective of a BCD dependency? So I took all of the BCD keys and analyzed which ones were HTTP keys, because BCD has this concept of roots, if you're not familiar with it. And so all of the keys that are in the HTTP root of the BCD dataset, I then mapped to web features. So there are those 1,100 features, out of that 15,000 keys, that are the aggregate of all implementations in the core browser set, what the web is. And then I removed all of the features that have a dependency on an HTTP BCD key.
And what's left is, well, obviously a lot of CSS, but a bunch of HTML tags, a smaller overall percentage of the web than you would think, given how complex things like JavaScript and CSS are, and then a bunch of JS keys for things that are not DOM APIs. And so you have, all right, we can now reason about what a transport-agnostic, and I'm air quoting here because of what you said earlier, Brian, web is. Is that the web? If you take HTTP out, is it still the web? It's [inaudible 00:12:18] JavaScript and CSS. What if it's a different ... Is Bluetooth the transport, or BitTorrent? And so now that we have these tools and web features, and really, I mean, big props to Google for putting money behind this investment in backfilling the web feature set, we have some tools that allow us to, from a data-driven perspective anyway, look at what the web might be and ask these questions in a way that lets us change the shape and ideate on and imagine what different shapes of the web might be. How much of the web is a service worker, for example, or how much of the web is CSS, when thinking about it in terms of a specific set of keys or features? What are the dependencies of web features on their transport or on each other? And so that type of dependency can sometimes make a web that is simpler in some ways, but also more complex in other ways, especially when we're looking at making paradigmatic changes in order to meet use cases that maybe aren't being met. I don't think there's a right answer, Brian. Is there one web? But I like how you ended on different flavors 'cause I think that illustrates the nature of the question really well.
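The "web minus HTTP" experiment Dietrich describes can be sketched roughly in a few lines of Python. Note that the feature and key data below is invented stand-in data, not the real @mdn/browser-compat-data or web-features datasets, which are far larger; only the shape of the filtering step is illustrated here.

```python
# Toy stand-ins: each "feature" lists the BCD keys it depends on.
# In BCD, the first dotted segment of a key is its root (css, api, http, ...).
FEATURES = {
    "grid":        {"bcd_keys": ["css.properties.grid"]},
    "fetch":       {"bcd_keys": ["api.fetch", "http.headers.Content-Type"]},
    "array-at":    {"bcd_keys": ["javascript.builtins.Array.at"]},
    "early-hints": {"bcd_keys": ["http.status.103"]},
}

def http_free_features(features):
    """Return the feature ids with no dependency on the 'http' BCD root."""
    return {
        fid
        for fid, data in features.items()
        if not any(key.split(".")[0] == "http" for key in data["bcd_keys"])
    }

remaining = http_free_features(FEATURES)
print(sorted(remaining))  # the CSS and JS features survive; HTTP-dependent ones drop out
```

Running the same filter over the real datasets is what leaves the CSS, HTML, and non-DOM JS residue he describes.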
Brian Kardell: Yeah. So even if you have an embedded device or a thing that was built with Electron or whatever, maybe it is ... You always have this question of, like, how do you get the content in the first place? And do you have to run an HTTP server too? One of the things that you can do frequently is just use a file protocol and pull the thing in locally because it's your device. But everything in the web is really geared around domains and what is considered safe. And lots and lots of stuff either doesn't work with an approach like that, or doesn't yet. So would you want to be able to install a service worker that way, for example? Because a service worker is kind of like an HTTP server, that would be kind of great actually. I remember a bunch of years ago, Jeffrey Yasskin was trying to do some packaging-on-the-web idea, which is like maybe the fourth or fifth or sixth take on this, but it is sort of trying to answer this question too: like, what if I give you a package of stuff and I say, 'This is the bundle, this is the bundle of stuff.' And it's not literally necessarily coming from a URL. What if I give it to you on disk? And I participated in some of those things because I said, like, 'Hey, I would like to do that and install a service worker and be able to pre-populate it with data so that I can do exactly this kind of thing.' So yeah, I mean, I think we have a lot of things that we need to figure out, and it evolves over time as we sort of expand the use cases and the needs and the experiments that people are doing. You can open a TAG issue on this, and I will say that the TAG is already discussing some of these sorts of things. So yeah, I mean, I think it's a great topic and it would be interesting to talk about how your recent Servo work ties into this, because what you're doing there is, I think, really interesting, because it's a question that comes up a lot: what would it take to do this?
And you posed this question, I believe, in a way of making it competitive as a daily driver. What would it take to get it to where you could literally compete with Chrome and ... Right.
Dietrich Ayala: Yeah. So the driving question there, I mean, I had a couple of ulterior motives. It's a multi-layered message in this report. One is actually answering the question concretely: how usable is Servo? And I used, and we haven't really touched on Baseline yet, you mentioned it. We've talked about web features, we've talked about browser compat data. Baseline is basically a way of measuring and identifying a set of web features and browser compat data, with some agreement about what browsers have actually implemented, basically aiming at a stable web, saying this set of BCD keys and web features is, after two years of implementation, considered a stable web. And the label that's used is 'widely available.' So it's a set of features on the web that are considered widely available because they have been implemented in a set of browsers and browser engines agreed upon in that group. ... And in the Servo report, I used that as a proxy for a daily-driver usable web engine. So I said, 'Okay, let's take that set of Baseline widely available web features and see how much of that Servo has implemented, as a proxy for: okay, if Servo's implemented all or most of those, then it would be a generally usable web engine.' There are all kinds of other things that go into making a web engine; like, you could implement 80% of those features, but none of those could be reasonably performant to run on a mobile device, for example. So it's not the full picture, but just using this as a way to at least take a first step at answering that question, that's what the Servo readiness report does at a data level. The second aspect of the report is that it asks that question in the context of: all right, if we answer that question and we have a sense of what the delta is between what Servo has implemented so far and what that Baseline widely available set of features is, how long would it take, given some number of resources, for Servo to, again, 'catch up' with the web?
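The Baseline-as-proxy measurement boils down to a simple ratio. Here's a minimal sketch; the feature statuses and the "implemented" set are invented for illustration, whereas the actual report derives them from the web-features dataset and test results.

```python
# Hypothetical feature list with Baseline status labels.
FEATURES = {
    "grid":       {"baseline": "widely"},
    "flexbox":    {"baseline": "widely"},
    "has":        {"baseline": "newly"},
    "anchor-pos": {"baseline": False},
}

# Hypothetical set of features the engine is judged to implement.
IMPLEMENTED = {"grid", "has"}

def readiness(features, implemented):
    """Share of Baseline widely-available features the engine implements."""
    widely = {f for f, d in features.items() if d["baseline"] == "widely"}
    return len(widely & implemented) / len(widely)

print(f"{readiness(FEATURES, IMPLEMENTED):.0%}")  # implements 1 of 2 widely-available features
```

Note that "newly available" and non-Baseline features deliberately don't count toward the score, which is exactly why the metric tracks the stable web rather than the bleeding edge.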
One of the interesting aspects of this is that it's not possible. There is no catch-up moment. The web is actually changing rapidly all the time. And I posted some research that I did to Bluesky a few months ago showing that the rate at which the web is growing is actually increasing year over year. So every year the web increases by more BCD keys than the year before. And so not only is there no catch-up moment, even if you were implementing features at a constant rate, you would still be falling behind. And so for something like Servo to be 'usable,' using Baseline Widely Available as a proxy, you would have to basically just catch up to the point at which you're hitting the same velocity as the three engines that we have today in implementing new web features. And in the three engines today, we already talked about, there are differences. They're not all the same. Some implement some APIs, some of them want other APIs. So the set of new Baseline widely available features is probably the best data point we have today for understanding what a generally usable web browser engine would be. And so that's where I got to with this report. We spent a lot of time talking about the actual details of understanding how we measure Servo's readiness, as it were. It's not easy. It's very fuzzy. Not a lot of it is ... I did make some leaps here and there. Well-educated, hopefully, and well-researched leaps. So far, the Servo folks haven't cried foul. I had a few other folks review it. Things like, okay, SpiderMonkey is already tested in Mozilla's CI and testing. So we can basically say, 'Okay, Servo already gets all of those BCD keys and web features.' So things like that, for example, a few other areas, but generally got a decent picture that Servo would be generally usable by 2036 or 2038, but then already behind, given this moving target of velocity and the growth rate of the web itself from a feature standpoint. So that's the Servo baseline readiness report. Is it bad news?
Good news? I don't think you can really put a value judgment on it. One of the ulterior motives I had was to ... I think a lot of people underestimate what it takes to make a web engine. You could say, 'Oh yeah, it's probably a pretty big job.' But very few people understand that there are at least 500, and in a couple of cases over a thousand, people just working on the engineering aspects of each of these three engines. And even basic conservative estimates on cost from a salary perspective, not counting other OpEx like taxes and legal and HR and all the things that it takes to run the company that are not just an engineer sitting there writing feature code, it would be incredibly expensive: beyond most governments' budgets, though not necessarily beyond US Fortune 500 budgets. It would be a surprisingly small amount for most of them. So I think it's fun to do a project like this where you can ... They're not perfect numbers and they're not precise, but the exercise of ballparking them is already useful because it gives you some perspective. And that perspective helps when communicating to governments, to NGOs, nonprofits, to web freedom fighters, and to Hacker News commenters, the nature and scale of a project like this.
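The "no catch-up moment" argument is really just back-of-the-envelope arithmetic: if the platform grows as fast as or faster than you implement, the gap never closes. A tiny sketch, with all numbers invented for illustration rather than taken from the report:

```python
def years_to_parity(gap, impl_per_year, growth_per_year):
    """Years until an engine closes `gap` features, or None if it never does.

    Assumes constant rates; the report's point is that growth_per_year is
    itself increasing, which makes catching up even harder than this model.
    """
    net = impl_per_year - growth_per_year
    if net <= 0:
        return None  # treading water or falling behind forever
    return gap / net

print(years_to_parity(gap=600, impl_per_year=120, growth_per_year=60))  # 10.0 years
print(years_to_parity(gap=600, impl_per_year=50, growth_per_year=60))   # None: never
```

Under accelerating growth, the second case is the default outcome unless implementation velocity also rises, which is why the report frames readiness in terms of matching the incumbent engines' velocity rather than hitting a fixed feature count.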
Eric Meyer: Yeah. It's a little bit concerning to me when you say that this might be beyond the reach of government budgets, but the Fortune 500 companies can manage it. That's a little bit yikes, not going to lie.
Dietrich Ayala: There's governments of all different sizes.
Eric Meyer: Well, that's true. Yeah. So I mean, you got down to some real nitty-gritty in this report, the Servo readiness report specifically.
Dietrich Ayala: Yeah. Yeah. The WPT alone is a gargantuan data set. Understanding how some of the metrics are used is also pretty interesting. So I got usage data ... Mozilla actually has public usage data. One of the things I wanted to look at too was, like, all right, could you change the order? So sure, there's the base level of data. We have BCD keys, web features; let's call those the, I guess I have to overload the term, baseline for what we know about the web, and use that as a measuring stick for what an engine could implement. The question of what it should implement, to be usable sooner or to get more bang for your buck if you are going to throw five million, 10 million, a hundred million towards making something like Servo usable sooner, that was another whole exercise, where I looked into the Chrome usage data, the HTTP Archive usage data, and Mozilla's usage data, which is public but not published, if that makes sense. I asked and they said, 'Yep, it's all public. It's all right here, but we just don't have a nice website for it.' And they were happy to share it. I asked and they were like, 'Yeah, no, it's not private at all.' And so that's great, more signals. Each one of those actually covers different parts of the web, and in different ways. Does mobile or desktop matter, really, in ordering from a priority standpoint? Again, a very difficult, if not impossible, question to answer. But say I'm a product manager for a new web engine that's deciding to try to reach parity with Gecko and Chromium and WebKit, right. How would you make this decision about where to put that funding? And so that was a whole other exercise. I think I'm probably going to end up having to publish a separate report about how you would pick what the most-used web APIs are, or, a different question, a different-flavor question on the same data: how would you understand how much it would hurt to not implement something?
What can you leave out? And a lot of the data basically says the long tail of the web is just so long that leaving out any small bit will hurt too. And that's a lesson that anybody who's worked on web engines understands from user feedback. Every web engine knows that even leaving out that one little teeny-weeny CSS property that you're like, nobody's used that since 1999, you're going to hear a complaint about it sooner rather than later. So the short answer is you can't leave anything out. The web's constantly growing. But the longer answer is you could probably get away with leaving 5% or so out, is kind of where I netted out in this other separate research project around that particular data set.
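One way to frame that "what can you leave out?" exercise is a greedy pass over features ranked by a usage signal, dropping the least-used ones until a pain budget is exhausted. The usage shares below are invented, not real Chrome, HTTP Archive, or Mozilla numbers; the sketch only shows the shape of the analysis.

```python
# feature -> hypothetical share of page loads that touch it
USAGE = {
    "flexbox": 0.60, "fetch": 0.55, "grid": 0.30,
    "xslt": 0.002, "mathml": 0.004, "css-zoom": 0.01,
}

def droppable(usage, pain_budget):
    """Drop least-used features while total dropped usage stays within budget."""
    dropped, spent = [], 0.0
    for feat, share in sorted(usage.items(), key=lambda kv: kv[1]):
        if spent + share > pain_budget:
            break  # dropping this one would exceed the pain we'll tolerate
        dropped.append(feat)
        spent += share
    return dropped

print(droppable(USAGE, pain_budget=0.02))  # only the deep long tail fits the budget
```

The long-tail lesson falls out immediately: even generous budgets only ever let you drop the bottom few percent, which is roughly the "you could maybe leave 5% out" conclusion.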
Brian Kardell: Yeah. Well, just look at what happened when somebody suggested we remove XSLT. Everybody was like, 'Well, nobody uses XSLT.' And it's like, 'Well, not nobody, actually. There are people, and it would hurt more than you initially thought.' Maybe it's still okay. I think probably in most of the use cases people can make a choice to use a browser that doesn't support that, and maybe that's not the choice people would make, but the only way we find out is by offering those choices. But I've written about this before, because we don't have a lot of insight. The one thing that I always say is that if you ask, how much should we spend? Everybody will intuit that it should be as much as the one who's currently spending the most, right. So everybody should spend as much as Chrome, because as developers, we like the investment into the web. We want everybody to spend that much money, but the economics of it mean that there's always one that is investing the most and there's always one that's investing the least. And I think currently, these days, after a long, long time, it's like Mozilla is investing the least, and we know what their public numbers are and we look at the size of their team. And so we've floated some ideas before about, like, well, what would it cost to publicly operate Chromium? If it was divested, what would it cost to really run Mozilla? Maybe it could be a little bit more efficient. Maybe you could gear something toward very specifically the web engine, but I think 400 million is a good number, probably, something like that. But here we're talking about, like, starting fresh; we don't have a lot of legacy. I think it is really interesting how much you can do with a small team. And I wonder where the sweet spot lands, because, like, you even mentioned in your thing Brooks' Law, right.
So there's like all of this interesting stuff about what does it mean to compete and what do you need, what don't you need, what do you sort of not think that you need initially? And then you learn pretty quickly that you do need. If you're the last ones implementing something, you have a bit of an advantage because you don't have to do as much questioning and thought, like you just have to follow. So it's a little bit faster to catch up.
Dietrich Ayala: Yeah.
Brian Kardell: And like what is the sweet spot? So this number that you came up with, I think is like 50 million, I think. Is that right?
Dietrich Ayala: So I haven't come up with really a final number because this was really an exercise in saying, 'What would it cost to get to BWA velocity in three years?' And so that ended up being, I don't think it was 50, I think I was hand waving 50 million like, okay, you'd have to add some OpEx there that is not accounted for in that final number. And even that would probably be much higher.
Brian Kardell: But I think you could make a lot of progress with $50 million a year.
Dietrich Ayala: Yeah, yeah. Absolutely.
Brian Kardell: Servo isn't getting $50 million a year in investment currently. And I think part of your point, if I understand it correctly, is if you threw $400 million at Servo, you couldn't efficiently spend $400 million on Servo.
Dietrich Ayala: Right. Yeah.
Brian Kardell: So how do you scale it up? The plan is almost as important as the dollar figure, right?
Dietrich Ayala: Yeah. Absolutely. And that's why I ended on that note earlier, of a separate research project that prioritizes, at least ... Prioritizing is probably a strong word, but gives you some frameworks for thinking about which APIs to implement in the web platform or not. But I think this question that you're asking is also, the way you asked it is through the lens of the engines that we have today. And I think there are a couple of interesting aspects to the engines that we have today. I mean, one, all of them are decades old, right. And two, they all have product vehicles. Servo does not have a product vehicle. And so when you're looking at a company like Mozilla, and I say company because the Mozilla Corporation is a company, it is wholly owned by the foundation, but the majority of the employees work at the company itself. And it is a for-profit company that makes nearly half a billion dollars a year. That's what losing the browser wars looks like: half a billion in revenue per year. And so you have to look at something like an initiative for an independent web engine very differently. And so there are a couple of different things. You mentioned web views earlier. Web views are a really good candidate for the product vehicle and the industry stakeholders in something like an independent web engine. That means that companies and products that are building on web views, if they have an independent, multi-stakeholder-governed web engine that they can use for their applications, they are not held hostage to the choices that the host operating system makes with regard to its web engine, to some extent. There are other market dynamics at play there. Like, Apple won't let you use a third-party web engine in most places in the world at this point. There are a few rare exceptions, and even there the constraints are, as you've probably talked about on this show before, very high and unique. So I think that's a way to think ...
I would encourage you to think about it in a non-product way, or I guess in a diversified-product way. And so, one of the reasons that I mentioned multi-stakeholder initiatives for a fourth engine is 'cause I think that's a very important aspect of moving the web forward in this way. We are in this intractable place where there are three companies, three engines, they're all US-based, they're all for-profit companies, and the decision making around those engines happens inside those companies. Now, there are standardization pressures, but as we've found, those are not required. You can implement or not implement some set of APIs. You can intentionally not implement some APIs and features on the web for strategic purposes, if you're one of these couple of companies with fingers in 10 different industries. So the way I think about the Servo question, and, again, like you said, Ladybird, let's just say the independent web engine question, is that it would be different for a few reasons. The first one is what you mentioned before: being a newcomer gives you some advantage, and you have a different perspective and a different decision-making matrix that you can use to decide what to implement, when, and why. The second is a diversified set of stakeholders. So if you do take the multi-stakeholder approach towards governance of an independent engine, or I guess a fourth engine we can call it, you would have product input and requirements coming from a broader set of companies, as opposed to one company and its one or multiple internal business models. I think there's also another aspect of this, which is the fact that technology has changed. Even Ladybird ... Like, Servo is, what, almost a decade old at this point, and Ladybird is over half a decade old at this point. If we're going to start thinking about how to close the gap, we would have a different tool set.
We would be able to think about how to build the rest of the web platform in a very different way today than we would even two years ago. David Thompson did a fantastic talk at FOSDEM where he imagined basically a modularization of how we develop Servo components using Wasm. And so I guess a more modular approach towards building the engine itself and building web platform features. For me, I don't know how he was thinking about this necessarily, I didn't talk to him enough about it after the talk, but it sounded to me more like the radical-subsets and radical-supersets approach to web engine development, which is compositional web engines: this idea that we have a web engine, it does implement maybe all of the features, but you, for your product, don't necessarily need to ship all of those features. And so things like Nova, things like web views, different projects that don't require the entirety of the web platform and the whole kitchen sink. Triangulating all of these factors into a view of how you would move a fourth engine to generally usable is really important, I think, and not viewing it through the product-centric, single-company-centric view of the browser market that we have today.
Eric Meyer: Yeah. The modular development approach seems like the sort of thing that would lend itself to multi-stakeholder, which I'm sure occurred to you as well. And it also harks back to that small-pieces-loosely-joined approach to developing things, where it's like, this is the Unicode module and it can talk to all the other things, and then maybe at some point a better Unicode module replaces it, but all the other things ... just as long as they can talk to each other. I mean, maybe that's how it would have to be in order to match the velocity and resources of a Fortune 500 company or a Fortune 10 company or whatever, because it would sort of take that open source approach of this person, I don't want to reference the XKCD, but everyone knows which XKCD I'm talking about, right. But one person is doing this thing. And we just had that this week, where a thing that a bunch of people use, that one person obsessively maintained for a long time, is now maybe going in a different direction, which caused a whole big hullabaloo, which I don't want to derail this conversation into either, but, like left-pad or whatever: this one thing is really good, and the ideal would be that this one thing is really good, but we can replace it relatively easily and painlessly. I don't know how realistic that is, but it does feel like the sort of thing that would make it easier to sort of have multi-stakeholder involvement, right. Like, instead of trying to find a government or a set of governments that are willing to fund a humongous project, you maybe have one outfit that's funding this bit, and another outfit is funding this bit, and maybe this government is funding this bit, and that consortium of nation states is funding this other bit, and they all talk together. Sounds very-
Dietrich Ayala: Consortium of mathematicians.
Eric Meyer: There you go. It sounds very pie in the sky, I guess, but ...
Dietrich Ayala: I don't know. It kind of just sounds like open source.
Brian Kardell: Today, we do use open source in the open source engines, and you do wind up with what must be, at this point, the most widely cited XKCD comic situation where, again, to go back to XSLT, that was one open source project that just, like, wasn't being actively funded despite being used in multiple important projects. And so rather than fund it, the idea is like, well, maybe we just have to remove it. I don't know, that's tough, because if you break it down into smaller and smaller problems, thinking the typical open source way, you do wind up with lots and lots and lots of things sitting on top of something that is itself so hidden and not seeming important enough to get the funding, but it is really important. And I think that one of the things that makes that really complicated is that frequently the kinds of things that you find are problems there are because of the sorts of things that open source lets you do. So you might end up using this in a browser, and the people who wrote it didn't even ... They're not browser engineers. They're not thinking about how you integrate this in a way that's, like, memory safe and all that kind of stuff, because it's not why they even wrote it. And then you wind up getting lots of asks of your open source project. So yeah, I don't know. There are definitely things like this. Skia is another example where everybody works together to do this really complex thing. And to an extent, I just want to say that even though they are governed differently, the three engines all do have many stakeholders, but their decision making and governance and everything is different. Maybe also you could speak to the Chromium fund, which now exists, which is like a common pool of money that Microsoft and I think Opera and ...
I don't know, I shouldn't name them, but you can go look it up, 'cause I'll miss one and fail to credit somebody who's doing really good work. But they're putting money into a common pot, and then they have a governance to spend it. Anyway, yeah, I think it would be interesting if you could speak to some of these things, like how is it different and how is it the same as current models?
Dietrich Ayala: Yeah. I think before speaking to things like pooled funding and multi-stakeholder governance, one thing I did want to note is, like, I think all of us know the pain of open source, the double-edged sword of both the bus factor that we talked about, but also maintainer burden and burnout, folks who are underpaid or completely unpaid, which is very, very common. And so it's not some panacea, it's not some magic wand that we can wave and say, 'Yay, open source, we have a web engine.' Right? One of the other things I wanted to note, I brought up earlier this experiment in removing HTTP from the web platform, from the perspective of BCD keys and web features. One of the things that was really interesting that I learned in that exercise was the interdependency between features. And so when we're thinking about what a modular web looks like, you can't, as you said, just rip a feature out and have the rest keep working. I mean, even if you implemented it that way in engines today, you could not do that, because of the dependency that features have on each other. And so one of the ideas I had when I was doing that HTTP experiment was maybe taking this set of BCD keys and web features and actually measuring the amount of interdependency, and looking at whether that would be useful for evaluating, I guess, the complexity health of the web platform: how much of it is modularizable, how interdependent are different parts of the web stack, is that interdependency growing, and is it growing rapidly? I think we haven't really generally thought about the web in this way. And so one of the things that we did notice in that set of web features was a kind of slow accretion of interdependence, where earlier web features, five years ago, 10 years ago, were all within a given BCD root. And I'm kind of hand-waving a little bit here.
This is like, I'm reading these tea leaves, and we had all these conversations as a group as we were going through the web features development, and these are some observations that we have but never really formalized. The data's there, so it's definitely possible that we can do this. But what I wanted to do was measure that interdependence over time, to understand whether the web platform itself is actually concretizing on itself, where we're seeing lots of work for web platform features happen at the IETF and at the W3C and at the WHATWG, and they're pointing at each other. Is that good? Is that bad? Does it mean the web is more resilient, or does it mean that the barrier to entry in that market is so high that it is now insurmountable? And so I don't think there's a clear answer there, but we have data now to be able to look at that question and talk about it as a group, as the web, and to understand at least whether it's good or bad, at least be aware when we are increasing or decreasing the interdependence complexity of the web platform itself. And so that's one of the things that's on my mind when I talk about web modularization or the supersets or radical subsets of the web. The more you have that interdependency and interdependence complexity, the less you're able to make those types of changes to the web to respond to user needs.
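To make the measurement Dietrich is describing concrete, here is a rough sketch of one way you might quantify cross-stack interdependence: count how many top-level BCD roots (css, api, html, http, and so on) each web feature's compat keys span. The data shape is an assumption based on the public web-features package, where a feature can list its BCD keys under `compat_features`; this is an illustration of the idea, not the scripts he mentions.

```python
from collections import Counter

def roots_spanned(feature):
    """Return the set of top-level BCD roots a feature's keys touch,
    e.g. 'css.properties.gap' contributes the root 'css'."""
    keys = feature.get("compat_features") or []
    return {key.split(".")[0] for key in keys}

def interdependence_summary(features):
    """Histogram of how many roots each feature spans, plus the share
    of features spanning more than one root (a crude proxy for
    cross-stack interdependence)."""
    spread = Counter(len(roots_spanned(f)) for f in features.values())
    multi = sum(count for width, count in spread.items() if width > 1)
    total = sum(spread.values())
    return spread, (multi / total if total else 0.0)

# Usage with a local dump of the dataset, for example:
#   import json
#   features = json.load(open("web-features.json"))
#   histogram, share_multi_root = interdependence_summary(features)
```

Bucketing the same summary by each feature's Baseline date would give the over-time trend he wants to chart.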
Eric Meyer: So what did you find or what is your feeling as to whether or not the web is calcifying or ...
Dietrich Ayala: It is increasingly interdependent, I think is what I left with. And I have a couple of scripts written to be able to tell that story more clearly, but I have not done that work yet, so-
Brian Kardell: I look forward to-
Dietrich Ayala: ... not in time for this conversation.
Brian Kardell: Look forward to reading your thoughts on that.
Dietrich Ayala: For the Chrome Fund, though, I don't really know much about it other than what's been publicly announced. But my first thought was, it feels like we're missing one of those for not-Chrome, just for the web as a whole. And that's why I keep coming back to this banging of the drum for multi-stakeholder governance for an independent web engine. I feel like I will just keep saying that over and over: multi-stakeholder governance for an independent web engine.
Brian Kardell: I guess one of the things I was asking you to contrast, too, is, like, Mozilla itself. If you look at the sort of pie chart of who contributes to it, it's like a trillion people. It's the most sort of open sourcey of them. Google's model is they have API owners, and they have a process that requires you to get three positive reviews from API owners, but the API owners over time have been not just Google. Now they're Google, Samsung, Microsoft, I think Adobe even has one, Igalia has one. So there are opportunities for organizations to have some say in the governance, in a way. It's not the same as funding, although it is, because the way you get that is by participating in the thing, which requires funding. So I don't know. It's interesting to think about what this means. I don't know how we do this. I've also tried to pester the W3C to think about how we could do something like this because, not to this extent, but within, say, The Khronos Group or something, there is like, 'Hey, you can pool funds together and fund this thing.' It's really interesting to me to think about how this would work. I love reading other people's thoughts and ideas about how we could make this happen and what exactly this is.
Dietrich Ayala: One of the interesting aspects of this type of thinking and this idea of pooled funding is the S in multi-stakeholder, right? Who are the people that are benefiting? I think this is a broader conversation that's been happening in open source since forever. I think it was maybe 2001, I had my first open source project, and I realized that this one person who was peppering me with bugs all the time, and they were really good bugs, I was very appreciative, but also it was a lot more work, was from a massive German publishing house. I looked it up on the internet, in 2001, in my very slow browser, and I was like, 'Oh, this is a billion-dollar company. They're using this little software project of mine.' And so that has been kind of the context for me, as a new software developer, new to open source, for decades. And that's still an unsolved problem today, the free rider problem, as it were, for the value that billion-dollar companies are getting. So I did a little number crunching there as well a while back and posted it to Bluesky, going, 'Okay, for the Fortune 500, what is the median IT spend annually?' And then I said, 'Okay, what if I took 0.5% of that annual IT spend?' And it ended up being, like, each one of the Fortune 500 would put in 250 million per year US into a fund. Again, just like zooming out, getting some data, trying to tell a story that gives some perspective on how challenging these problems are. And for that particular one, we were talking earlier about, well, how much would it cost, or would you hit a ceiling on how much ROI you'd be getting for the money you threw in? We have very large organizations to point to today that are spending billions and billions of dollars on things. We can do things with funding. I think that's less the problem than the fact that all of the money is there and it is not being paid back in for the value that they're getting out of the software that we're giving away for free.
And so that, I think, is a harder problem to solve. And again, probably a topic for a different episode.
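The levy arithmetic Dietrich sketches on Bluesky is easy to reproduce as a back-of-envelope calculation: a flat percentage of each Fortune 500 company's annual IT budget, paid into a common fund. The spend figure below is purely illustrative and not the actual data behind his post.

```python
def pooled_fund(median_it_spend_usd, levy_rate, n_companies=500):
    """Per-company contribution and pooled annual total for a flat
    levy on the median annual IT spend."""
    per_company = median_it_spend_usd * levy_rate
    return per_company, per_company * n_companies

# e.g. a hypothetical $2B median IT spend at the 0.5% rate he mentions:
per_company, total = pooled_fund(2_000_000_000, 0.005)
# per_company is $10 million; total is $5 billion pooled per year
```

The point of the exercise is less the exact figures than the shape of the result: even a fraction of a percent of existing IT budgets pools into browser-engine-scale money.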
Brian Kardell: So the last thing that I'll say on this, 'cause we need to wrap it up, is I recently wrote a blog post talking about sort of the roads and bridges, because if you look at the history of roads and bridges in America, at least, lots of them were created by companies and they were for-profit, and eventually they became necessary for sustaining the cities around them. So you couldn't have the industry and the cities and everything without them. And so we slowly managed to, like, collectively pool them and things like that. And even in Pittsburgh, where I live, there are still some bridges and things that are privately owned and operated, and that's fine. They're used exclusively by companies moving heavy freight in and out of steel manufacturing. But for the most part, we have developed ways to say, 'Well, we need to collectively figure out how to pay for this, because it's fundamentally a part of life now that's just necessary.' And I feel like the web is that, and the web browser is that. And so I don't know how we figure it out, but I think we've got to figure it out. And I'm glad that there's more and more conversation around it. So yeah, thank you for your continued thoughts on this, and I look forward to reading more.