Brian Kardell: Okay, hi, I'm Brian Kardell. I'm a developer advocate at Igalia.
Eric Meyer: And I'm Eric Meyer, also a developer advocate at Igalia. Welcome to Igalia Chats where we have a guest today, Bart Veneman. Please introduce yourself and let our listeners know what it is you're up to.
Bart Veneman: Hi, I'm Bart. I'm a developer from the Netherlands and I focus mostly on analyzing CSS. I have a website called Project Wallace. That's where most people probably know me from. I do a lot of CSS analysis in various ways, mostly static analysis, looking at CSS from different angles. And that's been my game for at least the last 14, 15 years, as a side project, because I have a full-time job. But as a hobby, that's what I've been doing.
Eric Meyer: CSS as a hobby, can be a lot of fun, even CSS analysis. Before we get started, I just have a question. Why did you call it Project Wallace?
Bart Veneman: That's a tricky one. I felt I had to come up with a name in the very beginning, and people were calling their side projects Project Whatever. I think Project X was a thing at the time. And I really liked the Aardman thing with Wallace and Gromit and the clay animation stuff. And I was like, yeah, that's also the vibe I'm going for, with funky, funny feedback on your CSS. That really hasn't worked out, but the name stuck.
Eric Meyer: I wondered if it was a reference. I love it. And so when you say CSS analysis, what level of CSS analysis are we talking?
Bart Veneman: The thing I'm mostly looking for is the stuff that not a lot of folks are looking at. We tend to do analysis on a per-file basis using Stylelint or maybe some other tools. But what I want is a holistic view of my CSS. As we ship it to our users, what is the CSS that our users download, and what is in there? One file may contain seven unique colors, and it's the best CSS that exists for that specific component. And then everything gets bundled and shipped to our users and, oh no, third-party plugins are also shipped to our users, and they use slightly different colors because we didn't apply the rebranding there. Font sizes are off. This holistic view is what interests me most.
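The kind of holistic check Bart describes can be sketched in a few lines. This is a toy illustration only: the stylesheet content and file names are made up, and a regex over hex colors is nowhere near a real CSS parser (Project Wallace uses a proper spec-compliant parser), but it shows the idea of counting unique colors across everything a user actually downloads.

```python
import re
from collections import Counter

# Hypothetical bundled output: two component files plus a third-party
# plugin stylesheet that was never rebranded, concatenated as shipped.
bundled_css = """
.button { color: #ff6600; background: #FFFFFF; }
.card   { border-color: #ff6600; }
/* third-party plugin, never rebranded */
.widget { color: #ff6700; background: #ffffff; }
"""

# Very naive: only hex colors, no named colors, rgb(), or custom properties.
hex_colors = re.findall(r"#[0-9a-fA-F]{3,8}\b", bundled_css)
normalized = Counter(c.lower() for c in hex_colors)

print(sorted(normalized))  # the unique colors actually shipped
```

Per component, each file here looks fine; only the bundled view reveals the almost-duplicate brand color.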
Eric Meyer: So from your latest state of real world CSS usage article, you were looking at 100,000 websites. Is that generally the scale at which Project Wallace operates or does it also operate at much smaller scales?
Bart Veneman: Oh yeah, good question. No, it never operates at that scale. It always operates at a single website, usually even a single-page level. You go to projectwallace.com, you enter one URL, and that's what it tries to scrape and analyze. So the CSS selection was a different beast.
Brian Kardell: I'm also a big fan of looking at the data of the web and the health of the web and what authors are doing with the web. And I think it was maybe 2018 or 2019 was the first year that they did the Web Almanac, and that year I edited the markup chapter. And there hadn't been a markup analysis in, I can't remember, maybe 20 years at that point, something like that. And it's amazing, because we're here writing standards and we're talking about what people do and what people want, but we don't have anything looking back at that to say, how is it successful? How is it not successful? And so we were involved with that until it changed in, I think, 2023 or 2024. But Eric wrote the CSS chapter in 2021. We haven't had one in a while because the methodology and people involved and everything changed. But how you choose which 100,000 sites, or million sites, or 3.8 billion sites, or whatever it currently is in the archive (it's a really, really large number), and then what you do with that data, is full of interesting questions, I've learned anyway. So I'm curious, to start with: how did you choose the 100,000 URLs? Are they from the archive dataset? Are they the Alexa top URLs, or?
Bart Veneman: It is the first 100,000 websites from the Majestic Million list. And I came upon that list because everything from Project Wallace used to be based on the CSSTree library, which is an insanely good, standards-based, spec-compliant CSS parser. It's been maintained by one dude for a long time, and it's an amazing library. And he has a script somewhere where he uses this Majestic Million list to download a lot of websites and then test if his parser actually works. So I figured, well, that must be a pretty representative list, then, to use for whatever I'm going to do. Because when I started, I had sort of an idea of what I wanted to do, but yeah, first I needed a lot of data.
Eric Meyer: And what did you find? What was the thing that really stood out for you that you found?
Bart Veneman: One of the biggest things for me is that looking at the adoption of modern-ish CSS made me feel really old, because I also included information about Baseline status for several of the features listed in there, like how many years @layer has been Baseline available. This stuff all still feels new to me, and seeing container queries used in almost 10% of websites, wow, that was something. Adoption rates are insane. Most of them I did not expect.
Brian Kardell: Yeah, because of the dataset and everything, it has some, I think, some of the similar biases in the dataset that... You're talking specifically about public pages, right? So not something that you could log into, which is a lot of what we use is what you log into. And what you're analyzing, is it also like the homepage?
Bart Veneman: It's the homepage and the homepage only.
Brian Kardell: So I say this all the time. We have modified the HTTP Archive to do... It tries to look for, I believe the way it works is, the largest link in the page in terms of what it would render visually. And it clicks that and follows one link to find another page to measure, so it's not just the homepage. Because I say all the time that maybe Apple's homepage is not anything like the rest of Apple. Or google.com, is it really representative of Google? I don't know. Google.com is definitely not one input box. I think it's really, really useful, but you have to also take it with a particular grain of salt. One of the things that I also noticed is that, like you say, adoption rates are very high. What I've found is that when adoption rates go up really, really quickly, it's very frequently a common cause and not everybody rushing to adopt it individually, because of those millions of sites, most of them are probably just chugging along. They're not being constantly rewritten all the time. So they get an upgrade to a WordPress theme or something like that. And that WordPress theme is like, 'Yeah, container queries, that's going to make my life so much easier,' and they replace it. So all of a sudden you get 8% of people; it just spikes. It jumps up when you look at the data. And so that stuff is also really interesting to me. I'm wondering, do you have any plans, or have you thought about tracking it over time, somehow?
Bart Veneman: Years ago, projectwallace.com used to be a website where you could log in and then analyze one or more URLs over time. And that was really intensely useful because I had all sorts of graphs with how file size changed and how many colors were changing and that sort of stuff. And that is something I want to do for the CSS selection that I wrote. I want to be able to do comparisons year over year, because that's when it starts to become really useful, if the fundamental metrics are still the same, because otherwise it's just comparing apples to oranges. So that's not going to make sense. So my first goal was to get this one out the door and have something out there. The struggle for this year is going to be how will I compare newer data to the older data?
Brian Kardell: One of the things that jumps out, that everybody likes about these reports, whether you write them, I write them, or anybody writes them, is that the two headline numbers you usually find are, one, 90% of websites fall into this category. What's that number? That's a super useful number. And then there's also, what's the most extreme outlier? So you have a bunch of really extreme outliers in here, and I'm wondering, do you go look at them? Do you actually go, 'Boy, that can't be right. Let me go look at what they're doing?'
Bart Veneman: Oh yeah. So there was one example where I was going through, I think, the top 10 websites in terms of CSS shipped to the user, because in my initial analysis there was a small bug and I reported something incorrectly, and I placed a correction in the report afterwards. So I did my due diligence and did some probing of the top 10 to see what was happening. And there was this insane website for a bank somewhere in Africa that shipped 50-plus megabytes of CSS, and they were an eco bank. The 'eco' was in their brand name. I almost fell off my chair laughing, but also, 'Oh man, you can't make this up.'
Brian Kardell: I regularly look at the markup data. I have a thing that tracks the markup data monthly in collaboration with the HTTP Archive. And as I track it, even when we wrote the reports, I noticed a bunch of things that you could identify as outliers. Like, man, this is way too big. Or another thing that I look for, which I don't know if you look for or would find interesting, is things that are parsing errors that are probable typos. It's shocking in the markup landscape how many typos there are. It's just because if you mistype something, it winds up as a span, and chances are it's going to be styled just fine, whatever. So yeah, it's really interesting what the typos are, and encoding problems and things like that. Do you do any kind of analysis like that? And I've wondered, do you think that people might find it useful as a service, where you could sign up for this, and maybe once a month or on demand we go check your site? And then you sign up with a webmaster. Remember webmasters? You sign up with the admin of your website, your webmaster, and we send you an email and go, 'Hey, bud, you have done some really not great things. Maybe you want to go check that out.' I think that would be a really useful service. What do you think?
Bart Veneman: Yeah, I just mentioned that Project Wallace had this thing where you could log in and track your site over time. The goal back then was indeed to have thresholds for the number of !importants or whatever. But the thing you just said really triggered me, because we tend to look at our CSS on a component level. And when you look at it holistically and everything is out there... There is a special section in the CSS selection that I wrote specifically for all the typos that we have for the use of !important. And it was so funny, there are so many wrong ones in there, like !improtant and !importnt. And you don't see that unless you look at your CSS the way it comes into the browser of a user. And having some sort of service that looks at your CSS and is able to highlight these sorts of things, if I had all the time in the world, I would probably do that.
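A check like the one Bart describes for misspelled !important is simple to sketch. This is a hypothetical toy, not Project Wallace's actual analyzer: it just scans for any "!word" token and flags the ones that aren't spelled correctly.

```python
import re

# Made-up stylesheet containing two of the real-world misspellings
# Bart mentions in the report.
css = """
.a { color: red !important; }
.b { color: blue !improtant; }
.c { color: green !importnt; }
"""

# Find every "!word" token; anything that isn't "important" is a typo
# that the browser will silently ignore, declaration and all.
tokens = re.findall(r"!\s*([a-zA-Z]+)", css)
typos = [t for t in tokens if t.lower() != "important"]
print(typos)  # ['improtant', 'importnt']
```

Because CSS error recovery just drops the invalid declaration, nothing ever warns you about these; a scan like this is the only way they surface.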
Brian Kardell: Yeah, if I had all the time in the world and a nice startup budget, I think I would build a service that did this for your HTML, your JavaScript, and did checks on what's your loading performance, what's your... In fact, web hosting companies should just offer this. Web hosting companies, somebody sponsor Bart, give them a big endowment to make it happen.
Bart Veneman: Yeah, there are several companies out there, like all these HTTP tracking services, and there are companies like SonarQube and stuff who keep track of whatever you're doing, but most of them don't seem to really pay a lot of attention to the CSS, which I think is a missed opportunity. I think Project Wallace's packages could really, really help you out finding weird stuff.
Eric Meyer: Yeah, I'm going to co-sign on someone should give you basically whatever the web's version of a MacArthur Genius Grant is to pursue this for a year or two or three or whatever, making the thing.
Brian Kardell: So one of the things that I thought was interesting in here... One of the preferences such a service would let you set as a webmaster, if you care about this sort of thing, is: are you shipping comments in your CSS? And you might think, well, who cares? But am I reading your table right that at the 90th percentile, sites are shipping something like 11.4 kilobytes of comments?
Bart Veneman: Yeah, apparently that's what happens. Yeah.
Brian Kardell: That's kind of a lot of comments that you don't need to be sending.
Bart Veneman: Absolutely.
Brian Kardell: But I don't know, because there is also a part of me that thinks... Coming from Eric's and my era, at least, we learned the web from each other; View Source is a super powerful tool. So comments in that respect are actually nice, but they're not necessary. You have one example that, as you say in the thing, there's always an outlier: one website managed to send eight megabytes of comments to their users. Every webmaster should want to know that. If your CSS file is eight megabytes, shouldn't you get... I don't know. I think every cloud hosting company should say, 'You have made a probably egregious error. You probably didn't want to do this. You can click here for us to not tell you again, but otherwise we're going to pester you and say, this is really probably not what you wanted to do.'
Bart Veneman: Yeah, please show your webmaster license and we're going to revoke it for at least three months until you've learned.
Eric Meyer: CSS police on the way.
Bart Veneman: Yeah, exactly. Well, I've been struggling with, should I include all these outliers in the CSS selection? Because I don't want to be judgmental, and I've had several people ask me, 'Who has this very big website, and who ships whatever, comments, whatever?' I don't really care. That's not what this is about. The thing was also meant to be a bit of a lighthearted way to look at the state of things, things I could probably not do in the Web Almanac, but I can do because this is my own website. I can put a twist on it.
Eric Meyer: Yeah. I noticed very quickly that you had a very informal tone in writing the report. You have these percentile charts, which are very hard data, but then, not just in the aside comments but also in the main text, it reads like talking to a person rather than, 'This is our formal report that must be written in corporate neutral.' And I appreciated that quite a bit. That's how I try to do my stuff when possible. But yeah, every time I hit one of those outlier boxes, I always mapped it to the spiders meme, the average number of spiders: 'Spiders Georg was an outlier and should not have been counted.' So I would hit this 'one website managed to send eight megabytes of comments to the users,' and it's like, 'Comments Georg was an outlier and should not have been included in the data set.' But I think those are fine. I also very much support your decision not to name and shame. Just, 'Here's the biggest number that we got in this category. Somebody had over 150,000 at-rules on a single page,' which almost seems impossible, and yet it's in there. Nobody needs the, 'Let's all go and laugh at a specific site or person.' That's not helpful or useful. But does it tell us anything in particular about the state of the web that somebody had eight megabytes of comments? No. There's always going to be these outliers. Either somebody just had a lot of comments, or somebody forgot to close a comment and so shipped an entire style sheet that was just a comment. Who knows? But just by saying, 'Somebody had eight megabytes of comments,' it makes people stop and think, 'How do you even get there?' Which is what reports like this, I think, should always, to some extent, prompt the reader to do: think through, how did that happen? How did we get to that point? And ideally they do it for multiple things. They look at maybe one of these percentile charts and say, 'How do we even get into that state as a whole industry, and how could I maybe avoid being the outlier in one of these reports?'
Brian Kardell: Yeah, I think that's the distinction that I was looking for there, too, because I also don't think it's a great idea to publicly name and shame people, because a lot of times these are mistakes, and everybody makes mistakes. I think for me, when I look at some of these, I work on a bunch of websites, and when I see some mistake on there, not now, but in my past, my first reaction was, 'Oh God, I hope it's not me.' Mistakes happen. Some of these are really some silly thing that made it out into production, and maybe it's like concatenating more style sheets into one thing than it needs to, or all kinds of things. So I always thought if we're able to monitor these things, that's why I say it would be really interesting to let people know as a service that those things are happening, because I think everybody wants to do the right thing. It's the same with accessibility. Nobody sets out to make their website inaccessible, but sometimes it's difficult. And if we find ways to make it easy, by offering a service or something like that, I think that could be really helpful.
Bart Veneman: Yeah, everyone has different constraints, be it time, bandwidth, hardware or whatever. There's pretty much always a reason why these sorts of things happen. There's a note somewhere in the CSS selection about a media query being nested 37 levels deep. I tried to track that one down, and it was just some WordPress plugin where someone could enter custom CSS, and they forgot to close the media query a bunch of times. So yeah, that's where you get mayhem.
Brian Kardell: So one of the things that I really like in CSS is specificity. I think it's cool that CSS is a rules engine, because we don't have that many rules engines that are just available to programmers. Most of the things that we do are a lot more imperative and less, 'Here's a bunch of rules.' And here we have one for styling, and it's really cool, but the way we arbitrate it is specificity. And I think a lot of people don't really totally understand it. And we also now have lots of tools that let us nest things and put things together. And I've seen, at companies I worked for in the past, some pretty complex selectors. You have one in here that says there's one website shipping selectors up to... I don't even know what this number is. It's huge. The class count is way off the charts. The browser can't even deal with that. It maxes out.
Eric Meyer: Yeah. The way I would read it is 146, 1546, 159. This is actually why originally it was proposed to use dashes to separate the specificity levels rather than commas, but commas caught on, I guess because the comma key is closer to the space bar. Not sure. Anyway, yeah, because otherwise you see people saying things like, in the notes, Bramus Van Damme taught us that specificity only goes up to 255, 255, 255, but it looks like it's 255 million, 255 thousand, 255 because of the comma separation.
Bart Veneman: That's a very good point, actually.
Eric Meyer: So 146, 1546, 159, that means 1,546 class-level selector parts. So either class names or attribute selectors or-
Bart Veneman: Pseudo-classes, I think.
Eric Meyer: Pseudo, yeah. Or more likely a massive combination of them. That's just amazing.
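The triple Eric reads out (IDs, then class-level parts, then type-level parts) can be approximated with a toy counter. This is a deliberately naive sketch, not the real specificity algorithm: it ignores :is(), :where(), :not(), namespaces, and escapes, all of which the Selectors spec handles specially, but it shows how a selector is bucketed into the three levels.

```python
import re

def naive_specificity(selector: str):
    # Illustration only: real engines handle :is()/:where()/:not(),
    # namespaces, and escaped characters, which this ignores.
    ids = len(re.findall(r"#[\w-]+", selector))
    classes = len(re.findall(r"\.[\w-]+", selector))
    attrs = len(re.findall(r"\[[^\]]*\]", selector))
    pseudo_elements = len(re.findall(r"::[\w-]+", selector))
    # Pseudo-classes: a single colon not doubled into a pseudo-element.
    pseudo_classes = len(re.findall(r"(?<!:):(?!:)[\w-]+", selector))
    # Type selectors: bare element names at the start or after a combinator.
    types = len(re.findall(r"(?:^|[\s>+~])([a-zA-Z][\w-]*)", selector))
    return (ids, classes + attrs + pseudo_classes, types + pseudo_elements)

print(naive_specificity("#nav .item.active a:hover"))  # (1, 3, 1)
```

A selector hitting 1,546 in the middle slot means over fifteen hundred class names, attribute selectors, and pseudo-classes chained and combined in one selector.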
Bart Veneman: I looked this one up, and it is ridiculous. I really wanted to share it, but it's hard to redact, because there were a lot of brand-specific class names in there.
Eric Meyer: That's what I was just thinking: put it up as a gist, and just point to it and be like, 'Here is the selector. Here's how we got there.'
Bart Veneman: But it was also really easy to spot that there was some CSS pre-processing going on there. A lot of the anti-patterns, or the outliers, that we see in the CSS selection are CSS pre-processing anti-patterns, like using, what was it? Sass @extend, using that a lot, and that's how you end up with over a thousand selectors to do margin-left: 10 pixels.
Eric Meyer: That's amazing.
Bart Veneman: That is, from all the websites that I've analyzed in the last couple of years, the ones I've seen with my own eyes doing one at a time, not the 100,000 at the same time, one of the most common problems. Harry Roberts wrote about how Sass @extend is a bad idea years ago, but this also shows that not a lot of websites are updated very frequently, as you said, Brian, because this happens on so many websites on so many levels.
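Why @extend balloons selectors can be shown with a toy model. This is not how any real Sass compiler is implemented; it just mimics the observable outcome: every selector that extends a shared placeholder gets appended to that placeholder's selector list, so one small declaration ends up owned by an enormous comma-separated selector.

```python
# Toy model of @extend resolution: each extender is appended to the
# extended rule's selector list, so a single declaration like
# "margin-left: 10px" accumulates one selector per extending component.
placeholder_declarations = "margin-left: 10px;"
extenders = [f".component-{i}" for i in range(1000)]  # hypothetical names

compiled_selector = ",\n".join(extenders)
compiled_rule = f"{compiled_selector} {{ {placeholder_declarations} }}"

print(compiled_rule.count(","))  # 999 commas: one rule, a thousand selectors
```

A mixin (or just repeating the declaration) would duplicate the one-line body instead, which usually compresses far better than a thousand-selector rule and keeps specificity debugging sane.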
Brian Kardell: I'm curious, going back to this theme of mistakes happen and could we let people know: did you reach out to them? Did you make an effort to reach out to whoever it was and let them know that they had this problem? Because when we were doing the markup chapter initially, I kept going back to Rick Viscomi, who was providing the data to me, and saying, 'No, the data still can't be right. Okay, I need URLs. I need actual URLs to see where these are being tracked, because here you have like 10,000 websites having the same exact 72-character typo. It can't be.' And then when I looked at them, it was all a service for car dealerships. And it just so happened that I knew somebody who knew somebody who worked at that company, and so I was able to reach out to them, and they fixed it so fast. Like, that day they fixed it for 10,000 websites. And like I say, people don't want to make mistakes, and I don't know that shaming them is the best way to keep them from making mistakes, but letting them know is for sure useful.
Bart Veneman: It's a very compelling idea. And I've thought about this, because one of the outliers I found was a Dutch website, and I'm Dutch myself. So that sort of felt like, well, I could go down that route and try to help them out, but then it's never as easy as one single email, and you have to go through several support channels probably. And I was putting the article live, so I didn't really feel like doing that. And because it's my... It's a pretty big side project, but it is a side project. So yeah, I really wanted my energy to go elsewhere.
Brian Kardell: Yeah. No fault in that.
Eric Meyer: It would be so nice to have a standardized way to communicate those kinds of things; to be able to say, 'Okay, for this website, here's where you send in your report of your markup is borked, or your JavaScript is failing.'
Bart Veneman: Yeah, and CSS is really a problem for us in that way, because it is so forgiving. You make a spelling mistake or a syntax error, and the engine just ignores it and tries to move on to the next part that works. So it's so much easier to let CSS mistakes slip by, whereas JavaScript will yell at you for a thousand reasons, and your e-commerce website is broken. No one complains when your CSS is broken unless the whole website is unstyled, which does happen, but rarely.
Brian Kardell: Or if you can't click the buy button for some reason.
Eric Meyer: Which is usually JavaScript, but not always. Sometimes it is the CSS.
Brian Kardell: Can I lobby you for some future data, maybe? One of the pet projects of mine that's gone on for a decade plus is markup. And I'm very curious what custom elements people are using, and looking at where people are styling tags that contain a dash would be an interesting data point for me.
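The data point Brian is asking for can be approximated crudely: valid custom-element names must contain a hyphen, so type selectors with a dash in them are candidate custom elements. A hedged sketch, with a made-up stylesheet, and a regex that a real analyzer would replace with proper selector parsing (dashed keywords in other positions could produce false positives):

```python
import re

# Hypothetical stylesheet mixing custom-element selectors with
# ordinary class and type selectors.
css = """
my-widget { display: block; }
fancy-tabs::part(tab) { color: teal; }
.some-class { color: red; }
div > other-thing { margin: 0; }
"""

# Naive: a dash-containing name at the start of a selector, or right
# after a combinator or comma, is treated as a candidate custom element.
candidates = set(
    re.findall(r"(?:^|[\s>+~,])([a-z][a-z0-9]*-[a-z0-9-]+)", css)
)
print(sorted(candidates))  # ['fancy-tabs', 'my-widget', 'other-thing']
```

Run over a large corpus, the frequency table of these names would show which custom elements authors actually style, which is exactly the signal that's hard to get from homepage markup alone.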
Bart Veneman: Oh, that's a very good one, because right after the CSS selection went out, I added extra analysis to the CSS analyzer to look at :host and, what is it? ::slotted, to include those for next year. But that's an interesting point. Yeah, definitely going to do that.
Brian Kardell: And then there is a list of... There are several lists. So I have a super list of anything that anybody could reasonably have considered a standard tag at one time. There isn't complete agreement on that. There's no spec that says these are the standard tags, and they have changed over time. So anyway, there's something like 140, 150 elements. But when we check in the HTTP Archive, we have to stop counting after like 10,000, because there are just so many. And lots and lots and lots of them are unrecognized, dasherized elements.
Bart Veneman: Right.
Brian Kardell: And I can share, actually, my article on that, and maybe just read it and see if it sparks anything in you for other interesting-
Bart Veneman: Oh yeah, please do. Yeah.
Brian Kardell: -ways to look at that data.
Eric Meyer: And I feel like CSS is uniquely vulnerable to runaway outliers because of the heavy use of pre-processors. You might just not think about how far down you've nested something, and then the pre-processor expands it out into this massive thing. We have native nested CSS now. It's not necessarily universal, but it's supported in, I think, all the browsers. And so we might start seeing this more natively, but I wonder sometimes if the heavy reliance on pre-processors has inflated the CSS that we ship to users more than would've been the case otherwise.
Bart Veneman: Yeah, I think that's the case. For many of the anti-patterns that I spot, pre-processors, or the incorrect use of a pre-processor, seem to be the cause. Whether it's @extend, sometimes it's the use of mixins, the same block of declarations and maybe selectors coming back every time. I hope that I can include the use of CSS mixins and functions, like native functions, in the report for next year, but I think it's still being specced, maybe prototyped, but I'm not sure.
Brian Kardell: A follow-up to some of the observations that we talked about, about like the biases and how things are constructed, and also this thing about markup and custom elements. And I wonder if analyzing CSS from the homepage is actually probably more valuable in a way than analyzing the actual markup on the homepage, because frequently we'll send CSS for things way deeper in the site, like even things that you have to be logged in for. So we could know more about the kind of markup that you're using deeper in the site by looking at the CSS, probably, than just looking at the markup for the homepage. Is your data public? Is this an open source project? Is there a way for me to plug into this data or...
Bart Veneman: So the analyzer that analyzes the CSS is public. The scraper, the script that downloads the CSS from a given URL, that one's public; it's listed in the article. But the actual data isn't public, because I messed up, and it's quite a lot. Pretty early in the process of scraping 100,000 sites, I noticed that my hard drive was filling up quite quickly. So I started ignoring some parts of the data, which I deeply, deeply regret now, because I don't have analysis on what colors are used most often and what the most used font size is and stuff like that. That's a very, very big mistake on my end. So the data itself isn't public. I've tried to link up the source data to all the graphs wherever useful. The scraper's public, the analyzer's public, so you could probably reproduce the report.
Brian Kardell: I'm very sure we could get this added as a custom metric to the HTTP Archive. I know some of the people there. Barry can probably help get it added.
Bart Veneman: Well, as a fun fact, this morning I started discussions with Barry Pollard, because I've seen that the HTTP Archive uses Rework CSS to parse the CSS. And one of my goals... I was listed as the author for the Web Almanac chapter for last year, 2025, but it didn't go through, because I thought we didn't have enough data analyzed to write a proper chapter. So I got in contact with Barry, and he gave me some really helpful pointers as to how the HTTP Archive analyzes CSS. So last night I set out to figure out what parser is being used to analyze the CSS, and it's Rework CSS with some changes in it. I'm still trying to figure out what exactly, but from what I can tell, it does not handle the more modern CSS stuff, like @layer and container queries, really well. So I've raised the question of how useful the data in the HTTP Archive is at this moment, if we miss out on a lot of the details.
Brian Kardell: Yeah. And it's an evolving thing. So before 2018, the HTTP Archive just stored strings. It just stored the raw string of what was downloaded. And if you wanted to know what elements were used, you would do a regex search on strings, which is expensive and, as you know, not really how parsers work at all. So you would get results that kind of vaguely look like that, but they're not actually elements, or they're not actually rules, because they're commented out or whatever. I think in 2018, I helped add a custom metric that plugs into the actual browser's parser, so it actually uses the parsed result to collect that data. And then in 2021, I think... What year did Lea do it? Do you remember when Lea and Rachel did it? But yeah-
Eric Meyer: That was 2020.
Brian Kardell: -that year she added the proper parser and custom metrics. There's definitely room: if there's a better parser that we should use, we totally can do it. If we need to add new custom metrics, totally can do it. So yeah, I'd encourage you to keep talking to Barry.
Bart Veneman: Yeah, I still want to help out with writing the Web Almanac CSS chapter, because this whole thing, writing a CSS selection article for Project Wallace, started out with me wanting this to exist. And that started off with me volunteering for the Web Almanac, because I want this data to be available. I want this to exist in the world. And it's been so long since we had our last edition. So that's where I started off. And I'm now trying to start discussions about how we are going to get it to work for this year, because I still want the Web Almanac to happen. I'd rather have the Web Almanac than something specifically for Project Wallace, because that's fun, and I can do funky stuff there with my own tone of voice, which is fun in its own way, but I think the Web Almanac has more authority and means more to more people.
Brian Kardell: It's nice to have a sort of central place where we get together and do it with a certain level of rigor and verifiable methodology year after year. But I just want to say, thank you for doing this regardless of where you did it. It's an amazing amount of work. All of it is volunteer as well. All of the work for Almanac is volunteer or paid, like Igalia paid for us to contribute when we did it. They paid for our time. And I assume Google pays its employees who contribute and so on. But for a lot, a lot of people, this is like volunteer hours. And it's, as you can probably attest, it's not a small amount of hours. So I just want to say thank you for the free labor and the research, sharing it. Very, very cool.
Eric Meyer: 100%, yeah.
Brian Kardell: Is there anything else that you would like from Project Wallace? Are you looking for other people to be involved to help you? Are you looking for ways to create some kind of service out of it? Are you looking for ways to... What's your vision for Project Wallace at this point?
Bart Veneman: The main thing for Project Wallace is for me to scratch my own itch, and that itch usually changes every day. So my goal, to be really honest, is to keep it a side project so that I can work on it whenever I have appetite and energy. That also means that it's mostly me, and I've never tried to look for collaborators to work together on things, because that would mean I'd have to work on Wallace at times when I don't really want to. There are times in the year when I'm very low on energy, and I'd rather just go outside and sit in the sun and not be responsible for someone else. So it's a side project. And my goal is to look at CSS from as many different angles as I can. Just one example: last year, I figured out how to collect CSS code coverage from the existing Playwright tests that I have for projectwallace.com. My whole website is covered with automated headless browser testing, so why not collect CSS coverage data while you're doing that, and then do an analysis on that? I found several issues with that, but I can now look at my CSS and see exactly which lines of CSS are not covered in any of my tests. I was able to delete some of my CSS because of that as well, which is a thing I don't think many web developers get to do. So yeah, I wanted to know if it's possible, and it is, I know now. So yay, I guess. Yeah, that's what I want to do for Project Wallace. Just look at it from different angles, try different things. Some things are a horrendous failure and never make it to production. Some things are fun, like the CSS selection, and people rave about it, and that's very cool.
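The "which lines of my CSS does no test touch" step Bart describes boils down to range arithmetic: Chromium-style coverage tooling reports which byte ranges of a stylesheet were used, and the unused CSS is the complement. The stylesheet and ranges below are invented, and the report format is only assumed to be start/end pairs, but the complement computation is the same either way.

```python
# Sketch: given a stylesheet and the "used" byte ranges a Chromium-style
# CSS coverage report yields (format assumed here), compute the
# uncovered spans: the CSS no test ever exercised.
stylesheet = ".a{color:red}.b{color:blue}.c{color:green}"
used_ranges = [(0, 13), (27, 42)]  # hypothetical: .a and .c were used

def uncovered(total_length, ranges):
    covered = sorted(ranges)
    gaps, cursor = [], 0
    for start, end in covered:
        if start > cursor:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < total_length:
        gaps.append((cursor, total_length))
    return gaps

for start, end in uncovered(len(stylesheet), used_ranges):
    print(stylesheet[start:end])  # -> .b{color:blue}
```

The usual caveat applies: a rule uncovered by tests isn't necessarily dead (it may only apply in states the tests never reach), so this flags candidates for deletion rather than proving them safe.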
Brian Kardell: Yeah. One of the things that I wonder about is if there's some missing shared architecture for some of this stuff; infrastructure. I think the search engines have this Common Crawl that they use, that they share, with all of the metadata, so that everybody's not crawling the same content. They have it all pre-indexed and all that kind of stuff. I don't know about that very deeply, but there seems to be a lot of people who have some itch in this area, and for all of us to write something that scrapes sites and fills up our hard drives and solves all the same problems over and over again seems like... I would feel a lot less itchy, I think, if we could offload some of the harder parts onto some common infrastructure. And I don't know what that looks like; it's just a thought. I don't know that it has anything to do with Project Wallace, other than: would you agree there might be an easier way than you having to actually scrape all these sites?
Bart Veneman: I would be so happy to not have to let an old MacBook run for days and days on end, rebooting it from time to time, hoping the hard drive won't crash, just to do a fun little article on CSS analysis. It would be really great if that weren't a worry.
Brian Kardell: Eric, anything else you want to...
Eric Meyer: No, I think the... Well, we could talk for hours, but I think before we go, Bart, where can people find you online and support what you do?
Bart Veneman: I am veneman.dev on Bluesky, and I'm on projectwallace.com. You don't actually see me there, but do give it a visit.
Brian Kardell: Would you mind spelling your handle for the non-Dutch folks?
Brian Kardell: There you go. Thanks, Bart, for joining us. This was a really enjoyable conversation. I'm glad we got to connect, and keep scratching your itches and sharing when you can, because it's great stuff.