Dynamic Datalist: Autocomplete from an API :: Aaron Gustafson
Great minds think alike! I have a very similar HTML web component on the front page of The Session called input-autosuggest.
Eric Meyer and Brian Kardell chat with Jay Hoffmann and Jeremy Keith about Shadow DOM’s backstory and long origins
I enjoyed this chat, and it wasn’t just about Shadow DOM; it was about the history of chasing the dream of encapsulation on the web.
Safari, Chrome, and Edge all allow you to install websites as though they’re apps.
On mobile Safari, this is done with the “Add to home screen” option that’s buried deep in the “share” menu, making it all but useless.
On the desktop, this is “Add to dock” in Safari, or “Install” in Chrome or Edge.
Firefox doesn’t offer this functionality, which is a shame. Firefox is my browser of choice, but they decided a while back to completely abandon progressive web apps (though they might reverse that decision soon).
Anyway, being able to install websites as apps is fantastic! I’ve got a number of these “apps” in my dock: Mastodon, Bluesky, Instagram, The Session, Google Calendar, Google Meet. They all behave just like native apps. I can’t even tell which browser I used to initially install them.
If you’d like to prompt users to install your website as an app, there’s not much you can do other than show them how to do it. But that might be about to change…
I’ve been eagerly watching the proposal for a Web Install API. This would allow authors to put a button on a page that, when clicked, would trigger the installation process (the user would still need to confirm this, of course).
Right now it’s a JavaScript API called navigator.install, but there’s talk of having a declarative version too. Personally, I think this would be an ideal job for an invoker command. Making a whole new install element seems ludicrously over-engineered to me when button invoketarget="share" is right there.
Microsoft recently announced that they’d be testing the JavaScript API in an origin trial. I immediately signed up The Session for the trial. Then I updated the site to output the appropriate HTTP header.
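If you’re signing up too: the token you get from the origin trial needs to be sent with every page. Something like this, where the value is a placeholder for your actual token:
Origin-Trial: your-token-goes-here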
You still need to mess around in the browser configs to test this locally. Go to edge://flags or chrome://flags/ and search for ‘Web App Installation API’, enable it and restart.
I’m now using this API on the homepage of The Session. Unsurprisingly, I’ve wrapped up the functionality into an HTML web component that I call button-install.
Here’s the code. You use it like this:
<button-install>
  <button>Install the app</button>
</button-install>
Use whatever text you like inside the button.
I wasn’t sure whether to keep the button element in the regular DOM or generate it in the Shadow DOM of the custom element. Seeing as the button requires JavaScript to do anything, the Shadow DOM option would make sense. As Tess put it, Shadow DOM is for hiding your shame—the bits of your interface that depend on JavaScript.
In the end I decided to stick with a regular button element within the custom element, but I take steps to remove it when it’s not necessary.
There’s a potential issue in having an element that could self-destruct if the browser doesn’t cut the mustard. There might be a flash of seeing the button before it gets removed. That could even cause a nasty layout shift.
So far I haven’t seen this problem myself but I should probably use something like Scott’s CSS in reverse: fade in the button with a little delay (during which time the button might end up getting removed anyway).
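Something like this would probably do it (a sketch of that approach rather than code I’m shipping; the timing values are guesses):
button-install button {
  animation: button-install-fade-in 0.2s 1s backwards;
}
/* The button stays invisible during the one-second delay; if the
   JavaScript removes it in that window, it never appears at all */
@keyframes button-install-fade-in {
  from {
    opacity: 0;
  }
}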
My connectedCallback method starts by finding the button nested in the custom element:
class ButtonInstall extends HTMLElement {
  connectedCallback () {
    this.button = this.querySelector('button');
    …
  }
}
customElements.define('button-install', ButtonInstall);
If the navigator.install method doesn’t exist, remove the button.
if (!navigator.install) {
  this.button.remove();
  return;
}
If the current display-mode is standalone, then the site has already been installed, so remove the button.
if (window.matchMedia('(display-mode: standalone)').matches) {
  this.button.remove();
  return;
}
As an extra measure, I could also use the display-mode media query in CSS to hide the button:
@media (display-mode: standalone) {
  button-install button {
    display: none;
  }
}
If the button has survived these tests, I can wire it up to the navigator.install method:
this.button.addEventListener('click', async (ev) => {
  await navigator.install();
});
That’s all I’m doing for now. I’m not doing any try/catch stuff to handle all the permutations of what might happen next. I just hand it over to the browser from there.
Feel free to use this code if you want, adjusting it as needed. If your manifest file says display: fullscreen, you’ll need to change the test in the JavaScript accordingly.
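That’s a one-line change; something like:
if (window.matchMedia('(display-mode: fullscreen)').matches) {
  this.button.remove();
  return;
}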
Oh, and make sure your site already has a manifest file that has an id field in it. That’s required for navigator.install to work.
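For reference, a minimal manifest with that field might look like this (all the values here are placeholders):
{
  "name": "My Site",
  "id": "/",
  "start_url": "/",
  "display": "standalone"
}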
An excellent example of an HTML web component from Eric:
Extend HTML to do things automatically!
He layers on the functionality and styling, considering potential gotchas at every stage. This is resilient web design in action.
If you’re a front-end developer and you don’t read Chris Ferdinandi’s blog, you should change that right now. Add that RSS feed to your feed reader of choice!
Lately he’s been posting about some of the thinking behind his Kelp UI library. That includes some great nuggets of wisdom around HTML web components.
First of all, he pointed out that web components don’t need a constructor(). This was news to me. I thought custom elements had to include this incantation at the start:
constructor () {
  super();
}
But it turns out that if all you’re doing is calling super(), you can omit the whole thing and it’ll be done for you.
I immediately refactored all the web components I’m using on The Session. While I was at it, I implemented Chris’s bulletproof web component loading.
Now technically, I don’t need to do this. I’m linking to my JavaScript at the bottom of every page so I know it’s going to load after the HTML. But I don’t like having that assumption baked into my code.
For any of my custom elements that reference other elements in the DOM—using, say, document.querySelector()—I updated the connectedCallback() method to use Chris’s technique.
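If I’ve understood the technique correctly, it boils down to checking document.readyState before touching the rest of the DOM. A sketch, where init() is a stand-in for whatever setup the component does:
connectedCallback () {
  // If the document has already been parsed, it's safe to query the DOM now
  if (document.readyState !== 'loading') {
    this.init();
    return;
  }
  // Otherwise, wait until the HTML has finished loading
  document.addEventListener('DOMContentLoaded', () => this.init());
}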
It turned out that there weren’t that many of my custom elements that were doing that. Because HTML web components are wrapped around existing markup, the contents of the custom element are usually what matters (rather than other elements on the same page).
I guess that’s another unexpected benefit to HTML web components. Because they’ve already got their own bit of DOM inside them, you don’t need to worry about when you load your markup and when you load your JavaScript.
And no faffing about with the dark arts of the Shadow DOM either.
A UI library for people who love HTML, powered by modern CSS and Web Components.
I’m obviously biased, but I like the sound of what Chris is doing to create a library of HTML web components.
dialog, details, datalist, progress, optgroup, and more:
If this article helps just a single developer avoid an unnecessary Javascript dependency, I’ll be happy. Native HTML can handle plenty of features that people typically jump straight to JS for (or otherwise over-complicate).
This is a very nice HTML web component by Miriam, progressively enhancing an ordered list of audio elements.
Every UI control you roll yourself is a liability. You have to design it, test it, ship it, document it, debug it, maintain it — the list goes on.
It makes you wonder why we insist on rolling (or styling) our own common UI controls so often. Perhaps we’d be better off asking: What are the fewest amount of components we have to build to deliver value to our users?
It’s great to see the evolution of HTML happening in response to real use-cases—the turbo-charging of the select element just gets better and better!
The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.
Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.
My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.
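The server-side code for The Session isn’t JavaScript, but the idea sketches easily enough: one geocode function, with interchangeable providers behind it (the provider names and URLs here are made up):
// Every provider takes an address and resolves to latitude and longitude
const providers = {
  providerA: async (address) => {
    const response = await fetch('https://geocoder-a.example.com/search?q=' + encodeURIComponent(address));
    const result = await response.json();
    return { latitude: result.lat, longitude: result.lon };
  },
  providerB: async (address) => {
    // …same shape, different API and response format
  }
};
// Switching providers is a one-line change
const geocode = providers.providerA;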
Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.
I’m using Leaflet for embedding maps. It’s a great little library built on top of Open Street Map data.
A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!
For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.
But there’s an issue.
I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!
That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.
But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.
If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.
But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.
Here’s what’s happening…
The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.
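The markup follows the same pattern as button-install: working HTML first, enhancement second. Something along these lines, though the attribute names are my guess at how the coordinates get passed in:
<embed-map latitude="53.3498" longitude="-6.2603">
  <img src="/path/to/static-map.png" alt="Map of the venue">
</embed-map>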
Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:
caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
  if (responseFromCache) {
    // load maplibre-gl.js
  }
});
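That commented-out loading step could be as simple as injecting a script element; a sketch, where drawMap() stands in for whatever method actually draws the map:
const script = document.createElement('script');
script.src = '/path/to/maplibre-gl.js';
// Once the script has executed, the maplibregl global is available
script.addEventListener('load', () => this.drawMap());
document.head.append(script);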
Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
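That check is plain feature detection on the global that the library exposes:
if (window.maplibregl) {
  // use MapLibre GL with OpenFreeMap's vector tiles
} else {
  // fall back to Leaflet's bitmap tiles
}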
But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.
During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
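That’s the usual install-event pattern; a minimal sketch (the cache name and file paths are placeholders):
const staticCache = 'static-v1';
const staticFiles = [
  '/path/to/styles.css',
  '/path/to/abcjs.js',
  '/path/to/leaflet.js',
  '/path/to/maplibre-gl.js'
];
addEventListener('install', (event) => {
  // Fetch and store every file in the list before this worker takes over
  event.waitUntil(
    caches.open(staticCache)
    .then( cache => cache.addAll(staticFiles))
  );
});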
Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.
That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.
By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive when it comes to performance; not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.
I hold this truth to be self-evident: the larger the abstraction layer a web developer uses on top of web standards, the shorter the shelf life of their codebase becomes, and the more they will feel the churn.
So what are the advantages of the Custom Elements API if you’re not going to use the Shadow DOM alongside it?
- Obvious Markup
- Instantiation is More Consistent
- They’re Progressive Enhancement Friendly
Straightforward smart sensible advice that you can apply to any feature on a website.
Trys describes exactly the situation where you really do need to use the Shadow DOM in a web component—as opposed to just sticking to HTML web components—and that’s when the component is going to be distributed and you have no idea where:
This component needed to be incredibly portable, looking great on any third-party website, in any position, at any viewport, with any amount of content. It had to be a “hyper-responsive” component.
“And so what we did is we started looking at, internally, all of the places where we’re using web technology — so all of our internal web UIs — and realized that they were just really unacceptably slow.”
Why were they slow? The answer: React.
“We realized that our performance, especially on low-end machines, was really terrible — and that was because we had adopted this React framework, and we had used React in probably one of the worst ways possible.”
React has become a bloated carcass of false promises, misleading claims, and unending layers of backwards compatibility – the wrong kind of backwards compatibility, as they still occasionally break your fucking code when updating.
Pretty much anything else is a better tool for pretty much any web development task.
This is an interesting thought from Scott: using Shadow DOM in HTML web components but only as a way of providing sort-of user-agent styles:
providing some default, low-specificity styles for our slotted light-dom HTML elements while allowing them to be easily overridden.
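Here’s my sketch of what that looks like in practice (my example, not Scott’s code): the shadow root contains a slot plus some defaults for whatever gets slotted into it, and because page styles win over ::slotted() rules in the cascade, those defaults are trivial to override:
class FancyButton extends HTMLElement {
  connectedCallback () {
    // attachShadow can only be called once, but connectedCallback can fire again
    if (this.shadowRoot) return;
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>
        /* default, easily-overridden styles for the slotted light-DOM button */
        ::slotted(button) {
          padding: 0.5em 1em;
          border-radius: 0.5em;
        }
      </style>
      <slot></slot>
    `;
  }
}
customElements.define('fancy-button', FancyButton);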
Three great examples of HTML web components:
What I hope is that you now have the same sort of epiphany that I had when reading Jeremy Keith’s post: HTML Web Components are an HTML-first feature.