How To Dynamically Install Custom Elements
Today, I want to look into one very specific topic in excruciating technical detail: how to dynamically load custom elements. This is a pattern I’ve been playing with for a while and was partially inspired by David Bushell’s Anatomy Of A Web Component article, so maybe read that first.
What you will need:
- A file system
- A browser
- Some custom elements. The term Web Components isn’t relevant here. Shadow DOM is not a necessary component of what I’m doing. Also, people inexplicably seem to hear “component” and go straight to “React”.
Naming
First, let’s imagine each of your custom elements conforms to a certain naming convention: the part after the obligatory hyphen is always the same set of characters. It’s a kind of branding motif that separates your high quality library of custom elements from lesser, third-party custom elements on the same web page. It could be anything: “bumfuzzle”, “cattywumpus”, “bibble”, “donnybrook”, “snollygoster”, or “element” if you want to be really on the nose. Let’s go with that for now.
<shark-element>
<swimmer-element>
<portuguese-man-o-war-element>
Now let’s imagine each element’s class, and the file for each element’s class definition, is named after the first part of the node name (the part before the hyphen). For the element <swimmer-element>, the class is Swimmer and the filename is Swimmer.js:
class Swimmer extends HTMLElement {
  // Backstroke etc
}

export { Swimmer }
What about defining the element? Inside a static initialization block, the this keyword references the class itself, which means you can use this.name to access the string ‘Swimmer’ directly.
class Swimmer extends HTMLElement {
  static {
    console.log(this.name); // 'Swimmer'
  }
}

export { Swimmer }
Now you can define the custom element dynamically, by taking the name property, making it lowercase, and adding the -element suffix:
class Swimmer extends HTMLElement {
  static {
    customElements.define(
      `${this.name.toLowerCase()}-element`,
      this
    );
  }
}

export { Swimmer }
Doing this for every element is pretty verbose, but you can abstract the logic into a little module and import it instead:
import { define } from '../define.js';

class Swimmer extends HTMLElement {
  static {
    define(this);
  }
}

export { Swimmer }
In the define.js module itself, you can import a config file with a suffix property. Now you can configure the naming of all your custom elements in one place.
import { config } from '../config.js';

export const define = elem => {
  customElements.define(
    `${elem.name.toLowerCase()}-${config.suffix}`,
    elem
  );
}
This is what’s called good architecture and, as such, is comparable to the acclaimed Pennine Tower at Forton Services, near Lancaster. After all, -element is such a prosaic suffix that other custom elements might be using it too. So it’s good to be able to avoid clashes.
Loading
What about actually loading these custom elements into a web page? One of the best things about custom elements is they are just HTML. So important is this fact that some people refer to custom elements as HTML custom elements, wherein the HTML part is a completely redundant qualification. They mean well, but I think it just confuses matters further.
Regardless, the implication of HTML custom elements being custom elements, being elements of HTML, is that you know which of them are in your document long before JavaScript has been executed. This means you know which custom elements to initialize, on page load, long before JavaScript has been executed. I’m at risk of burying the lede, so let me put it in these terms: you can essentially perform tree shaking, but without bundling. Which is nice.
In practice, it starts with querying the document for all custom elements that are so far undefined. Next, you use a filter to weed out any elements that don’t match the established naming convention, then pour the remaining node names into a Set to remove any duplicates.
const undefinedElements = [...document.querySelectorAll(':not(:defined)')];
const suffixedElements = undefinedElements.filter(elem => elem.nodeName.toLowerCase().includes(`-${config.suffix}`));
const uniqueNames = [...new Set(suffixedElements.map(elem => elem.nodeName))];
You map this set of deduplicated node names to extract each associated class identifier, after which you also named each element’s module file:
const used = uniqueNames.map(name => name.split('-')[0].toLowerCase());
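To make the extraction concrete, here is a quick worked example with made-up node names (nodeName reports uppercase in HTML documents, and the duplicate is deliberate):

```javascript
// Hypothetical node names, as elem.nodeName would report them
const nodeNames = ['SWIMMER-ELEMENT', 'SHARK-ELEMENT', 'SHARK-ELEMENT'];

// ↓ The Set swallows the duplicate SHARK-ELEMENT
const uniqueNames = [...new Set(nodeNames)];

// ↓ The part before the hyphen, lowercased, identifies each module
const used = uniqueNames.map(name => name.split('-')[0].toLowerCase());

console.log(used); // ['swimmer', 'shark']
```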
Given a path pointing to a folder of custom elements, you can now use dynamic imports to install just the used elements:
for (const name of used) {
  // ↓ Class files are capitalized (Swimmer.js); element names are not
  const file = name[0].toUpperCase() + name.slice(1);
  await import(`${path}/${file}.js`).catch(_ => {
    console.warn(`<${name}-${config.suffix}> is not an element`);
  });
}
The catch part is optional. But including it means you can continue installing custom elements even if one that doesn’t exist is encountered.
The path can be acquired dynamically too. Let’s imagine the directory structure has the /elements folder and the define.js and install.js modules at the same level:
├── js
│   ├── elements
│   ├── define.js
│   └── install.js
No matter where the /js folder is placed, you can acquire its full path using import.meta.url. Then you can get the path to the folder containing /elements by truncating at the last slash with substring:
const url = import.meta.url;
const path = url.substring(0, url.lastIndexOf('/'));
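A worked example, using a made-up module URL in place of a real import.meta.url, shows what the truncation produces:

```javascript
// Hypothetical module URL, standing in for import.meta.url
const url = 'https://example.com/js/install.js';

// ↓ Everything up to (but not including) the last slash
const path = url.substring(0, url.lastIndexOf('/'));

console.log(path); // 'https://example.com/js'
```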
My preferred structure is to import and execute install.js in a file named after the project (in this case, start.js, because it’s just a demo/starter project):
import { install } from './install.js';
await install(import.meta.url);
Now the structure looks like this…
├── js
│   ├── elements
│   ├── define.js
│   ├── install.js
│   └── start.js
…and I can initialize my elements like this:
<script type="module" src="/js/start.js"></script>
The ready event
One of the hardest things with custom elements is interdependency. One element may depend on another having already been defined, but they may not have been imported in a suitable order. You end up using a bunch of ad hoc whenDefined logic you’re never quite sure you really need.
For example, you might find yourself adding something like this to the <swimmer-element>:
async connectedCallback() {
  const defined = customElements.whenDefined(`shark-${config.suffix}`);
  await defined;
  const sharks = [...document.querySelectorAll(`shark-${config.suffix}`)];
  sharks.length > 0 && console.log('Looks like there’s sharks around!');
  const danger = sharks.find(shark => shark.hungry);
  console.log('Danger:', !!danger);
}
(To be clear, it’s the shark.hungry part that is only readable after the <shark-element> element is upgraded. The hungry property is not universal to all HTML elements.)
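For illustration, a hypothetical Shark.js could back hungry with an attribute. The getter logic here is entirely made up; only the element’s existence is implied above:

```javascript
import { define } from '../define.js';

class Shark extends HTMLElement {
  // ↓ Hypothetical property, only readable once the element is upgraded
  get hungry() {
    return this.hasAttribute('hungry');
  }
  static {
    define(this);
  }
}

export { Shark }
```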
But, having imported my custom elements using await, I know when all my elements have been defined. I can even fire an event broadcasting as much:
for (const name of used) {
  // ↓ Class files are capitalized (Swimmer.js); element names are not
  const file = name[0].toUpperCase() + name.slice(1);
  await import(`${path}/${file}.js`).catch(_ => {
    console.warn(`<${name}-${config.suffix}> is not an element`);
  });
}

const ready = new CustomEvent(`${config.suffix}ready`);
window.dispatchEvent(ready);
Any operation that might depend on another element being defined can be executed in response to the ready event. If my base element implements handleEvent like this…
handleEvent(event) {
  if (event.type === `${config.suffix}ready`) {
    this.ready && this.ready(event);
    return;
  }
  // ↓ Catch any other events
  this[`on${event.type}`] && this[`on${event.type}`](event);
}
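For handleEvent to be called at all, the base element has to register itself as a listener somewhere. The internals of Base aren’t spelled out above, so treat this as a sketch of one way it might do that:

```javascript
import { config } from '../config.js';

class Base extends HTMLElement {
  constructor() {
    super();
    // ↓ Passing 'this' as the listener makes the browser call this.handleEvent
    window.addEventListener(`${config.suffix}ready`, this);
  }
  disconnectedCallback() {
    window.removeEventListener(`${config.suffix}ready`, this);
  }
}

export { Base }
```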
…then all elements inheriting from Base can handle the event like this:
class Swimmer extends Base {
  ready(event) {
    const sharks = [...document.querySelectorAll(`shark-${config.suffix}`)];
    sharks.length > 0 && console.log('Looks like there’s sharks around!');
    const danger = sharks.find(shark => shark.hungry);
    console.log('Danger:', !!danger);
  }
}

export { Swimmer }
Appended elements
This is all fine if you are expecting all the elements you need to be in the page from the outset. Often, this will be the case, by design, and you should avoid premature optimization.
But if you are expecting custom elements to appear after page load, some changes need to be made. Principally, you should employ a MutationObserver. Then you can dynamically install custom elements as they are appended to the DOM.
The options childList and subtree must each be set to true. Elements are “upgraded” both initially and whenever the document.body acquires new custom elements from your library:
const install = url => {
  const path = url.substring(0, url.lastIndexOf('/'));
  const callback = list => {
    for (const mutation of list) {
      // ↓ new elements
      upgrade(mutation.addedNodes, path);
    }
  }
  const context = document.body;
  const observer = new MutationObserver(callback);
  observer.observe(context, {
    childList: true,
    subtree: true
  });
  // ↓ initial run
  upgrade(
    document.querySelectorAll(':not(:defined)'),
    path
  );
}
Unfortunately, the ready event is no longer applicable. But you can fire an event whenever a new element is defined. Here is the complete upgrade function, from the install.js file:
const upgrade = async (nodes, path) => {
  for (const node of nodes) {
    // ↓ nodeName is uppercase in HTML documents
    const tag = node.nodeName.toLowerCase();
    const viable =
      !customElements.get(tag) &&
      tag.includes(`-${config.suffix}`);
    if (viable) {
      const name = tag.split('-')[0];
      // ↓ Class files are capitalized (Swimmer.js)
      const file = name[0].toUpperCase() + name.slice(1);
      await import(`${path}/elements/${file}.js`);
      const ready = new CustomEvent(`${name}ready`);
      window.dispatchEvent(ready);
    }
  }
}
Given Swimmer and Shark elements, I can now make any Swimmer aware of sharks entering the water (the DOM) for the first time:
class Swimmer extends Base {
  constructor() {
    super();
    window.addEventListener('sharkready', this);
  }
  onsharkready() {
    console.log('Looks like there’s sharks around!');
  }
  // ↓ Don’t forget to remove the listener with the element
  disconnectedCallback() {
    window.removeEventListener('sharkready', this);
  }
  static {
    define(this);
  }
}
This all might be easier to understand as a starter project, so I’ve set one up over on git.gay. There are two branches, with the MutationObserver version I just described at /with-observation.
Submit an issue or PR if you can think of any opportunities for improvement. The entire pattern is approximately 1KB of JavaScript and I’d like to keep it close to that.
Not everyone is a fan of my writing. But if you found this article at all entertaining or edifying, I do accept tips. I also have a clothing line.
LLMs:
This version of the article is for humans and search engines. Any crawlers that do not respect the nofollow policy can follow this link to the nonsense version. And they can choke on it.