With Google now shipping both Shadow DOM v1 and Custom Elements v1 in Chrome,
and Apple shipping Shadow DOM v1 in Safari, we’ve been upgrading the
Basic Web Components
library from the original v0 specs to v1. Here’s what we learned, in case
you’re facing a similar upgrade of your own components, or just want to
understand some ramifications of the v1 changes.
Upgrading components to Shadow DOM v1: Easy!
Google developer Hayato Ito has a great summary of
What’s New in Shadow DOM v1.
Adapting our components to accommodate most of the changes on that list was
trivial, often just a matter of Find and Replace. The v0 features that were
dropped were ones we had never used (multiple shadow roots, shadow-piercing
CSS combinators) or had avoided (<content select="...">), so
their absence in v1 did not present a problem.
One v1 feature that we had heavily lobbied for was the addition of the
slotchange event. The ability of an element to detect changes in its own
distributed content is a critical addition to the spec. We are happy to
replace our old, hacky method of detecting content changes with the new,
official slotchange event. This allows us to easily write components that meet
the
Content Changes
requirement on the Gold Standard checklist for web components.
Upgrading components to Custom Elements v1: Some challenges
The changes from Custom Elements v0 to v1 were more challenging, although some
were easy:
Replacing document.registerElement() with
customElements.define(). No issues.
Drop support for is="" syntax. Ever since Apple announced that
they would not support the syntax, we’ve avoided it. As a workaround, a
while back we created a general wrapper component called
WrappedStandardElement.
Tweaks in lifecycle callback timing. There were spirited spec debates over
the exact points in time when a component’s lifecycle callbacks should be
invoked, but we didn’t notice any practical differences between v0 and v1.
attributeChangedCallback automatically invoked at constructor
time. This was a welcome change that allowed us to simplify our
AttributeMarshalling
mixin, which automatically translates attribute changes into property
updates.
One small obstacle we hit is that a v1 component now needs to declare which
attributes it wants to monitor for changes. This performance optimization in
Custom Elements v1 requires that your component declare an
observedAttributes array to avoid getting
attributeChangedCallback invocations for attributes you don’t
care about. That sounds simple, but in our mixin-based approach to writing
components, it was actually a bit of a pain. Each mixin not only had to
declare the attributes it cared about, but also had to cooperatively construct
the final observedAttributes array. We eventually hit on the idea
of having the aforementioned AttributeMarshalling mixin programmatically
inspect the component class for all custom properties, and automatically
generate an appropriate array of attributes for
observedAttributes. That seems to be working fine.
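For illustration, here’s a minimal sketch of that idea; the actual
AttributeMarshalling mixin differs in its details.
class AttributeMarshallingExample extends HTMLElement {
  // Generate observedAttributes by finding every property setter defined
  // along the class' prototype chain.
  static get observedAttributes() {
    const attributes = [];
    let prototype = this.prototype;
    while (prototype !== HTMLElement.prototype) {
      for (const name of Object.getOwnPropertyNames(prototype)) {
        const descriptor = Object.getOwnPropertyDescriptor(prototype, name);
        if (descriptor && descriptor.set) {
          // Map a camelCase property to its hyphenated attribute name.
          attributes.push(name.replace(/([A-Z])/g, '-$1').toLowerCase());
        }
      }
      prototype = Object.getPrototypeOf(prototype);
    }
    return attributes;
  }
}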
A more problematic change in v1 is that component initialization is now done
in a class constructor instead of a createdCallback. The change
itself is a desirable one, but we expected it would be tricky, and it was. The
biggest problem we’ve encountered is that the list of
Requirements for custom element constructors
prohibits a new component from setting attributes in its constructor. The
intention, as we understand it, is to mirror standard element behavior.
Calling createElement('div') returns a clean div with no
attributes, so calling createElement('my-custom-element') should
return a clean element too, right?
That sounds good but turns out to be limiting. Custom elements can’t do
everything that native elements can, and sometimes the only way to achieve a
desired result is for a custom element to add an attribute to itself:
A component wants to define default ARIA attributes for accessibility
purposes. For example, our
ListBox
component needs to add role="listbox" to itself. That helps a
screen reader interpret the component correctly, without the person using
the component having to know about or understand ARIA. That
role attribute is a critical part of a ListBox element, and
needs to be there by default.
A component wants to reflect its state as CSS classes so that component
users can provide state-dependent styling. For example, our
CollapsiblePanel
component wants to let designers style its open and closed appearances by
adding CSS classes that reflect the open/closed state. This component
reflects the current state of its closed property via CSS
classes. It’s reasonable that a component would want to set the initial
state of that closed property in the constructor. But setting
the default value of that property in the constructor will trigger an update
to the CSS classes, which is not permitted in Custom Elements v1.
In these cases, it doesn’t seem like it would be hard to just set the
attributes in the connectedCallback instead. In practice, it introduces
complications because a web app author who instantiates a component would
like to be able to immediately make changes to it before adding it to the
document. In the first scenario above, the author might want to adjust the
role attribute:
class ListBox extends HTMLElement {
  connectedCallback() {
    this.setAttribute('role', 'listbox');
  }
}

// Assume ListBox has been registered as basic-list-box.
let listBox = document.createElement('basic-list-box');
listBox.setAttribute('role', 'tablist'); // Set a custom role
document.body.appendChild(listBox);      // connectedCallback will overwrite role!
Because ListBox can’t apply a default role attribute at
constructor time, its connectedCallback will have to take care to see if a
role has already been set on the component before applying a
default value of role="listbox". It’s easy for a developer to
forget such a check. The result will likely be components that belatedly apply
default attributes, stomping on top of attributes that were applied after the
constructor and before the component is added to the document.
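A defensive version of the connectedCallback above might look like this (a
sketch, not the actual ListBox code):
class ListBox extends HTMLElement {
  connectedCallback() {
    // Respect any role the component user has already applied.
    if (!this.hasAttribute('role')) {
      this.setAttribute('role', 'listbox');
    }
  }
}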
Another problem comes up in the second scenario above. The creator of the
component would like to be able to write a property getter/setter that
reflects its state as CSS classes:
let closedSymbol = Symbol('closed');

class CollapsiblePanel extends HTMLElement {
  constructor() {
    super();
    // Set defaults
    this.closed = true; // Updates the "class" attribute, so will throw!
  }
  get closed() {
    return this[closedSymbol];
  }
  set closed(value) {
    this[closedSymbol] = value;
    // Reflect the state as CSS classes (this modifies the class attribute).
    this.classList.toggle('closed', value);
    this.classList.toggle('opened', !value);
  }
}
Since the above code won’t work, the developer has to take care to defer all
attribute writes (including manipulations of the classList, which updates the
class attribute) to the connectedCallback. To make
that tolerable, we ended up creating
safeAttributes, a set of helper functions that can defer premature calls to
setAttribute() and toggleClass() to the
connectedCallback.
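To give a feel for the approach, here’s a rough sketch of the deferral idea.
The function names here are illustrative; the actual safeAttributes API
differs.
// Queue attribute writes made before the element is in the document, then
// replay them in connectedCallback. (A simplification of safeAttributes.)
const pendingSymbol = Symbol('pendingAttributes');

function safeSetAttribute(element, name, value) {
  if (element.parentNode) {
    // Already in a tree; safe to set the attribute directly.
    element.setAttribute(name, value);
  } else {
    // Possibly still in the constructor; queue the write for later.
    element[pendingSymbol] = element[pendingSymbol] || [];
    element[pendingSymbol].push([name, value]);
  }
}

// Call this from connectedCallback to apply any queued writes.
function writePendingAttributes(element) {
  (element[pendingSymbol] || []).forEach(([name, value]) =>
    element.setAttribute(name, value)
  );
  element[pendingSymbol] = null;
}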
That’s working for now, but it feels like the v1 restrictions on the
constructor are overly limiting. The intention is to ensure that the component
user gets a clean element from createElement() — but if the
resulting element is just going to add attributes to itself in the
connectedCallback, is that element really clean? As soon as the
attribute-less element is added to the document, it will suddenly grow new
attributes. In our opinion, that feels even more surprising than having
createElement() return an element with default attributes.
The current state of Shadow DOM and Custom Elements v1
Overall, we’re excited that we’ve got our components and mixins working in
production Chrome 54, which just shipped last week with support for both
Shadow DOM v1 and Custom Elements v1. The Chrome implementation of the specs
feels solid, and we haven’t hit any bugs.
Shadow DOM v1 is also coming together in Safari, including in Mobile Safari.
At the moment, it feels more like a beta than a production feature — we’ve hit
a number of critical bugs in WebKit that prevent most of our components from
working. Apple’s working through those bugs, and we hope to see WebKit’s
support for Shadow DOM improve soon.
In the meantime, Google has been doing the thankless, herculean task of
upgrading the Shadow DOM and Custom Elements polyfills to the v1 specs. That’s
great to see, because without an answer for older browsers, web components
won’t see wide adoption. At the moment, the v1 polyfills also feel like a
beta, but they’re coming along quickly. As soon as the polyfills are stable
enough, we’re looking forward to making a full release of Basic Web Components
based on the v1 specs.
We’ve rewritten the
component.kitchen
backend server to rip out a popular templating language and replace it with
plain JavaScript functions. Recent language improvements in ES2015 have, in
our opinion, made JavaScript a sufficiently capable general-purpose language
that we’ve dropped use of a special-purpose template language. As we began a
rewrite of our site, we were inspired by our recent work
using plain JavaScript functions to create web components
and decided to apply the same philosophy to our backend as well.
We serve up our site using Node and
Express. A popular feature of Express is
that it supports pluggable template languages, called “view engines”. Until
now, we’ve used
Dust.js
as our template language. This has worked okay, and we’ve done it that way for
so long that we’ve rarely questioned the need for a special language to solve
this one problem. But using a template language has some downsides:
The template language (e.g., Dust) is different from the JavaScript the rest
of the Node/Express backend is written in. We write in JavaScript every day,
but only rarely in the template language. This means we’re constantly forced
to look up even simple things in the template language documentation.
The syntax for most template languages is ugly and inconsistent. A template
language’s parser needs a reliable way to identify template directives
you’ve placed inside your content, so there’s a bias towards syntax that’s
very unlikely to appear in normal text. Most template languages end up with
lots of dollar signs, percent signs, curly braces, etc. Every one of these
languages makes different choices, and you can be left trying to remember
when you need one curly brace and when you need two.
Performing work both outside the template (to prepare the data before
pouring it into the template) and within the template (using conditional
template directives, for example) can create an uneasy relationship between
both pieces of code. Template languages have control-flow constructs like
traditional languages, but they can be cumbersome to work with. For example,
a looping construct typically expects to iterate over a simple array. If you
want to, say, filter the array, you need to preprocess your data into a form
directly useful in the template language. This often results in splitting
logic across multiple files.
Why use a special-purpose template language at all? Why not JavaScript? Now
that ES2015 has template literals, we thought we’d try using those as the
basis for a plain JavaScript solution.
Step 1: Replace each template file with a plain JavaScript function
We create a file for each kind of page we serve up. Each file exports a single
function that accepts an Express request object (which contains the HTTP
headers, URL parameters, etc.) and returns a text string containing the
response to send to the client.
// SamplePage.js
module.exports = request =>
  `<!DOCTYPE html>
  <html>
    <head>
      <title>Hello, world!</title>
    </head>
    <body>
      You’re looking at a page hosted on ${request.params.hostname}.
    </body>
  </html>`;
This is a pure function — it has no side effects. It returns a string using a
template literal, splicing in data using the ${...} syntax. As
with all template language syntax, it is ugly. But at least this particular
ugly syntax is now standard JavaScript. You can use the same ugly
syntax throughout your code, instead of different ugly syntaxes for different
parts of your code. JavaScript FTW!
The render function can do whatever you want. If you need to do some
computation — filter an array, etc. — you can do that in plain JavaScript,
then splice the results into the string you return. While you could embed
conditionals in the template literal directly, we prefer to avoid that, as it
quickly gets ugly.
If you want to have a page use a more general template, you can easily do that
too:
// Define a template. It’s just a function that returns a string.
let template = (request, data) =>
  `<!DOCTYPE html>
  <html>
    <head>
      <title>${data.title}</title>
    </head>
    <body>
      ${data.content}
    </body>
  </html>`;

// Create a page that uses the template.
module.exports = request => template(request, {
  title: `Hello, world!`,
  content: `You’re looking at a page hosted on ${request.params.hostname}.`
});
Since a render function often needs to do asynchronous work, we allow a render
function to return either a string or a Promise for a string.
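For example, a page that needs to fetch data before rendering can return a
promise for the eventual string. (Here, getUserName is a hypothetical helper
that returns a Promise.)
// HypotheticalUserPage.js
module.exports = request =>
  getUserName(request.params.id)
    .then(name =>
      `<!DOCTYPE html>
      <html>
        <body>Hello, ${name}!</body>
      </html>`);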
Step 2: Map Express routes to render functions
We create a simple mapping of routes to the functions that handle those
routes. Since a render function’s file exports only that function, we can
reference it with a require() statement:
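// routes.js (the paths and file names here are illustrative)
module.exports = {
  '/': require('./pages/IndexPage'),
  '/about': require('./pages/AboutPage'),
  '/tutorial': require('./pages/TutorialPage')
};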
Step 3: When a request comes in, invoke the render function
We wire up our Express routes such that, when a request comes in matching a
given route, the corresponding render function is invoked. The result of that
function is resolved and returned as the request’s response.
// Map routes to render functions.
for (let path in routes) {
  let renderFunction = routes[path];
  app.get(path, (request, response) => {
    // Render the request as a string or promise for a string.
    let result = renderFunction(request);
    // If the result's not already a promise, cast it to a promise.
    Promise.resolve(result)
      .then(content => {
        // Return the rendered content as the response.
        response.set('Content-Type', inferContentType(content));
        response.send(content);
      });
  });
}
Step 4: Set the outgoing Content-Type
Nearly all our routes respond with HTML, but we have a small number of routes
that return XML, JSON, or plain text. We could have a render function return
multiple values, including an indication of the desired Content-Type. But our
simple site serves up such a small number of content types that we can
reliably infer the content type from the start of the response string.
// Given textual content to return, infer its Content-Type.
function inferContentType(content) {
  if (content.startsWith('<!DOCTYPE html>')) {
    return 'text/html';
  } else if (content.startsWith('<?xml')) {
    return 'text/xml';
  } else if (content.startsWith('{')) {
    return 'application/json';
  } else {
    return 'text/plain';
  }
}
That’s it. We end up with a small set of JavaScript files, one for each kind
of page we serve up. Each file defines a single render function, and each
function is typically quite simple. In our opinion, our code has gotten easier
to read and reason about. It’s also closer to the metal — we have ripped out a
substantial, mysterious template language layer — so there are fewer
surprises, and we don’t have to keep looking up template language tricks in
the documentation or on StackOverflow.
Although domain-specific template languages like Dust look very efficient,
over time we accumulated a non-trivial amount of JavaScript to get everything
into a form Dust could process. Now that we’re just using JavaScript
everywhere, we have much less page-generation code than we did
before, and the new code is completely consistent with the rest of our code
base.
What if you want to create a web component that extends the behavior of a
standard HTML element like a link? An early draft of the Custom Elements
specification allowed you to do this with a special syntax, but the fate of
that syntax is in doubt. We’ve been trying to create custom variations of
standard elements without that support, and wanted to share our
progress. Our results are mixed: more positive than we expected, but with some
downsides.
Why would you want to extend a standard HTML element?
Perhaps there’s a standard element that does almost everything you want,
but you’d like to give it custom properties, methods, or behavior.
Interactive elements like links, buttons, and various forms of input are
common examples.
Suppose you want a custom anchor element that knows when it’s pointing to the
page the user is currently looking at. Such a situation often comes up in
navigation elements like site headers and app toolbars. On our own site, for
example, we have a header with some links at the top to our
Tutorial and
About Us pages. If the user’s
currently on the About Us page, we want to highlight the About Us link so the
user can confirm their location.
While such highlighting is easy enough to arrange through link styling and
dynamically choosing CSS classes in page templates, it seems weird that a link
can’t just handle this highlighting itself. The link should be able to just
combine the information it already has access to — its own destination, and
the address of the current page — and determine for itself whether to apply
highlighting.
We recently released a simple component called
basic-current-anchor
that does this. We did this partly because it’s a modestly useful component,
and also because it’s a reasonable testing ground for ways to extend the
behavior of a standard element like an anchor.
What’s the best way to implement a component that extends a standard element?
Option 1: Recreating a standard element from scratch (Bad idea)
Creating an anchor element completely from scratch turns out to be ferociously
complicated. You’d think you could just apply some styling to make an element
blue and underlined, define an href attribute/property, and then
open the indicated location when the user clicks. But there’s far more to an
anchor element than that. A sample of the problems you’ll face:
The result of clicking the link depends on which modifier keys the user is
pressing when they click. They may want to open the link in a new tab or
window, and the key they usually press to accomplish that varies by browser
and operating system.
You’ll need to do work to handle the keyboard.
Standard links can change their color if the user has visited the
destination page. That knowledge of browser history is not available to you
through a DOM API, so your custom anchor element won’t know which color to
display.
When you hover over a standard <a> element, the browser
generally shows the link destination in a status bar. But there is
no way to set the status bar text in JavaScript. That’s probably a
good thing! It would be annoying for sites to change the status bar for
nefarious purposes. But even with a solid justification for doing so, your
custom anchor element has no way to show text in the status bar.
Right-clicking or long-tapping a standard link produces a context menu that
includes link-specific commands like “Copy Address”. Again, this is a
browser feature to which you have no access in JavaScript, so your custom
anchor element can’t offer these commands.
A standard anchor element has a number of accessibility features that are
used by people with screen readers and other assistive technologies. While you
can work around the problem to some extent with ARIA, there are numerous
gaps in implementing accessibility
completely from scratch.
Given this (likely incomplete) litany of problems, we view this option as a
non-starter, and would strongly advise others to not go down this road. It’s a
terrible, terrible idea.
Option 2: Hope/wait for is="" syntax to be supported
The original Custom Elements spec called for an extends option
for document.registerElement() to indicate the tag of a standard
element you wanted to extend:
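// The v0 registration syntax looked roughly like this ("current-anchor" is
// an illustrative name):
document.registerElement('current-anchor', {
  prototype: Object.create(HTMLAnchorElement.prototype),
  extends: 'a'
});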
Having done that, you could then create your custom variant of the standard
element by using the standard tag, and then adding an is
attribute indicating the name of your element.
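For example (using the illustrative name from above):
<!-- The standard tag, extended via the is="" attribute: -->
<a is="current-anchor" href="/about">About Us</a>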
However, at a W3C committee meeting in January, Apple indicated that they felt
like this feature would likely generate many subtle problems. They do not want
such problems to jeopardize the success of Custom Elements v1.0, and have
argued that it should be excluded from the Custom Elements specification for
now. Google and others would like to see this feature remain. But without
unanimous support, the feature’s future is unclear, and we’re reluctant to
depend on it.
Option 3: Use the Shadow DOM polyfill just for elements with
is attributes
The
web component polyfills
already support the is="" syntax, so in theory you could keep
using the polyfill even in browsers where native Shadow DOM is available. But
that feels weird for a couple of reasons. First, the polyfill won’t load if
native Shadow DOM is available, so you’d have to subvert that behavior. You’d
have to keep just enough of the polyfill alive to handle custom element
instances using the is="" syntax. That doesn’t sound like fun.
And, second, if is="" isn’t officially endorsed by all the
browsers, its future is somewhat uncertain, so it seems risky to
invest in it.
You could also try to manually reproduce what the Shadow DOM polyfill is
doing, but that seems like an even worse answer. Your approach won’t be
standard even in name, and so you’ll create a burden for people who want to
use your component.
Option 4: Wrap a standard element
Since we think it’s inadvisable to recreate standard elements from scratch
(option 1 above), and are nervous about depending on a standard syntax in the
near future (options 2 and 3), we want to explore other options under our
control. The most straightforward alternative seems to be wrapping a standard
element. The general idea is to create a custom element that exposes the same
API as a standard element, but delegates all the work to a real instance of a
standard element sitting inside the custom element’s Shadow DOM subtree. This
sort of works, but with some important caveats.
The process of wrapping a standard element is consistent enough across all
standard element types that we can try to find a general solution. We’ve made
our initial implementation available in the latest v0.7.3 release of
Basic Web Components, in the form of a new base class called
WrappedStandardElement. This component serves both as a base class for wrapped standard elements
and as a class factory that generates such wrappers.
We’ve used this facility to refactor an existing component called
basic-autosize-textarea
(which wraps a standard textarea), and deliver a new component,
basic-current-anchor. The latter wraps a standard anchor element to deliver the feature discussed
above: the anchor marks itself as current if it points to the current page.
You can view a simple
demo.
The definition of basic-current-anchor wraps a standard anchor like this:
// Wrap a standard anchor element.
class CurrentAnchor extends WrappedStandardElement.wrap('a') {

  // Override the href property so we can do work when it changes.
  get href() {
    // We don't do any custom work here, but need to provide a getter so that
    // the setter below doesn't obscure the base getter.
    return super.href;
  }
  set href(value) {
    super.href = value;
    /* Do custom work here */
  }

}

document.registerElement('basic-current-anchor', CurrentAnchor);
The call WrappedStandardElement.wrap('a') returns a new class that
does several things:
The class’ createdCallback creates a Shadow DOM subtree that
contains an instance of the standard element being wrapped. A runtime
instance of <basic-current-anchor> will look like this:
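<basic-current-anchor>
  #shadow-root
    <a>
      <slot></slot>
    </a>
</basic-current-anchor>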
Note that the inner <a> includes a
<slot> element. This will render any content inside the
<basic-current-anchor> inside the standard
<a> element, which is what we want.
All getter/setter properties in the API of the wrapped standard class are
defined on the outer wrapper class and forwarded to the inner
<a> element. Here, CurrentAnchor will end up exposing
HTMLAnchorElement properties like href and forwarding those to
the inner anchor. Such forwarded properties can be overridden, as shown
above, to augment the standard behavior with custom behavior. Our
CurrentAnchor class overrides href above so that, if the
href is changed at runtime, the link updates its own visual
appearance.
Certain events defined by standard elements will be re-raised across the
Shadow DOM boundary. The Shadow DOM spec defines a list of
events that will not bubble up across a Shadow DOM boundary. For example, if you wrap a standard <textarea>, the
change event on the textarea will not bubble up
outside the custom element wrapper. That’s an issue for components like
basic-autosize-textarea. Since Shadow DOM normally swallows change events inside a shadow
subtree, someone using basic-autosize-textarea wouldn’t be able to listen to
change events coming from the inner textarea. To fix that,
WrappedStandardElement automatically wires up event listeners for such
events on the inner standard element. When those events happen, the custom
element will re-raise those events in the light DOM world. This lets users
of basic-autosize-textarea listen to change events as expected.
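The wiring is conceptually simple. Here’s a sketch of the idea (not the
actual WrappedStandardElement code):
// Listen on the inner standard element, then dispatch a copy of each event
// from the outer custom element so it's visible in the light DOM.
function reraiseEvent(outerElement, innerElement, eventName) {
  innerElement.addEventListener(eventName, () => {
    const event = new CustomEvent(eventName, { bubbles: true });
    outerElement.dispatchEvent(event);
  });
}
// E.g., in the wrapper's createdCallback:
// reraiseEvent(this, innerTextarea, 'change');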
Because this approach uses a real instance of the standard element in
question, many aspects of the standard element’s behavior work as normal for
free. For example, an instance of <basic-current-anchor>
will exhibit all the appearance and behavior of a standard
<a> described above. That includes mouse behavior, status
bar behavior, keyboard behavior, accessibility behavior, etc. That’s a huge
relief!
But this approach has one significant limitation: styling. Because our custom
element isn’t called “a”, CSS rules that apply to a elements will
no longer work. Link pseudo classes like :visited won’t work
either. Worse, because there’s essentially no meaningful standard styling
solution for web components that works across the polyfilled browsers, it’s
not clear how to provide a good styling solution.
Things will become a little easier when CSS Variables are implemented
everywhere, but even that is a sub-optimal solution to styling a wrapped
standard element. For one thing, you would need to separately define new CSS
variables for every attribute someone might want to style. That
includes inventing variables to replace standard CSS pseudo-classes. Next,
someone using your wrapped element would need to duplicate all the styling
rules to use both the standard attributes and your custom CSS variables. That
mess gets worse with each wrapped standard element added to a project, since
each will likely define different (or, worse, conflicting) variable names.
For the time being, we’re trying a different solution, which is to define the
interesting CSS attributes on a wrapped element using the CSS
inherit value. E.g., a <basic-current-anchor>
element currently has internal styling for the inner standard anchor that
effectively does this:
<style>
  a {
    color: inherit;
    text-decoration: inherit;
  }
</style>
What that means is that the inner anchor will have no color or text
decoration (underline) by default. Instead, it will pick up whatever
color or text-decoration is applied to the outer
custom element. That’s fairly close to what we want, but still not ideal. If
someone neglects to specify a color, for example, they’ll end up
with links that are (most likely) black instead of the expected blue.
In practice, we may be able to live with that. The typical use case for our
basic-current-anchor component, for example, is in navigation elements like
toolbars, where web applications nearly always provide custom link styling
that overrides the standard colors anyway. That said, styling represents a
significant complication in this wrapping approach, and should be carefully
considered before trying it.
Wrapping up
It would obviously be preferable for the Custom Elements specification to
address the extension of standard elements when that becomes possible. But
we’re pragmatic, and would rather see Custom Elements v1.0 ship without
is="" support if that means it comes sooner — as long as the
problem is eventually solved correctly. Until then, wrapping a standard
element may provide a stopgap solution to create a custom element extending
standard behavior. It’s not ideal, but may be sufficient for common cases.
This is a complex area, and we could easily be overlooking things in our
analysis. If you have thoughts on this topic, or know of an issue not
discussed here, please give us a heads up!
An interesting point of backward compatibility came up as we were recently
porting our own Component Kitchen
site to
plain JavaScript web components. The main goal of the port was to be able to write our own site in plain
ES6, with less abstraction between us and the platform. We’ve also reaped some
other benefits: our site is now simpler to build, and much faster to load on
older polyfilled browsers like Apple Safari and Internet Explorer.
But the interesting bit was that we could use web components as a transitional
strategy for our old code. The new components and the old components were
written in a completely different way, but could nevertheless coexist on the
page during the transition. All web components connect to the outside page
through the same means: DOM attributes, DOM children, DOM events, as well as
JavaScript properties and methods. So we could leave an old component in place
while we changed the outer page, or vice versa, without having to rewrite
everything at once.
When we recently appeared on a
Web Platform Podcast episode, we spoke with panelist
Leon Revill, who raised this
point of web components as a backward compatibility strategy. We think this is
a seriously underappreciated benefit of writing and using web components.
The web development industry is a highly chaotic, substantially fractured, and
quickly evolving marketplace of competing approaches to writing apps. Even if
you have the luxury of developing in an approach you think is absolutely
perfect for 2016, the chances are probably very low that you will still want
to write your app that way in 2019. If you don’t believe that, ask yourself:
which framework from three years ago would you prefer to use today if
you were starting a new project?
If you’re working on something of lasting value, in three years’ time you’ll
still be forced to reckon with some of your old code from 2016. Unless you’re
lavishly funded or insulated from the market, you probably won’t be able to
afford to always move all your old code to whatever you decide is the latest
and greatest way to write software. You’ll be forced to maintain code written
in different eras, and that can be very tricky.
A web component provides a useful encapsulation boundary that can help keep
old front-end user interface code usefully running directly alongside new
code. While a variety of contemporary web frameworks offer proprietary
component models, they can only offer backward compatibility to the extent
that you’re willing to keep writing your whole app in that framework
indefinitely. By virtue of being a web standard, the web components you write
today should retain their value for a very long time.
Bonus: During our port, we were able to bring our popular
Web Components Tutorial
up to date. If you know people who would be interested in learning about web
components, just point them at the newly-updated tutorial.
As discussed in this blog over the past few months, we’ve been plotting a
strategy for creating web components using a library of plain JavaScript
mixins instead of a monolithic component framework. We’ve just published a new
0.7 release of the
basic-web-components
project that represents a transition to this mixin strategy. So far, this
approach appears to be working well, and meeting our expectations.
What’s changed?
We’ve begun rewriting all our components in ES6.
So far, we’ve rewritten the
basic-autosize-textarea,
basic-carousel, and
basic-list-box
components in ES6. We transpile the ES6 source to ES5 using Babel.
Developers wanting to incorporate the components into ES6 applications can
consume the original source, while devs working in ES5 can still easily
incorporate these components into their applications.
We have restructured the way we distribute these components to use npm 3
instead of Bower.
The primary basic-web-components repository is now a monorepo: a single
repository used to manage multiple packages separately registered with npm.
This is much, much easier for us to maintain than our prior arrangement, in
which Bower had forced us to maintain a constellation of separate
repositories for our Bower packages. Using npm for web component
distribution will likely bring its own challenges, but we’re confident the
much larger npm community will address those issues over time.
Because we are just using JavaScript now, component files can be included
with regular script tags instead of HTML Imports.
That erases any concerns about cross-browser support for HTML Imports, and
generally simplifies including these web components in an existing
application build process. For example, instead of requiring use of a
specialized tool like Vulcanize, developers can incorporate Basic Web
Components into their applications using more popular tools like Browserify
and WebPack.
We are now offering a library of web component JavaScript mixins.
See this
blog post
for some background on that strategy. Mixins
take the form of functions
that can be applied to any component class without requiring a common
runtime or framework. These mixins are collected in a new package,
basic-component-mixins. See that package for details, including documentation for our initial set of
25 web component mixins. We believe this arrangement will make it much
easier for people to adopt key features of the Basic Web Components in their
own components.
We can no longer use Polymer’s Shady DOM to emulate Shadow DOM on older
browsers, so anyone targeting browsers other than Google Chrome must include
the full webcomponents.js polyfill. However, all four browser vendors are
racing to implement native Shadow DOM v1, and it seems likely we will see
all of them deliver native support later this year. While using the full
polyfill incurs a performance penalty today, we are very happy to be writing
code that is squarely aimed at the future.
We are left for the time being without a simple way to let developers style
our components. Polymer provided a plausible styling solution, although it’s
based on CSS Variables (not in all browsers yet), and relies on proprietary
extensions to CSS Variables (non-standard; unlikely to appear in any browser
soon). So styling remains an issue for us — but then again, it’s currently
an unsolved problem for web components generally.
Overall, for our project, we think the advantages of writing in plain
JavaScript outweigh any disadvantages. We’re very happy to be able to write
highly functional web components without having to use a monolithic framework
and an accompanying required runtime. And so far, our mixin strategy is
letting us maintain an elegant factoring of our component code, while avoiding
the limitations of a single-inheritance class hierarchy.
That said, we think frameworks are certainly an appropriate tool for many
teams. For certain projects, we enjoy working in frameworks such as Polymer
and React. One of the tremendous advantages of web components is that they’re
a standard. That lets us write our components in the way that makes
most sense for us, while still allowing anyone to incorporate those components
into applications written in other ways. In particular, Polymer remains the
most active web component framework, so interop with Polymer is a critical
feature for all our components. As a simple demonstration, we’ve posted a
carousel-with-tabs
example showing use of our basic-carousel component with Polymer’s paper-tabs
component.
We were intrigued by this approach. As we’ve blogged about before, we’re
really only interested in coding approaches that can be shared with other
people. This functional approach would allow us to lower the barrier to
adopting a given mixin. As much as we like the Composable class discussed in
that earlier post, using mixins that way requires adoption of that class. It’s
not quite a framework — it’s more of a kernel for a framework — but it’s still
a bit of shared library code that must be included to use that style of mixin.
Here’s an example of a mixin class using that Composable approach. This
creates a subclass of HTMLElement that incorporates a TemplateStamping mixin.
That mixin will take care of stamping a `template` property into a Shadow DOM
shadow tree in the element’s `createdCallback`.
import Composable from 'Composable';
import TemplateStamping from 'TemplateStamping';

class MyElement extends Composable.compose(HTMLElement, TemplateStamping) {
  get template() {
    return `Hello, world.`;
  }
}
That’s pretty clean — but notice that we had to `import` two things: the
Composable helper class, and the TemplateStamping mixin class.
The functional approach implements the mixin as a function that applies the
desired functionality. The mixin is self-applying, so we don’t need a helper
like Composable above. The example becomes:
import TemplateStamping from 'TemplateStamping';

class MyElement extends TemplateStamping(HTMLElement) {
  get template() {
    return `Hello, world.`;
  }
}
That’s even cleaner. At this point, we don’t even really have a framework per
se. Instead we have a convention for building components from mixin functions.
The nice thing about that is that such a mixin could conceivably be used with
custom elements created by other frameworks. Interoperability isn’t
guaranteed, but the possibility exists.
We like this so much that we’ve changed our nascent
core-component-mixins
project to use mixin functions. Because there’s so little involved in adopting
this sort of mixin, there’s a greater chance it will find use, even among
projects that write (or claim to write)
web components in plain JavaScript. Again, that should accelerate adoption.
The most significant cost we
discussed in making this change
is that a mixin author needs to write their mixin methods and properties to
allow for composition with a base class. The Composable class had provided
automatic composition of methods and properties along the prototype chain
according to a set of rules. In a mixin function, that work needs to be done
manually by the mixin author.
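For example, here’s a sketch of a mixin function written to follow those
rules (the names are illustrative, not code from the library):
const SampleMixin = (base) => class extends base {
  createdCallback() {
    // Compose with the base class: invoke its method if it defines one.
    if (super.createdCallback) { super.createdCallback(); }
    // ...mixin-specific setup goes here...
  }
};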
We’ve identified a series of
composition rules
that capture our thinking on how best to write a mixin function that can
safely be applied to arbitrary base classes. The rules are straightforward, but
do need to be learned and applied. That said, only the authors of a
mixin need to understand those, and that’s a relatively small set of people.
Most people will just need to know how to use a mixin — something
that’s now as easy as calling a function.