December 7, 2015
We think it’s generally necessary to use
some sort of framework to develop web components, but that framework may not have to be monolithic in nature. Instead, the
framework might be built entirely as mixins on top of a kernel that enables
mixin composition. Rather than invoking a framework’s class constructor, one would
simply compose the desired mixins together to create an instantiable web
component.
We’ve been prototyping a completely mixin-oriented approach to component
development in a project called
core-component-mixins.
-
This relies on the
Composable
facility as the kernel to compose mixins in JavaScript. An alternative mixin
strategy could be used as long it retained the same general degree of
expressiveness. It would be ideal if multiple web component frameworks could
agree on a mixin architecture so that we could share some of these mixins.
We’d be happy to use a different mixin strategy in order to
collaborate with more people.
-
The repo’s /src folder shows a core set of component mixins for
template stamping, basic attribute marshaling, and Polymer-style automatic
node finding. For example, the TemplateStamping mixin will add a
createdCallback that creates a shadow root and clones into it the value of
the component’s template property:
import TemplateStamping from 'core-component-mixins/src/TemplateStamping';

class MyElement extends Composable.compose(HTMLElement, TemplateStamping) {
  get template() {
    return `<style> :host { font-weight: bold; } </style> Hello, world.`;
  }
}
Use of the TemplateStamping mixin takes care of details like shimming any
<style>
elements found in the template when running under
the Shadow DOM polyfill.
-
That /src folder contains a sample ReactiveElement base class that pre-mixes
the three core mixins mentioned above to create a reasonable starting point
for custom elements. The above example becomes:
import ReactiveElement from 'core-component-mixins/src/ReactiveElement';

class MyElement extends ReactiveElement {
  get template() {
    return `<style> :host { font-weight: bold; } </style> Hello, world.`;
  }
}
Use of the ReactiveElement class is entirely optional — you could just
as easily create your own base class using the same mixins.
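For instance, a leaner base class might mix in only template stamping and attribute marshaling, leaving out automatic node finding. The sketch below is only illustrative: the import paths and the AttributeMarshalling mixin name are hypothetical (only TemplateStamping is named above), and it assumes Composable.compose accepts more than one mixin at a time.
import Composable from 'core-component-mixins/src/Composable';
import TemplateStamping from 'core-component-mixins/src/TemplateStamping';
// Hypothetical module name for the attribute marshaling mixin described above.
import AttributeMarshalling from 'core-component-mixins/src/AttributeMarshalling';

// A custom base class that opts in to just two of the core mixins.
class LeanElement extends Composable.compose(
    HTMLElement, TemplateStamping, AttributeMarshalling) {}

class MyElement extends LeanElement {
  get template() {
    return `Hello, world.`;
  }
}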
-
The /demo folder shows some examples of components created with this
mixin-based framework, such as a
Hello World
example.
-
A demo of a
hypothetical X-Tag implementation
shows how a framework can use mixins to create its own custom element base
class. In that demo, the hypothetical framework adds support for a mixin
that provides X-Tag’s “events” sugar, but leaves out the
mixin for automatic node finding. The point is that frameworks and apps can
opt in to the component features they want.
-
In this approach, web component class definition is generally kept separate
from custom element registration. That is, there’s no required entry
point like Polymer() to both create the class and register it in a single
step. We personally feel that keeping those two steps separate makes each
step clearer, but that’s a matter of taste. If you feel that combining
those steps makes your code easier to write or read, it’s easy enough
to accomplish that. The X-Tag demo shows how a framework could define an
entry point for class definition and registration.
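To make the separation concrete, here is a minimal sketch. The registration call uses the document.registerElement API of the time, in the same way as the Composable example later on this page; the combined defineElement helper is purely illustrative and not part of core-component-mixins.
import ReactiveElement from 'core-component-mixins/src/ReactiveElement';

// Step 1: define the class. Nothing here knows about registration.
class MyElement extends ReactiveElement {
  get template() {
    return `Hello, world.`;
  }
}

// Step 2: register the class as a separate, explicit step.
document.registerElement('my-element', MyElement);

// If you prefer a single combined step, a trivial helper will do.
// (Illustrative only; not part of core-component-mixins.)
function defineElement(tag, elementClass) {
  return document.registerElement(tag, elementClass);
}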
-
The mixin architecture explicitly supports custom rules for composing
specific properties. That’s intended for cases like the
“properties” key in Polymer behaviors, where object values
supplied by multiple mixins need to get merged together. The Composable
kernel supports that, although none of the demos currently show off that
feature.
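We won't show Composable's actual API for defining such a rule here, but schematically, a merge rule for a “properties”-style key behaves something like the sketch below (the function name and shape are ours, not Composable's).
// Schematic sketch only: a composition rule that merges object values for a
// key like "properties" instead of letting the mixin's value overwrite.
function mergeRule(baseValue, mixinValue) {
  if (baseValue && mixinValue &&
      typeof baseValue === 'object' && typeof mixinValue === 'object') {
    return Object.assign({}, baseValue, mixinValue);
  }
  return mixinValue;
}

// Two mixins each contribute property definitions; the rule keeps both.
const merged = mergeRule(
  { selected: { type: Boolean } },  // from one mixin
  { disabled: { type: Boolean } }   // from another
);
// merged now defines both selected and disabled.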
Taken collectively, these core component mixins form the beginnings of a
deliberately loose but useful framework for web component development.
They’re still rudimentary, but they already provide much of
what we need from a layer like polymer-micro. We think this strategy confers a number of advantages:
-
This is closer to the metal.
The only new thing here is the concept of a mixin. Everything else is part
of the web platform. There’s no special class constructor required to
perform black-box operations on a component. There’s nothing new to
master (like React’s JSX or Polymer’s <dom-module>)
that’s not already in the platform. There’s no sugaring provided out
of the box — and that’s a good thing.
-
Each mixin can focus on doing a single task really well.
For example, the TemplateStamping mixin just creates a shadow root and
stamps a template into it. The only real work it’s doing is to
normalize the use of native vs polyfilled Shadow DOM — that is,
the work you’d need to do anyway to work on all browsers today. Given
the boilerplate nature of that task, it’s reasonable to share that
code with a mixin like this. Once all the browsers support Shadow DOM v1
natively, this mixin could be simplified, or dropped entirely, without
needing to rearchitect everything. (A rough sketch of what such a mixin does
appears after this list.)
-
You can stay as close to/far from the platform as you want.
Most user interface frameworks take you far away from the platform in one
giant step. Here you have fine-grained control over each step you take
toward a higher level of abstraction. Each mixin takes you a tiny bit
further away from the platform, and in exchange for the efficiency boost the
mixin provides, you have to accept some trade-offs: performance, mystery,
etc. That’s an unavoidable price for sharing code, but at least this
way you can decide how much you want to pay.
-
There’s a potential for cross-framework mixins.
If multiple web component frameworks could agree on a mixin architecture,
there’d at least be a chance we could share good solutions to common
higher-level problems at the sub-component level. When Component Kitchen
creates a mixin to support, say, accessibility in a list-like web component,
it would be great if we could make that available to people developing
list-like web components in other frameworks. While any framework could in
theory adopt some other framework’s mixin format, mixins are usually
intimately tied to a framework. Explicitly deciding to factor mixins into a
separable concept may make cross-framework mixins more feasible.
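Returning to the TemplateStamping point above: a rough sketch of the kind of createdCallback such a mixin supplies is shown below. This is only an illustration of the idea; the real mixin's details may differ, and the exact ShadowCSS call is our assumption about the polyfill's API.
class TemplateStampingSketch {
  createdCallback() {
    const template = this.template;
    if (!template) {
      return;
    }
    // Turn the template string into a real <template> element.
    const templateElement = document.createElement('template');
    templateElement.innerHTML = template;
    // Under the Shadow DOM polyfill, shim <style> rules so they don't leak.
    if (window.WebComponents && window.WebComponents.ShadowCSS) {
      window.WebComponents.ShadowCSS.shimStyling(
          templateElement.content, this.localName);
    }
    // Create a shadow root and stamp the template into it.
    const root = this.createShadowRoot();
    root.appendChild(document.importNode(templateElement.content, true));
  }
}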
It’s worth remembering that web components are, by their very nature,
interoperable. If you decide to write a component using an approach like this,
it’s still available to someone who’s using a different framework
(Polymer, say). The reverse is also true. That means any team can pick the
approach that works for them, while still sharing user interface elements at
the component level.
As we’re experimenting with these mixin ideas in prototype form,
we’re opportunistically trying some other technology choices at the same
time:
-
These mixins are written in ES6. As the polymer-micro blog post mentioned,
we’re finding that ES6 makes certain things easy enough in JavaScript
that we can use the DOM API directly, rather than relying on a framework for
sugar. Transpiling with
Babel
feels like a fine temporary solution while waiting for native ES6
implementations in all browsers.
-
While the core component mixins are written in ES6, they can still be used
by plain ES5 apps. The
Hello World (ES5)
demo shows this in practice.
-
The TemplateStamping mixin assumes use of the Shadow DOM polyfill if you
want to support browsers that don’t yet support Shadow DOM. If the
majority of the world’s web users have a Shadow DOM v1-capable browser
by, say, the second half of 2016, we think businesses might accept using the
polyfill to support the shrinking number of users with older browsers. To
the extent using that polyfill has issues, those issues should diminish over
time.
-
We use JavaScript module imports as the dependency mechanism rather than
HTML Imports. That lets us leverage tools like
browserify
for concatenation rather than Vulcanize. So far, that’s working okay.
ES6 template strings let us easily embed HTML directly inside of JavaScript
files, instead of putting JavaScript code inside of HTML files as we did
with HTML Imports. Both packaging formats can work, but given the need for
JavaScript modules anyway, it seems worthwhile for us to see what we can
build with modules alone. One thing we miss: an equivalent of HTML
Imports’ document.currentScript, so that a module can load
a resource from a path relative to the JavaScript source file.
-
We’re trying out npm as the primary means of component distribution.
We think that npm 3’s support for dependency flattening addresses much
of the need for Bower. We think the combination of ES6 modules and npm may
prove to be a better way to distribute components, so we’re trying
that out with this prototype to see if we could make the switch to dropping
Bower entirely. So far, this feels very good.
This mixin-based component framework isn’t done, but feels like
it’s reached the point where it’s “good enough to
criticize”. Please share your feedback at
@ComponentK
or
+ComponentKitchen.
November 30, 2015
We’ve been searching for a way to quickly create web components by composing
together pre-defined user interface behaviors, and we have some progress to
share on that front.
Web components are a great way to package user interface behavior, but they
may not be the most interesting fundamental unit of behavior. There
are certain aspects of behavior which you’d like to be able to share across
components: accessibility, touch gestures, selection effects, and so on. Those
things aren’t top-level components in their own right; they’re abstract
aspects of behavior.
This is something like saying that a chemical molecule is not the fundamental
unit of physical behavior — the atoms that make up the molecule are. But you
can’t generally handle solitary atoms; atoms react and organize themselves
into molecules. Likewise, a browser can only handle web components, not
abstract behaviors. If we imagine a web component as a molecule, what’s the
equivalent of an atom? That is, can we decompose a web component into a more
fundamental coding unit?
One way to answer this question is to consider a web component as a custom
element class. Is there a way we can decompose a class into its fundamental
abstract behavioral aspects? The usual way to compose class behavior in
JavaScript is with
mixins, so perhaps mixins
can form the fundamental unit of user interface behavior. That is, we’d like
to be able to compose mixins together to create web component classes.
For that purpose, mixins present some challenges:
-
The simplest approach to JavaScript mixins will overwrite existing members on a
class, but that’s not always desirable. Many web component behaviors want to
augment an existing method like createdCallback. That is, both the
base class’ method and the mixin’s method should run (see the sketch after
this list).
-
There’s no standard JavaScript implementation of mixins. While many user
interface frameworks (React, Polymer, etc.) include a mixin strategy, those
are intimately tied to the framework itself. You can very quickly write a
simple function to copy a mixin onto a class prototype, but that won’t
accommodate the range of complexity needed to define interesting web
component behaviors.
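The difference between the two behaviors looks roughly like this. The chaining helper is an illustrative sketch of the general idea, not Composable's implementation.
// Naive mixin: copy members onto the target. If both the target and the mixin
// define createdCallback, the mixin's copy simply replaces the original, so
// the base behavior is lost.
function naiveMixin(target, mixin) {
  Object.getOwnPropertyNames(mixin).forEach(name => {
    Object.defineProperty(target, name,
        Object.getOwnPropertyDescriptor(mixin, name));
  });
}

// What we want for lifecycle methods: run both the base implementation and
// the mixin implementation. (Illustrative only.)
function chainMethods(baseFn, mixinFn) {
  return function(...args) {
    if (baseFn) {
      baseFn.apply(this, args);
    }
    return mixinFn.apply(this, args);
  };
}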
We thought it would be interesting to create a general-purpose mixin
architecture that’s flexible enough to serve as a foundation for creating web
components in plain JavaScript. The initial result of that work is a facility
we call
Composable.
Composable takes the form of a general-purpose factory for composing classes
and objects from mixins. The most interesting part about it is its use of
composition rules that let you decide how a mixin’s properties and
methods should be combined with those of the class you’re adding the mixin to.
Composable itself is entirely independent of web components, but we’ve
designed it to serve as a micro-kernel for a web component library or framework.
An example in the Composable
ReadMe
illustrates how it could be used to construct web components:
// Create a general-purpose element base class that supports composition.
let ComposableElement = Composable.compose.call(HTMLElement, Composable);
// A mixin that sets an element's text content.
class HelloMixin {
  createdCallback() {
    this.textContent = "Hello, world!";
  }
}
// A sample element class that uses the above mixin.
let HelloElement = ComposableElement.compose(HelloMixin);
// Register the sample element class with the browser.
document.registerElement('hello-element', HelloElement);
// Create an instance of our new element class.
let element = document.createElement('hello-element');
document.body.appendChild(element); // "Hello, world!"
We’ll share more on this direction as we go, but for now we wanted to share
this as a fundamental building block. Even if you’re not creating web
components, you could use Composable to give your application or framework a
flexible mixin architecture.
November 2, 2015
Our
Basic Web Components
project currently creates its components using Google's
Polymer framework, but we've
been evaluating the use of the smaller polymer-micro core as a replacement for
full Polymer. The polymer-micro core appears to be a useful web component
framework in its own right, and may provide nearly everything a component
library like ours needs.
Is Polymer the best choice for our open project?
We believe that
some amount of framework is necessary to create web components. For a very long time, Polymer has been the primary web component framework.
We love Polymer! However, we feel that Polymer has grown to the point where
writing a Polymer app feels distinctly different from writing a typical HTML
app.
Polymer provides numerous helpers that reduce the amount of copy-and-paste
boilerplate code required to invoke standard DOM features. In current
parlance, it wants to make component code as
DRY as
possible. For example, Polymer provides a "listeners" key for wiring
up event handlers with less code than a direct invocation of the underlying
addEventListener(). Polymer's "properties" key similarly
simplifies the definition of component properties, compared with directly defining
property getter/setters on the component prototype and marshalling attributes
to properties with attributeChangedCallback().
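As a rough, hypothetical illustration of the trade-off, the two snippets below define roughly the same element, first with Polymer's declarative keys and then with only standard (v0) DOM calls. Attribute marshalling is omitted for brevity, and in a real page you would use only one of the two definitions.
// With Polymer's sugar: declarative "properties" and "listeners" keys.
Polymer({
  is: 'counter-element',
  properties: {
    count: { type: Number, value: 0 }
  },
  listeners: {
    click: 'increment'
  },
  increment: function() {
    this.count++;
  }
});

// Roughly the same element in standard DOM code: more verbose, but nothing
// framework-specific to learn.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function() {
  this._count = 0;
  this.addEventListener('click', this.increment.bind(this));
};
proto.increment = function() {
  this.count++;
};
Object.defineProperty(proto, 'count', {
  get: function() { return this._count; },
  set: function(value) { this._count = value; }
});
document.registerElement('counter-element', { prototype: proto });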
We think Polymer's goal is commendable. If you can afford to train up a
team of developers on Polymer's specific way of doing things, your team
should be able to crank out web UI code very efficiently.
But as with any higher-level abstraction, these helpers trade off clarity and
simplicity for a certain degree of magic and complexity. Each reduction in the
amount of component code a developer must write forces an increase in the
arcane Polymer-specific knowledge a developer must acquire to write
or even read component code. It also hides details that may
complicate debugging and maintenance.
For our open source project, those second-order effects reduce the potential
pool of project contributors. Our priority is not time-to-market, but rather
creating code which is self-evident to our open source users and potential
contributors. Although 1 line of Polymer code might do the work of 3 lines of
standard web code, if those 3 lines are more understandable to a wider base of
developers, we might prefer the longer, clearer version.
Carrying forward the burden of backward compatibility
Another issue we're grappling with is that Polymer is very much designed for
this era immediately before web components emerge with native support across
all mainstream browsers. Polymer wants, quite reasonably, to accommodate
browsers that don't yet support web components. At the same time, Polymer
also wants to deliver decent performance, notably on Mobile Safari, which
at this time does not support native Shadow DOM. Rather than use the
full Shadow DOM polyfill, Polymer introduced its own
Shady DOM
approach for approximating Shadow DOM behavior on older browsers.
Shady DOM is an impressive technical accomplishment. But having written a
great deal of Shady DOM code this year, it's our subjective opinion that
Shady DOM code feels clunky. Even after months of writing Shady DOM code,
wrapping DOM calls with Polymer.dom() still doesn't feel natural. And
it's hard to explain to someone why they can't just call
appendChild(), but have to call Polymer.dom().appendChild() instead. And while
Polymer.dom() is somewhat future-proof, it doesn't feel future-facing. It
erodes the original, extremely elegant vision for Polymer and the web
components polyfills: to let people write web components for future web
browsers today.
The alternative to Shady DOM today is to use the
full Shadow DOM polyfill. That entails slower performance and — given inevitable leaks in the
abstraction — a greater potential for mysterious bugs. On the plus side, the
full Shadow DOM polyfill lets one write clearer, future-facing code. With all
the major browser vendors on board with Shadow DOM v1, the need to download
and use the Shadow DOM polyfill on most devices should fade over the course of
2016.
We're also excited about the advent of ES6, with features like arrow
functions that let code be more concise. Writing an addEventListener() call is
no longer a substantial burden in ES6, or at least not enough to warrant a
parallel system for event listener wiring. And using built-in ES6 classes
feels better than calling a purpose-focused class factory like Polymer().
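For example (a hypothetical element, just to show the shape of the code), an arrow function keeps this bound to the element, so wiring a listener directly stays short:
// Hypothetical element: with ES6, wiring a listener directly is one line.
class SelectableElement extends HTMLElement {
  createdCallback() {
    this.addEventListener('click', event => this.toggleSelection());
  }
  toggleSelection() {
    this.classList.toggle('selected');
  }
}
document.registerElement('selectable-element', SelectableElement);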
Considering polymer-micro instead of full Polymer
It turns out that, underneath all of Polymer's DRY magic, there's a
very clean, simple core called polymer-micro. Polymer is helpfully constructed
in three layers: full Polymer on top, a smaller polymer-mini below that, then
a tiny polymer-micro at the bottom. The
documentation
describes polymer-micro as "bare-minimum Custom Element sugaring".
Rather than use the full Polymer framework, we've been investigating
whether polymer-micro on its own could meet our needs. Building on top of
polymer-micro confers a number of advantages over writing our own web
component framework:
-
Google has already invested an enormous amount of money and resources
developing Polymer, including polymer-micro, and will probably continue to
do so for the foreseeable future.
-
The thousands of people using full Polymer today are also using
polymer-micro. That means lots of testing of the polymer-micro core.
-
From its commit history, polymer-micro looks fairly stable. A substantial
amount of future Polymer work will likely happen in the upper levels.
-
In terms of file size, polymer-micro is significantly smaller than full
Polymer, although we don't see that as a huge advantage.
-
Relying on a small core like polymer-micro makes it easier to migrate to
another framework when a better framework comes along.
The polymer-micro layer happens to provide most of the features upon which
Basic Web Components depend:
- Custom element registration.
- Lifecycle support.
- Declared properties.
- Attribute deserialization to properties.
- Behaviors.
On the flip side, Basic Web Components use a number of Polymer features which
polymer-micro does not provide:
1. Shadow root instantiation. If you use polymer-micro, you're expected
to create a shadow root yourself.
2. Templates. If you want to use the <template> element to
define initial content of a component's Shadow DOM, you need to manage
that yourself.
3. Shimming for CSS styles. The full Shadow DOM polyfill requires that CSS be
transformed to minimize styles leaking across custom element boundaries.
Full Polymer takes care of that for you, but using polymer-micro directly
means that style shimming becomes your concern.
4. Automatic node finding. This lets your component code refer to a
sub-element <button id="foo"> with this.$.foo. Complex components need a
consistent and easy way to refer to subelements within the local Shadow DOM.
Polymer's this.$ syntax satisfies those criteria, although we're
really torn as to whether that sugar is worth it. It saves keystrokes, but
isn't a web-wide convention. It may give an unfamiliar flavor to web
component code.
5. ready() callback. Many of the Basic Web Components use Polymer's
ready callback
to initialize themselves. Polymer takes pains to ensure that any Polymer
elements inside a component's local Shadow DOM have their own ready
callback fired before the outer component's ready callback is
fired.
6. CSS mixins. This is Polymer's current answer for visual themes for
components. It's based on a not-yet-standard proposal for extensions
to CSS. Without full Polymer, you have to invent your own theming
architecture.
All the above features are provided at the levels above polymer-micro: either
polymer-mini or full Polymer. However, those upper levels bring along a number
of features we don't use, or would be happy to drop. Those features
include:
- Data binding
- Event listener setup ("listeners" key)
- Annotated event listener setup (in markup)
- Property change callbacks
- Computed properties
- reflectToAttribute
These features all have some appeal, but in our estimation may add more
complexity than they're worth to an open source project aiming for a
general developer audience.
Lastly, there are a few higher-level Polymer features we have to use, but wish
we didn't have to:
-
<dom-module>. This is used as a wrapper around a <template>
element, but it's hard to fathom why <dom-module>
is necessary. It seems designed to support
a use case we don't care about: defining a template in one file, then
using it in a component defined in a separate file. Yet by far the most
common way to define a Polymer component is to put its template and script
definition in the same file. It's unfortunate full Polymer doesn't
offer a better way to use a real <template> directly.
(Although a trick does let you accomplish that in an unofficial way; see
below.)
-
Polymer.dom(). As noted above, this feels awkward, like you're not
using the web. It's also confusing to experienced web developers
looking at Polymer code for the first time.
Prototyping a minimal component framework on top of polymer-micro
With the above motivation, we considered the question:
What is the smallest amount of code that must be added to polymer-micro to
create a web component framework that meets our project's needs?
This experiment entailed a fair amount of spelunking in the Polymer codebase.
That exploration informed the creation of a little prototype web component
framework called
polymer-micro-test
that uses only polymer-micro as its base. In this prototype framework, we
wrote a small amount of code (minimalComponent.js) to implement the first five of
the features numbered above that we want but that are missing from polymer-micro.
We then used the prototype framework to create a couple of sample components,
such as a sample test-element
component. A
live demo
of a simple
page
shows the test-element component in use. By virtue of using the full
polyfills, components created in this prototype framework can run in all
mainstream browsers.
Overall, the results of this experiment were fairly positive. Looking at each
feature in turn:
1. Creating a shadow root yourself is easy. This is only necessary for
components with templates (next point).
2. Stamping out a template is easy. The smallest amount of code we could
envision for this is for a component to declare a "template"
property. This can be used in conjunction with HTML Imports for a fairly
clean connection between the script and the template:
<template id="test-element">
  ... template goes here ...
</template>
<script>
  // Assumes currentImport refers to this HTML Import's document, e.g.
  // var currentImport = document.currentScript.ownerDocument;
  Polymer({
    is: 'test-element',
    template: currentImport.querySelector('#test-element')
  });
</script>
Aside: we really like being able to use a plain
template
to define component content. It turns out that you
can actually do this in full Polymer today, although it's something of
a trick that depends upon your component defining an undocumented
_template
variable. See this
gist, which works in full Polymer.
3. Shimming CSS styles took a little investigation, but it turns out the full
Shadow DOM polyfill exposes its CSS-shimming code as ShadowCSS. The first
time this test framework is going to stamp a template, it just invokes
ShadowCSS to shim any <style>
elements found in the
template. It then saves the shimmed result for subsequent stamping into
the shadow root.
4. Automatic node finding. If we conclude we really need this feature,
it's not that hard to implement ourselves. Right after the test
framework stamps a template, it queries for all the elements in the shadow
tree that have an id attribute, then adds those elements to
this.$. This gives us a type of automatic node finding that
meets our needs (see the sketch after this list). Polymer's own
implementation of the same feature is much more complex. It appears to do a
lot of tree-parsing in preparation for data binding, but since we don't
need data binding, we don't need to do that work.
5. The ready() method is a bit of a puzzle to us. The Custom Elements spec
already defines two callbacks, createdCallback() and attachedCallback(), that can
cover most of what we're currently doing in ready(). One issue is that
createdCallback() and attachedCallback() are synchronous, while the
Polymer ready() code takes enormous pains to handle asynchronous calls.
That is likely necessary to support their asynchronous data binding model.
That is, if your component has a sub-component with data bindings, you
want all those asynchronous data bindings to settle down first before your
top-level component does its own initialization. Since we're not
interested in data binding, however, it's not clear whether we need
ready(). Our sample element just uses the standard callbacks.
6. CSS mixins. This remains an open question for us. It's hard to imagine
what we could do to allow component users to theme our components. At the
same time, we're not convinced that the not-yet-standard CSS mixins
are going to actually become a standard. The troubled history of
vendor-prefixed CSS feature experiments suggests that one company's
early interpretation of a hypothetical, future CSS mixin
"standard" might significantly complicate things down the road
when a real standard is finally established.
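Pulling points 1 through 4 together, the stamping and node-finding logic amounts to something like the sketch below. This is a condensed illustration rather than the prototype's actual minimalComponent.js, and the exact ShadowCSS call shown is our assumption about the polyfill's API.
// Given a component instance and a <template>, stamp the template into a new
// shadow root, shim its styles for the full Shadow DOM polyfill, and populate
// component.$ with the id'd elements in the result.
function stampTemplate(component, template) {
  if (window.WebComponents && window.WebComponents.ShadowCSS) {
    // Shim <style> rules so they don't leak while Shadow DOM is polyfilled.
    window.WebComponents.ShadowCSS.shimStyling(
        template.content, component.localName);
  }
  var root = component.createShadowRoot();
  root.appendChild(document.importNode(template.content, true));
  // Automatic node finding: index every element that has an id.
  component.$ = {};
  var elements = root.querySelectorAll('[id]');
  for (var i = 0; i < elements.length; i++) {
    component.$[elements[i].id] = elements[i];
  }
  return root;
}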
This small prototype framework delivers most of the features required by Basic
Web Components. The main exception is that it offers no facility for component
theming (point #6 above).
Some other notes:
-
Because polymer-micro supports Polymer
behaviors
(mixins), we were able to implement all of the prototype's features in
a behavior. That's quite elegant. It means our sample components can
use those features simply by calling the standard Polymer() class factory
and listing this prototype behavior in the "behaviors" key. It
was a nice surprise that we didn't have to create our own component
class factory for that.
-
To take advantage of Polymer's own attribute-to-property marshalling
feature, we had to invoke an undocumented internal method in
polymer-micro. If more people were building directly on top of
polymer-micro, such facilities could probably be promoted to supported
features.
-
Taking advantage of existing Polymer facilities (both official and
undocumented) and polyfill features like ShadowCSS means that our little
prototype framework can be tiny, less than 1K in size. That gets added to
the size of polymer-micro, which is currently about 15K uncompressed.
Combined, that 16K is a lot smaller than the full Polymer, about 105K
uncompressed.
-
Any decrease in framework file size is more than offset by the need to use
the full web component polyfills, which are much larger than the
"lite" version used with Shady DOM. Still, since we think the
need for the full polyfills will drop over the course of 2016, we're
not particularly concerned about that.
Conclusions
While this is just an experiment, it's intriguing to consider using
polymer-micro as the basis for a minimalist web component framework.
-
A minimalist framework leads to component code which we believe is easier
for a general web developer to read.
-
Letting a component developer work at a lower level of abstraction —
"closer to the metal" — means they have a greater capacity to
diagnose problems when things inevitably go wrong. There's less
mystery to clear away, so problems can be understood and fixed, rather
than worked around.
Despite these advantages, we're not yet ready to say that we're
actually going to use this prototype to create components. As noted
above, our goal is to foster a codebase that can be readily comprehensible to
a wide audience of web developers. Using a proprietary framework, even a tiny
one, impedes that goal. (Basic Web Components traces its ancestry to an
earlier component library called QuickUI which
never gained critical mass, in part because it was built on a proprietary framework.)
Using polymer-micro as the basis for a proprietary framework would be better
than writing a framework from scratch, but every bit of code added on top of
polymer-micro runs the risk of producing a framework in its own right — one
distinct and unfamiliar to our developer audience.
A minimalist strategy like this would only have meaning to us if it's
shared by other people. To that end, we've begun talking with other web
component organizations to explore this idea a bit further. We're not sure
where that discussion will go, but it's interesting, and might bear fruit
in the form of a new, minimalist web component framework. If you'd be
interested in participating in that discussion, please ping us at
@ComponentK.
October 26, 2015
You may hear someone say they avoid using React, Polymer, Angular, or some
other framework du jour, and that they prefer to write their front end code in
vanilla JavaScript instead. But when it comes to writing web components, it
seems everybody ends up writing atop a framework — even if it's a
tiny framework of their own devising. Production web components written in
vanilla JS appear to be very rare.
It seems there's just a bit too much work involved to meet even baseline
expectations for a custom element. To handle instantiation, for example, you
might need to:
- Create a shadow root on the new instance.
- Stamp a template into the shadow root.
-
Marshal any attributes from the element root to the corresponding
component properties (see the sketch after this list). This process breaks
down into more work, such as:
- Loop over the set of attributes on the element.
-
For each attribute, see if the component defines a corresponding
property. If you want to support conventionally hyphenated attribute
names ("foo-bar"), you'll want to first map those
attribute names to conventionally camel-cased property names (fooBar).
-
If the type of a target property is numeric or boolean, parse the
string-valued attribute to convert it to a value of the desired type.
- Set the property to the value.
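As a concrete illustration of that marshalling step, here is the kind of loop involved. This is a hypothetical helper written against standard DOM APIs, not any particular framework's code.
// Copy each attribute on an element to a corresponding property, mapping
// hyphenated attribute names to camelCase and coercing booleans and numbers.
function marshalAttributes(element) {
  Array.prototype.forEach.call(element.attributes, function(attribute) {
    // "foo-bar" -> "fooBar"
    var name = attribute.name.replace(/-([a-z])/g, function(match, letter) {
      return letter.toUpperCase();
    });
    if (!(name in element)) {
      return; // No corresponding property; ignore this attribute.
    }
    var value = attribute.value;
    var current = element[name];
    // Parse the string-valued attribute based on the property's current type.
    if (typeof current === 'boolean') {
      value = (value !== 'false');
    } else if (typeof current === 'number') {
      value = parseFloat(value);
    }
    element[name] = value;
  });
}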
Given this amount of work to simply instantiate the component, it's easy
to see why most efforts to create interesting components typically end up
relying on shared code. You might write a single web component in vanilla JS,
but as soon as you start your second component, you'll be dying to factor
the boilerplate into shared code… And now you're constructing a framework.
That's not necessarily a bad thing. It only means that, when you hear
someone say that they want to write a component-based app, but don't want
to use any framework at all, you have to take that with a grain of salt.
It's possible the person has — perhaps unintentionally — ended up building
the foundations of their own web component framework.
Does it matter whether that code is called a framework? Wikipedia enumerates
these
software framework
hallmarks:
-
inversion of control (control flow dictated by the framework, not the code
on top of it)
- default behavior
- extensibility
- non-modifiable framework code
Given this definition, it seems hard to conclude that frameworks are bad per
se. Surely there are good frameworks as well as bad frameworks.
Perhaps one reason people shy away from the concept of a framework is that, as
a framework achieves higher levels of abstraction, it becomes something
tantamount to a domain-specific language. If you and I both thoroughly
understand JavaScript, but we are using different JavaScript frameworks, then
in practice we may not find each other's code mutually intelligible.
Since the term "framework" can provoke strong negative reactions,
authors of such code may actually care whether their code is labeled a
framework or not. Google, for example, seems to take great pains to avoid
describing its own Polymer project as a framework. They call it a
"library", which sounds perhaps smaller or less threatening. But
Polymer easily meets all of the above framework criteria. For example,
Polymer's internal asynchronous task-scheduling infrastructure establishes
the flow of control in a Polymer application, determining when to invoke
component lifecycle callbacks and property observers.
Whether you like the idea of a framework or not, when it comes to web
components, the DOM API is so rudimentary that, in practice, that API alone
does not provide a sufficient framework for web component development. As long
as that remains the case, the use of a JavaScript web component framework
seems unavoidable. If you really, really want to avoid writing or using code
that meets the above definition of "framework", perhaps you can do
so and still be productive, but that seems like a hard way to go.
For our own work, we want to be using a popular web component
framework, be it Polymer or something else. If our alternative were to write a
proprietary, ad hoc framework of our own, which was shared by no one else, we
would likely waste a lot of time solving problems others have already solved.
October 19, 2015
We launched this Component Kitchen site in April 2014 with a web component
catalog as its centerpiece. Today we're shutting down that catalog so we
can focus on web component consulting and our open web components projects,
including the
Gold Standard Checklist for Web Components
and the general-purpose
Basic Web Components
library.
Running the component catalog was a great way to learn about building
production applications using web components and Google's Polymer library.
But we've come to feel that the catalog's utility is limited, and it
no longer makes economic sense to continue it.
-
As in any online space, much of what is freely given publicly is,
unfortunately, junk. Many registered components are "Hello,
world!", or trivial wrappers of a third-party library, or utilities
so specialized as to be useful to no one but their authors. That makes it
hard to find something worth using.
-
Even potentially useful components are seldom written at a level of
production quality. We tried to start a "component of the week"
blog post series in late 2014. We would sift through a dozen components
just to find one that even worked outside the narrow confines of
its demo. This isn't necessarily the fault of the component authors —
it's really hard to write a solid web component today. (That's one
reason we believe that establishing a Gold Standard for web component
quality is a vital project.) We also believe that the web user interface
framework vendors, such as Polymer and X-Tag, need to invest more heavily
in making it easy to create components that meet that standard.
-
Our catalog pages showed GitHub stars, but the direct GitHub metadata for
projects like web components isn't all that useful in assessing a
project's quality. A GitHub star doesn't tell you whether anyone
has actually tried a project, whether they thought it was good, whether
they used it, or whether they're still using it. We have some ideas
for how to compile better indications of a web component project's
utility, but they would require a larger investment of our resources than
we can afford.
-
People want to see a component project's own documentation, not a
third-party repackaging of that documentation. We think that's true
for any online catalog, including successful ones like npm. When we find a
package on npm, we never read the package details there, because we'll
have to go to the actual GitHub repository to form an assessment of
whether it's worth actually trying the package.
-
GitHub itself could easily improve its own search and ranking features to
the point where external catalogs would struggle to add value. We'd
rather not compete with them.
-
As a result of the above points, if you want to find a foobar web
component, it's easier to google for "foobar web component"
than to consult a purpose-built catalog. If you follow the search results
to GitHub, you'll end up where you wanted to go anyway.
-
It turns out that a coherent collection of components designed
and implemented together is more interesting than individual components
from multiple authors. While we had plans to feature component
collections, we didn't think the payoff would be high enough. Again,
the collection creator is more interested in driving traffic to their
collection's own site, and that's probably what the component
customer is interested in seeing too.
Retiring the catalog lets us invest more time in the projects that we think
matter more. People who want a component catalog can use
customelements.io, which has seen
many improvements lately. Also, as a service to people who have made deep
links to our catalog's component pages, for the indefinite future we will
continue to serve up tombstone pages at those URLs that offer a link to the
corresponding repository on GitHub.
When we started our catalog, there were only about 40 publicly registered web
components — now the number is in the thousands. We really appreciate all of
the people who visited our catalog in the last year and a half, who built a
great component we could feature, and who took the time to give us feedback or
a shout-out on social sites. We're still excited to be working in this
transformative web components space, and look forward to sharing our ongoing
work here soon.
May 27, 2015
For the last few months, we’ve been excited to help a new open project get off
the ground: the
Gold Standard checklist for web components. The checklist’s goal is to define a consistent, high quality bar for web
components.
We believe web components should be every bit as predictable, flexible,
reliable, and useful as standard HTML elements. When someone is working in
HTML, they should be able to work as if all elements work the same. They
shouldn’t need to learn a special set of constraints or limitations that apply
to specific custom elements. They already know how standard HTML elements
behave; new custom elements should behave just like that.
The standard HTML elements establish an incredibly high bar for quality. For
example, you can use the standard elements in any combination, and they’ll not
only work, the result is usually predictable and reasonable. But, without a
great deal of care, custom elements don’t support that same degree of
flexibility by default. It’s all too easy to create a custom element that only
works when it’s used in a very particular way.
The project began by defining what it is that makes a standard HTML element
feel like a standard HTML element. It seems that no one had ever written
down all the criteria that govern the expected behavior of a new standard HTML
element. We all generally know how HTML elements should behave, and through
careful design and testing, new standard elements eventually measure up to our
expectations.
You can think of this as a Turing Test for elements: if you were to encounter
an unfamiliar element in HTML, could you tell whether it was a new standard
element or a custom element? For most custom elements today, it wouldn’t take
too long to discover some unexpected quirk or limitation in the element that
would reveal its custom element nature. This is not for lack of dedication on
the component author’s part. It could simply be the case that they hadn’t
considered some aspect of standard element behavior.
To address that, the Gold Standard checklist captures the expected behavior of
a standard HTML element in a form that can guide the creation of new custom
elements. The checklist covers a wide range of topics, from accessibility to
performance to visual presentation. A component that meets that quality bar
should be able to generally satisfy all the expectations of people using that
component. This will greatly facilitate the component’s adoption and use.
A variety of people, particularly from Google, have already contributed to the
Gold Standard checklist in its draft stages, and continue to make
contributions to the checklist in its new wiki form. The initial focus of the
project has been to develop a solid set of top-level checklist items. It’s the
hope of the project contributors that every item on the list will be backed by
a detailed explanation of the checklist item: why it’s important, examples of
what to do or not to do, sample source code, and other resources.
If you’re interested in creating or using high-quality components, please take
a look at the checklist. The project welcomes comments and suggestions as
issues, or direct contributions through pull requests.
January 12, 2015
We’ve just posted an
interactive web components tutorial
that teaches the basic concepts with editable live demos. We hope you’ll find
that this tutorial:
- Explains the key principles at a quick pace under your control.
-
Illustrates the principles through engaging, live HTML demos. We think the
best way to learn a technology is to play with it yourself!
-
Introduces concepts from the perspective of the component user. Most web
component presentations start with the assumption that you’re a developer
creating a component from scratch. But one of the great things about web
components is that components can be used by a wider audience than
the people who can create components.
-
Only requires knowledge of HTML. If you’re one of the many people who has
read or written HTML, but don’t consider yourself to be a developer, this is
the tutorial for you. And if you are a developer, we still think this
presents the concepts in an approachable and useful manner.
Please check it out, share it, and let us know what you think!