
Making a small JavaScript blog static site generator even smaller using the general async-tree library

Using the async-tree library substantially cuts down the source code for a minimalist static site generator (SSG) in JavaScript, at a very modest cost in dependencies. The result is still fast and flexible.

In the first post in this series, I recreated a simple blog in Astro that felt complicated. I rewrote the blog in plain JavaScript with zero dependencies. This post discusses yet another rewrite, this one predicated on sharing code.

You can look at the final async-tree blog source code and the live site.

Okay, maybe a few dependencies

The zero-dependency version felt quite good, although insisting on no dependencies was a little extreme.

While half the source code was unique to the project, the features in the other half can be cleanly handled by libraries, like:

  1. Converting markdown to HTML
  2. Rendering a JSON Feed object as RSS

These are both pure functions, a much easier kind of dependency to take on. You decide when to call the function and what input to give it; it gives you back a result without any side effects. This contract greatly reduces the potential for surprise or frustration.

The async-tree library

The remaining sharable code in the zero-dependency version comprises generic utility functions:

  1. Reading and writing folder trees as plain objects
  2. Mapping the keys and values of objects
  3. Grouping collections into pages

Since these are completely generic, they’re worth sharing — so over the past 4 years I’ve been working on a library called async-tree that handles these and other tasks.

The async-tree library builds on the idea that most of the hierarchical structures we work with can be abstracted to asynchronous trees. When creating a site, we rarely care about how data is stored; we just want to render it into static resources like HTML.

Our collection of markdown documents, for example, is physically stored in the file system — but that’s irrelevant to our static site generator. All we care about are the keys (the file names) and the values (the markdown text with front matter). We can think about this collection of markdown documents as an abstract tree that could be anywhere in memory, on disk, or in the cloud:

Tree diagram showing a root node pointing to three markdown files

If all we want to do is traverse this tree, APIs like Node’s fs API are overkill. We just want a way of getting keys and values. This is much closer in spirit to a JavaScript Map — but unlike Map, our methods are async, which lets us handle more cases.

This is the AsyncTree interface:

export default interface AsyncTree {
  get(key: any): Promise<any>;
  keys(): Promise<Iterable<any>>;
  parent?: AsyncTree | null;
}

This is an interface (not a class) that’s easy to define for almost any collection-like data structure. Such async collections can be nested to form an async tree — a tree of promises.
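To make the interface concrete, here’s a minimal sketch of an in-memory implementation wrapping a plain object (illustrative only; the library ships its own implementations):

```javascript
// Minimal in-memory AsyncTree: wraps a plain object (illustrative sketch)
class ObjectTree {
  constructor(object) {
    this.object = object;
  }

  // Resolve a key to its value
  async get(key) {
    return this.object[key];
  }

  // Enumerate the available keys
  async keys() {
    return Object.keys(this.object);
  }
}
```

With this, `await new ObjectTree({ "a.md": "# Hi" }).get("a.md")` returns the markdown text, using the same two methods a file-backed or cloud-backed tree would expose.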

Abstractions come at a cost. In exchange for a considerable degree of power and flexibility, you have to wrap your brain around an unfamiliar concept. “A tree of promises?” It might take a while to internalize that idea.

I will say that, from several years of experience, it’s ultimately very beneficial to view software problems like static site generation as reading, transforming, and writing async trees.

Example: reading markdown, reading posts

As an example, to get the first file from our markdown folder, we can construct an AsyncTree for that folder using the library’s FileTree helper, then call the tree’s get method:

import { FileTree } from "@weborigami/async-tree";
const files = new FileTree(new URL("markdown", import.meta.url));
const first = await files.get("2025-07-04.md");

Here FileTree is roughly similar to our quick-and-dirty zero-dependency code that read a folder tree into memory. But FileTree is more efficient because it doesn’t read the complete set of files into memory; it only does work when you look up a key’s value with get.

Our posts.js function turns that collection of markdown file buffers into a completely different form: a set of plain JavaScript objects with .html names that are stored in memory. Despite these significant differences, if we want to get the first post from that collection, we can still use the same get method:

import posts from "./src/posts.js";
const first = await posts.get("2025-07-04.html");

Totally different data structure, same get method.

Example: pagination

Another reason to work with collections as abstract trees is that a consistent set of operations can be defined for them regardless of their underlying storage representations.

For example, the zero-dependency version includes a one-off paginate helper that accepts a collection of posts and returns an array grouping the posts into sets of 10. The paginated posts can then be mapped to HTML pages using the project’s own mapObject helper function.

// Group posts into pages of 10
const pages = mapObject(paginate(posts, 10), (paginated, index) => [
  `${parseInt(index) + 1}.html`, // Change names to `1.html`, `2.html`, ...
  multiPostPage(paginated), // Apply template to the set of 10 posts
]);
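For reference, such a one-off paginate helper might look something like this (a hypothetical sketch, not the project’s exact code):

```javascript
// Group an object's entries into an array of smaller objects of `size` entries
function paginate(object, size) {
  const entries = Object.entries(object);
  const pages = [];
  for (let i = 0; i < entries.length; i += size) {
    pages.push(Object.fromEntries(entries.slice(i, i + size)));
  }
  return pages;
}
```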

The async-tree library offers the same functionality as a general paginate function which can be applied to a tree defined by any means, including our set of posts. The paginated results can then be turned into HTML with another generic tree operation, map.

// Group posts into pages of 10
const pages = map(await paginate(posts, 10), {
  extension: "->.html", // Add `.html` to the numeric keys
  value: multiPostPage, // Apply template to the set of 10 posts
});

Mapping the values of a collection often implies changing the file extension on the corresponding keys, so the map function includes an extension option to easily add, change, or remove extensions.

Site definition

As with the zero-dependency version, the async-tree version of the blog defines the overall structure of the site in extremely concise fashion in site.js:

// Group posts into pages of 10
const pages = map(await paginate(posts, 10), {
  extension: "->.html", // Add `.html` to the numeric keys
  value: multiPostPage, // Apply template to the set of 10 posts
});

// Convert posts to a feed object in JSON Feed schema
const feed = await jsonFeed(posts);

//
// This is the primary representation of the site as an object. Some properties
// are async promises for a single result, others are async trees of promises.
//
export default {
  "about.html": aboutPage(),
  assets: new FileTree(new URL("assets", import.meta.url)),
  images: new FileTree(new URL("../images", import.meta.url)),
  "index.html": pages.get("1.html"), // same as first page in pages area
  "feed.json": JSON.stringify(feed, null, 2),
  "feed.xml": jsonFeedToRss(feed),
  pages,
  posts: map(posts, singlePostPage),
};

That’s the whole site. This is the most concise way I know to define a site in JavaScript.

I find this kind of concise overview invaluable when I return to a project after a long break, and a quick glance refreshes my understanding of the site’s structure.

Build

Once the site is defined, building the site is just a matter of copying files from the virtual world to the real world. Here’s the whole build.js script:

import { FileTree, Tree } from "@weborigami/async-tree";
import site from "./site.js";

// Build process writes the site resources to the build folder
const buildTree = new FileTree(new URL("../build", import.meta.url).pathname);
await Tree.clear(buildTree); // Erase any existing files
await Tree.assign(buildTree, site); // Copy site to build folder

The async-tree library provides a set of helpers in a static class called Tree. These provide a full set of operations like those in the JavaScript Map class, so implementors of the AsyncTree interface don’t have to define those methods themselves. That makes it easier to create new AsyncTree implementations that read data directly out of new data sources.
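The flavor of those helpers can be sketched generically. For example, a function that resolves an entire async tree into a plain object might look like this (a hypothetical sketch, not the library’s actual code):

```javascript
// Recursively resolve anything AsyncTree-like into a plain object
async function treeToPlain(tree) {
  const result = {};
  for (const key of await tree.keys()) {
    const value = await tree.get(key);
    const isTree =
      value && typeof value.get === "function" && typeof value.keys === "function";
    result[key] = isTree ? await treeToPlain(value) : value;
  }
  return result;
}
```

Because the helper only relies on `get` and `keys`, it works identically whether the tree is backed by memory, the file system, or the network.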

Assessment

We can compare this async-tree version of the blog with the earlier Astro and zero-dependency versions. All three versions create the same site.

The async-tree version makes strategic use of libraries for markdown processing, RSS feed generation, and manipulating objects and files as trees. This removes over half the code from the zero-dependency version, so the async-tree version has only 9K of handwritten source code, the smallest of the three:

Chart comparing source code size across three blog versions, async-tree is smallest

This comes at a modest cost of 1.5MB of node_modules, or about 1% of the 117MB of node_modules for the Astro version:

Chart comparing node modules across three blog versions, zero-dependencies is smallest

The async-tree version is still extremely fast, just a hair slower than the zero-dependency version:

Chart comparing build times across three blog versions, zero-dependencies is fastest

Nice!

Impressions

Like the zero-dependency version, this async-tree version was fun to write.

The introduction of a limited set of dependencies to this project felt fine. The small libraries I’m using here all do their work as pure functions, so I’m still in control over what’s going on. I don’t have to wrestle with plugins, lifecycle methods, or complex configuration like I would have to in a mainstream SSG framework. I’m just calling functions!

Debugging async JavaScript code is harder than debugging regular, synchronous code. The debugger I use in VS Code does a fairly good job of it, but it’s still not possible to inspect the value of variables across async stack frames. That can make it harder to figure out what’s gone wrong at a breakpoint.

That said, I once again made good use of the ori CLI to check various pieces of the site in the command line. That let me confirm that individual pieces worked as expected, as well as serve the site locally to inspect the evolving site.

All in all, I think this async-tree approach is a really interesting way to build sites. It’s significantly less JavaScript than the zero-dependency version, while it’s still very fast and light on package weight. You stay in control.

Since I wrote the async-tree library, I can’t provide an objective assessment of how difficult it is to use.

The library deserves more comprehensive documentation than it currently has; I’ve generally focused my documentation writing on the higher-level Origami language and its set of builtins. If you’re intrigued by this more foundational, general-purpose async-tree library, let me know. I can help you out and prioritize documenting it in more detail.

Improvable?

As small and focused as the source for this async-tree version is, it can be made even smaller! Next time I’ll revisit the original sample blog that started this post series and show the benefits of writing it in Origami.

Read the other posts in this series:

  1. Static site generators like Astro are actually pretty complex for the problems they solve
  2. This minimalist static site generator pattern is only for JavaScript developers who want something small, fast, flexible, and comprehensible
  3. Making a small JavaScript blog static site generator even smaller using the general async-tree library [this post]

This minimalist static site generator pattern is only for JavaScript developers who want something small, fast, flexible, and comprehensible

Configuring a complex tool can take more work than just coding the functionality you want from scratch. In the last post I described creating a simple blog in Astro, a popular static site generator (SSG). The Astro solution felt more complicated than the problem justified, so I rewrote the entire blog project from scratch in pure JavaScript with zero dependencies.

This went very well! I coded the blog in about a day, I can completely understand every part of it, and it’s very fast. Writing from scratch made it easy to achieve all of the requirements for the site (described in the appendix of the linked post above).

This isn’t a product but a pattern. If you’re familiar with JavaScript, there are only two small ideas here you might not have tried before. I think you’ll find it easier than you expect. I used JavaScript but you could just as easily do this in Python or any other language.

You can look at the final zero-dependencies blog source code and the live site.

What is a static site generator doing?

A static site generator reads in a tree of files representing the source content you create by hand and transforms it into a new tree of files representing the static files you deploy. That’s the core of what an SSG does.

To that end, an SSG also helps you with a variety of conventions about how the content is written or what form the resulting static files should take. For a blog, those conventions include:

  1. Writing posts as markdown files, with dates encoded in the file names
  2. Converting markdown to HTML
  3. Linking each post to the next and previous posts
  4. Grouping posts into pages in reverse chronological order
  5. Generating feeds in RSS and JSON Feed formats

Individually, each of those transformations is straightforward.

To write this SSG from scratch, we’ll need a way to represent a site overall, a way to read and write content, and a way to specify all those small transformations.

Plain objects and functions are all you need

A useful general principle in coding is to see how far you can get with plain objects and functions. (What JavaScript calls plain objects, Python calls dictionaries and other languages might call associative arrays.) When possible, functions should be pure — that is, have no side effects.

Applying this principle to writing a static site generator:

  1. Read the folders of markdown posts and static assets into plain objects.
  2. Use a sequence of pure functions to transform the posts object into new objects that are closer and closer to the form we want.
  3. Create additional objects for paginated posts, the feeds, and the About page.
  4. Put everything together into a single object representing the site’s entire tree of resources.
  5. Write the site object out to the build folder.

Idea 1: Treat a file tree as an object

Both a tree of files and a plain object are hierarchical, so we can use a plain object to represent a complete set of files in memory. The keys of the object will be the file names, and the values will be the contents of the files. For very large sites keeping everything in memory could be an issue, but at the scale of a personal blog it’s generally fine.

If you’ve ever worked with Node’s fs file system API, then recursively reading a tree of files into an object is not a difficult task. The same goes for writing a plain object out to the file system. If you aren’t familiar with fs but are comfortable using AI, this is the sort of code that AI is generally very good at writing.

You can read my handwritten solution at files.js. You could just copy that.

Idea 2: Map objects

Once we have a bunch of files represented as a plain object, we next want some way to easily create new objects in which the files have been transformed.

The JavaScript Array class has a workhorse map function that lets you concisely apply a function to every item in an array. Sadly the JavaScript Object class is missing a corresponding function to map the keys and values of an object — but we can create an object-mapping function ourselves:

// Create a new object by applying a function to each [key, value] pair
export function mapObject(object, fn) {
  // Get the object's [key, value] pairs
  const entries = Object.entries(object);
  // Map each entry to a new [key, value] pair
  const mappedEntries = entries.map(([key, value]) => fn(value, key, object));
  // Create a new object from the mapped entries
  return Object.fromEntries(mappedEntries);
}

We can use this helper like so:

import { mapObject } from "./utilities.js";

const object = { a: 1, b: 2, c: 3 };
const mapped = mapObject(object, (value, key) => [
  key.toUpperCase(), // Convert key to uppercase
  value * 2, // Multiply value by 2
]);
console.log(mapped); // { A: 2, B: 4, C: 6 }

This little helper forms the core of our transformation work. Since we’re treating a set of files as an object, we can use this helper to transform a set of one kind of file to a set of a different kind of file, renaming the files as necessary.

We will also often want to map just the values of an object while keeping the keys the same, so a related mapValues helper handles that common case.
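A sketch of that mapValues helper, built on the same entries-based idea:

```javascript
// Create a new object by applying a function to each value, keeping keys as-is
export function mapValues(object, fn) {
  return Object.fromEntries(
    Object.entries(object).map(([key, value]) => [key, fn(value, key, object)])
  );
}
```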

Preparing the data for rendering

I find it useful to consolidate the work required to read in a site’s source content and prepare it for rendering in a single module. This does all the calculations and transformations necessary to get the content in a form that can be easily rendered to HTML, feeds, and other forms.

This project does that work in posts.js, which exports a plain object with all the posts data ready for rendering. We can call that module a “pipeline”; it’s just a series of function calls.

The pipeline starts by using our files helper to read all the posts in the /markdown folder into an object. The object’s keys are the file names; the values are Buffer objects containing the file’s contents. If we were to render the in-memory object in YAML it would look like:

2025-07-04.md: <Buffer data>
2025-07-07.md: <Buffer data>
 more posts 

We now begin a series of transformations using the aforementioned mapObject and mapValues helpers. The first transformation interprets the Buffer as markdown text with a title and body properties. This step also parses the date property from the file name and adds that. The result is that our collection of posts now looks like:

2025-07-04.md:
  title: Hello from the pond!
  date: 2025-07-04T17:00:00.000Z
  body: **Hey everyone!** Welcome to my very first blog post…
2025-07-07.md:
  title: Tiny home
  date: 2025-07-07T17:00:00.000Z
  body: When I first decided to move off-grid…
 more posts 
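That first transformation might be sketched like this, assuming simple YAML-style front matter (the helper name and details are hypothetical):

```javascript
// Split front matter from the markdown body and parse the date from the file name
function parsePost(buffer, fileName) {
  const text = buffer.toString();
  const match = text.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  const title = match?.[1].match(/^title:\s*(.*)$/m)?.[1] ?? fileName;
  const body = match ? match[2] : text;
  // File names start with a date in YYYY-MM-DD format
  const date = new Date(`${fileName.slice(0, 10)}T17:00:00Z`);
  return { title, date, body };
}
```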

The next step is to turn the markdown in the body properties to HTML. Since the data type is now changing, we can reflect that by changing the file extensions from .md to .html. Result:

2025-07-04.html:
  title: Hello from the pond!
  date: 2025-07-04T17:00:00.000Z
  body: <strong>Hey everyone!</strong> Welcome to my very first blog post…
2025-07-07.html:
  title: Tiny home
  date: 2025-07-07T17:00:00.000Z
  body: When I first decided to move off-grid…
 more posts 

We’d like the page for an individual post to have links to the pages for the next and previous posts, so the next step calls a helper to add nextKey and previousKey properties to the post data:

2025-07-04.html:
  title: Hello from the pond!
  date: 2025-07-04T17:00:00.000Z
  body: <strong>Hey everyone!</strong> Welcome to my very first blog post…
  nextKey: 2025-07-07.html
2025-07-07.html:
  title: Tiny home
  date: 2025-07-07T17:00:00.000Z
  body: When I first decided to move off-grid…
  nextKey: 2025-07-10.html
  previousKey: 2025-07-04.html
 more posts 
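Such a helper only needs the ordered list of keys; a sketch (the function name is my own):

```javascript
// Link each entry to its neighbors via nextKey/previousKey properties
function addNextPrevious(posts) {
  const keys = Object.keys(posts);
  return Object.fromEntries(
    keys.map((key, index) => [
      key,
      {
        ...posts[key],
        ...(index < keys.length - 1 && { nextKey: keys[index + 1] }),
        ...(index > 0 && { previousKey: keys[index - 1] }),
      },
    ])
  );
}
```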

Because the original markdown files have names that start with a date in YYYY-MM-DD format, by default the posts will be in chronological order. We’d like to display the posts in reverse chronological order, so the final step of the pipeline reverses the order of entries in the top-level object. The posts that were at the beginning will now be at the end of the data:

 more posts 
2025-07-07.html:
  title: Tiny home
  date: 2025-07-07T17:00:00.000Z
  body: When I first decided to move off-grid…
  nextKey: 2025-07-10.html
  previousKey: 2025-07-04.html
2025-07-04.html:
  title: Hello from the pond!
  date: 2025-07-04T17:00:00.000Z
  body: <strong>Hey everyone!</strong> Welcome to my very first blog post…
  nextKey: 2025-07-07.html

This is the form of the final object exported by posts.js. It contains all the data necessary to render the posts in various formats.
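That final reversal is itself a one-liner on plain objects:

```javascript
// Reverse the entry order of a plain object
function reverseObject(object) {
  return Object.fromEntries(Object.entries(object).reverse());
}
```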

These steps could all be merged into a single pass but, to me, doing the transformations in separate steps makes this easier to reason about, inspect, and debug. It also means that transformations like adding next/previous links are independent and can be repurposed for other projects.

Template literals are great, actually

Most static site generators come with one or more template languages. For example, here’s the PostFragment.astro template from the Astro version of this blog. It converts a blog post to an HTML fragment:

---
// A single blog post, on its own or in a list
const { post } = Astro.props;
---

<section>
  <a href={`/posts/${post.slug}`}>
    <h2>{post.frontmatter.title}</h2>
  </a>
  {
    post.date.toLocaleDateString("en-US", {
      year: "numeric",
      month: "long",
      day: "numeric",
    })
  }
  <post.Content />
</section>

This isn’t that bad, although it’s an odd combination of embedded JavaScript and quasi-HTML.

If you’re a JavaScript programmer, you can just use standard JavaScript with template literals to do the exact same thing. Here’s the equivalent postFragment.js function from the zero dependency version:

// A single blog post, on its own or in a list
export default (post, key) => `
  <section>
    <a href="/posts/${key}">
      <h2>${post.title}</h2>
    </a>
    ${post.date.toLocaleDateString("en-US", {
      year: "numeric",
      month: "long",
      day: "numeric",
    })}
    ${post.body}
  </section>
`;

It’s a matter of taste, but I think the plain JS version is as easy to read. It’s also 100% standard, requires no build step, and will work in any JavaScript environment. Best of all, any intermediate or better JavaScript programmer can read and understand it — including future me!

Another wonderful benefit of using simple functions for templates is that they’re directly composable. We can easily invoke the above postFragment.js template in the singlePostPage.js template using regular function call syntax.

We can also use higher-order functions like our mapObject and mapValues helpers to apply templates in the final site.js step discussed later. There we can apply the singlePostPage.js template to every post in the blog with a one-liner:

mapValues(posts, singlePostPage);

Zero dependencies

I challenged myself to create this site with zero dependencies but there were two places where I really wanted help:

  1. Converting markdown to HTML. I’d always taken for granted that one needed to use a markdown processor so I wasn’t sure what I’d do here. Most processors have a ton of options, a plugin model, etc., so they certainly feel like big tools. But at its core, the markdown format is actually straightforward by design. I found the minimalist “drawdown” processor that does the markdown-to-HTML transformation in a single file through repeated regular expression and string replacements. I copied that and ported it to modern ES modules and syntax.
  2. Rendering a JSON Feed object as RSS. This is mostly just string concatenation but I didn’t want to rewrite it by hand. I copied in an existing JSON Feed to RSS module I’d written previously.

If I weren’t pushing myself to hit zero dependencies, I would just depend on those projects. But both of them are small; using local copies of them doesn’t feel crazy to me.
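To give a feel for the regex-and-replacement approach drawdown takes, here’s a vastly simplified sketch of my own (real markdown handling needs far more cases):

```javascript
// Vastly simplified flavor of regex-based markdown-to-HTML conversion
function miniMarkdown(text) {
  return text
    .replace(/^## (.*)$/gm, "<h2>$1</h2>")
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>")
    .replace(/\*(.+?)\*/g, "<em>$1</em>");
}
```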

Assembling the complete site as an object

In site.js we combine all the site’s resources into a single large object:

//
// This is the primary representation of the site as an object
//
export default {
  "about.html": await markdownFileToHtmlPage(relativePath("about.md")),
  assets: await files.read(relativePath("assets")),
  "feed.json": JSON.stringify(feed, null, 2),
  "feed.xml": jsonFeedToRss(feed),
  images: await files.read(relativePath("../images")),
  "index.html": pages["1.html"], // same as first page in pages area
  pages,
  posts: mapValues(posts, singlePostPage),
};

This takes each of the individual pieces of the site, like the About page, or the RSS feed, or the posts area, and combines them into a single object. That’s our whole site, defined in one place.

A tool to work with the site in the command line

Because everything in this project is just regular objects and functions, it was easy to debug. But I also made ample use of a useful tool: although this site isn’t depending on Origami, I could still use the Origami ori CLI to inspect and debug individual components from the command line.

For example, to dump the entire posts object to the command line I can write the following. (If ori isn’t globally installed, one could do npx ori instead.)

$ ori src/posts.js/

I can do this inside of a VS Code JavaScript Debug Terminal and set breakpoints too. This lets me quickly verify that individual pieces produce the expected output without having to build the whole site.

For example, while working on generating the JSON Feed, I could display just that one resource on demand:

$ ori src/site.js/feed.json

And although my intention was to build a static site, any time I wanted to check how the pages looked in the browser, I could use ori to serve the plain JavaScript object locally:

$ ori serve src/site.js

Origami happily serves and works with plain JavaScript objects, so I could use it without taking on an Origami dependency – the plain JS code that creates the site object doesn’t have to know anything about the tool being used to inspect it.

You could do the same thing, or not — whatever works for you. But using simple data representations does open up the possibility of using general-purpose tools, another reason to do things in the plainest fashion possible.

Building the static files

With all the groundwork laid above, the build process defined in build.js is trivial:

  1. Erase the existing contents of the /build folder.
  2. Load the big object from site.js that represents the entire site.
  3. Write the big object to the /build folder.

That’s it.

Note that, although this project has a “build”, that’s building the site — the project does not have a traditional “build” step that compiles the code (using TypeScript, JSX, etc.) to generate the site. If you wanted such a step, you could certainly add one; I don’t find it necessary.

Impressions

This was pretty fun.

This took a day’s worth of work. That was distinctly less time (half?) than it took me to write the same blog in Astro. (I’m not knocking Astro; learning any other SSG might have taken just as long.)

The bottom line is that it took me less time to write my own SSG from scratch than it did to learn, configure, and cajole someone else’s SSG into making the same blog.

I think more people who assume they need an SSG should give at least a little consideration to writing it themselves along these lines.

Big frameworks are overkill

As a simple metric, we can look at the size of the source code I wrote in both versions. We have 22K of .js files for the zero-dependency version, and 11K of .js and .astro files for the Astro version:

Chart comparing size of source code for zero dependency and Astro versions

Most of the lines of code in the Astro version can be directly mapped to a corresponding line of code in the zero-dependency version; they do the same things. The extra 11K in the zero-dependency version is what implements a bespoke static site generator from scratch. (That includes 4K for an entire markdown processor.)

Now let’s compare the size of the node_modules folder for these projects. The zero-dependency version has, by definition, zero, while the Astro version has 117MB of node_modules.

Chart comparing node_modules size for zero dependency and Astro versions

Both projects produce identical output. The extra 11K of handwritten JavaScript in the zero-dependency version is, for the purposes of this project, functionally equivalent to the subset of the 117MB Astro actually being used by the Astro version. Those sizes can’t be compared directly, but we’re looking at four orders of magnitude of difference in size.

What is all that Astro code doing? Astro surely has tons of features that are important to somebody — maybe you! But those features are not important to this project. Maybe they’re not important to yours, either.

The complexity in Astro does have some impact on performance. I timed some builds via time npm run build on a 2024 MacBook Air M3. The first build was always the slowest, so I threw that time away and averaged the real time of the next three builds.

Chart comparing build time for zero dependency and Astro versions

I expect the zero dependency version could be made faster, but this already looks pretty good; it’s hard to compete with plain JavaScript and zero dependencies. It’s entirely possible that Astro performs better for larger sites; recall that the zero-dependency version naively loads everything into memory, so at some point that limitation would need to be addressed. At this scale, either approach is fine, but Astro is measurably slower. Note: a 1-second build time is still good!

The point is: I think big SSG frameworks like Astro have a role to play but get used in many situations where something much simpler would suffice or may be superior.

Why not build every site this way?

Although this project didn’t require a lot of code, that 11K of extra JavaScript is generic and could be reused. It’d be reasonable to put those pieces into a library so that similar projects could build on them.

While a library may run into some of the same abstraction issues and potential for bloat as an SSG framework, a library has the critical advantage that it always leaves you in control of the action. Since a good library will do nothing unless you ask for it, in my experience it’s easier to get the results you want.

So having now written this blog three times (Origami, Astro, and plain JS with zero dependencies), I figured I may as well write it a fourth time using a library. I’ll look at that next time.

Read the other posts in this series:

  1. Static site generators like Astro are actually pretty complex for the problems they solve
  2. This minimalist static site generator pattern is only for JavaScript developers who want something small, fast, flexible, and comprehensible [this post]
  3. Making a small JavaScript blog static site generator even smaller using the general async-tree library

Static site generators like Astro are actually pretty complex for the problems they solve

I took my best shot at recreating a small blog in Astro, a popular static site generator (SSG), so I could compare it with Web Origami and other ways to build a blog.

Astro documentation page titled “Why Astro?”

Results:

First, though: I love that people love Astro! Anything that makes people more likely to create a site is fantastic. If you’re an Astro fan, you’re all set.

But if you’re shopping for a way to make a site and have heard that Astro (or any other popular site generator) is “simple”, here’s a different opinion. Note: Astro can be used for a variety of purposes, including dynamic sites, but for this project I used Astro exclusively as a static site generator.

My goal was to port my existing sample #pondlife blog to Astro. This blog reimagines Henry David Thoreau as a modern off-the-grid lifestyle influencer. The site is simple but representative of how a small personal blog might start.

Blog post titled Beans with text adapted from Thoreau's Walden

Using the original blog as a reference, I had a set of requirements for how the blog should be set up; see the Appendix. I was able to get Astro to meet most but not all of my requirements.

You can look at the final Astro blog source code and the live site.

Getting started

Given that people had described Astro as simple, I was surprised how heavy it felt.

I started with an empty project, rather than cloning a template project, so that I could understand every step. A clean install of Astro includes 100MB of node_modules.

To define the core /posts area, I created a folder structure generally following Astro guidelines, including a /src/posts/[slug].astro file that would do the work of rendering pages in that area. Using the file system in this way to sketch out the site seems reasonable and works fine.

That [slug] file name hints at magic that will turn a request for a page route into a runtime parameter that can be referenced by your code. That’s okay, I guess, although I generally prefer explicit control over magic.

One nit I had about Astro’s build process is that by default it produces noisy console output and I couldn’t find a way to just get errors. It’s a minor point, but it made the tool feel like it was prouder of itself than I thought it deserved.

Neither HTML nor JSX

The body of the [slug].astro page defined the markup for a post:

---
import allPosts from "../../posts.js";
import BaseLayout from "../../layouts/BaseLayout.astro";
import PostFragment from "../../layouts/PostFragment.astro";

export async function getStaticPaths() {
  const posts = await allPosts();
  return posts.map((postData) => ({
    params: { slug: postData.slug },
  }));
}

const { slug } = Astro.params;
const posts = await allPosts();
const post = posts.find((post) => post.slug === slug);
const nextPost = posts.find((p) => p.slug === post.nextKey);
const previousPost = posts.find((p) => p.slug === post.previousKey);
---

<BaseLayout title={post.frontmatter.title}>
  <PostFragment post={post} />
  <p>
    ... more markup here ...
  </p>
</BaseLayout>

This markup looks roughly like HTML but it’s not, it’s JSX — or, wait, it’s actually Astro’s own JSX-inspired template language. Many SSGs supply a template language; I wasn’t thrilled at having to learn a new one.

Porting the blog’s original templates to Astro template language wasn’t too much work, but as with JSX I kept getting tripped up by things in Astro that don’t work like real HTML. Case in point: JSX and Astro don’t want you to put quotes around an attribute value in cases like this:

<a href={post.slug}>

My HTML brain really wants to put quotes around that attribute value, because I keep thinking of this as a JavaScript template literal where data is inserted inside ${ } placeholders as is. Astro’s { } placeholders are tricksier than that, with some knowledge of what data is being rendered and when quotes are required.

That’s just me. Perhaps you already understand JSX and will love Astro markup.
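To illustrate the difference with a hypothetical snippet (not code from the project): in a plain template literal you write the attribute quotes yourself, while JSX-style braces supply them.

```javascript
// In a JavaScript template literal, you write the attribute quotes yourself:
const post = { slug: "2025-07-04" }; // hypothetical data
const link = `<a href="${post.slug}">`;

// In JSX/Astro markup, the braces replace the quotes entirely:
//   <a href={post.slug}>
// Adding quotes there, as in href="{post.slug}", would render the literal
// text {post.slug} instead of the slug's value.
```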

Something that looks standard but isn’t

I’d thought of [slug].astro as a page for an individual post — but it’s also where you must write a getStaticPaths() function to tell Astro about your collection of posts. It took some trial and error for me to write that function so Astro could process all the posts in the /markdown folder.

Astro promotes a way of reading in a bunch of files via a method called import.meta.glob. That looks like a part of the web platform but it’s not — I think Astro’s underlying Vite server is hacking that in?

That hackery feels like the JavaScript global-hacking common in the late 2000s and early 2010s that the world eventually realized was a terrible idea and abandoned.

Why did they go with this fake-standard API? I assume this solution was adopted to save something like a line and a half of plain JavaScript code, which to me doesn’t seem worth it at all.

The functionality of import.meta.glob could just as easily have been delivered via a regular JavaScript import. That would not only have been simpler to understand, it would also have allowed the solution to be used in other kinds of projects.

Content collections

Having gone to the trouble of defining the collection of posts, I was a little surprised I couldn’t find some easy way to refer to that collection elsewhere. For example, I needed to include all those posts in the RSS feed (below), but as originally written, my posts collection was only defined for the /posts route. Maybe I’m missing something?

I did eventually discover Astro’s newer content collections feature, which appears less magic and so conceptually cleaner.

That said, content collections are more complex, and I struggled to get them to work. I eventually gave up and factored my functioning import.meta.glob solution into its own file so I could just import that wherever I needed it.

When you say “never”, do you mean…

In the original blog, the posts live at URLs like /posts/slug.html but I could not get Astro to support that.

Instead, Astro really, really wants me to publish posts at /posts/slug/index.html. That URL format is a common and reasonable one — but it’s not the only format, and it’s limiting to enforce that.

I eventually discovered a configuration option trailingSlash: "never" that appeared to give me what I want. While trying Astro’s preferred RSS solution, I also had to set a separate configuration option with a confusingly different syntax, trailingSlash: false.

This was all annoying but par for the course. What was genuinely frustrating is that the trailingSlash: "never" option appears to only affect dynamic routes at runtime. The option is ignored at build time, so I still ended up with post pages like /posts/slug/index.html.

Aside: I’ve deployed this Astro blog on Netlify, which happens to have a pretty URLs feature that treats /posts/slug.html and /posts/slug/index.html as equivalent. So I get what I want with this particular host, but I don’t like depending on host URL magic, and I don’t like the lack of control.

Complex tools like Astro make decisions for you, which can make it easier to get started but harder to get what you want. Sometimes there are configuration options; sometimes even those won’t do what you want.

Configuration oddity

Speaking of configuration, you configure Astro in an astro.config.js file like this:

// astro.config.js

import { defineConfig } from "astro/config";

export default defineConfig({
  site: "https://pondlife-astro.netlify.app",
  trailingSlash: "never",
});

What caught my attention here was the special defineConfig() method — why isn’t this file just exporting a JavaScript object?

The Astro Configuration Overview answers: “The defineConfig() helper provides automatic IntelliSense in your IDE.”

So Astro is encouraging me to do something in a proprietary way in order that, for the few minutes I’m typing in the configuration file, the editor can auto-complete the names of options. I’m already looking at the config file docs — how else am I going to really understand what these options do? — so this whole defineConfig() feature feels like it’s solving a problem I don’t have.

I tried dropping the defineConfig() call and just exporting the object, and that actually works! I wish the docs just promoted that instead.
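The plain-object version that worked is just:

```javascript
// astro.config.js without defineConfig(): export a plain object
export default {
  site: "https://pondlife-astro.netlify.app",
  trailingSlash: "never",
};
```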

Complying with their opinion

Astro’s Project Structure documentation says: “Astro leverages an opinionated folder layout for your project.” That opinion is part of their value proposition — they’ve worked out what they believe is a good project structure so you don’t have to spend time thinking about it.

That said, when you’re setting up a blog, you have your own reasons for wanting to put files in specific places. For example, if you’re working in an image editor and need to keep specifying an export folder, it’s nice to have the target folder of images be as close to a project’s top level as possible.

In my case, I wanted to be able to keep the post text in a top-level /markdown folder and the corresponding images in a top-level /images folder.

So when Astro said it had opinions about folder layout, I’d assumed I could override that opinion through configuration. Indeed, I was able to write code to load the posts from /markdown.

But Astro forced me to put all the static resources like images inside a /public subfolder like /public/images. I couldn’t find any way to configure around that, which was disappointing.

Couldn’t get RSS helper to work

Astro’s documentation recommends using a helper package to generate an RSS feed from a data object containing the desired posts and metadata.

That’s a great approach! (Nit: that object schema is proprietary. I’d prefer to see the data object constructed following the JSON Feed schema. That supports the same information while also being a useful feed format itself.)
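As a sketch of what that could look like (hypothetical values; the full schema is defined at jsonfeed.org), a JSON Feed object carries the same information a feed helper needs:

```javascript
// A JSON Feed object doubles as the input data for generating RSS.
// Hypothetical values shown; see jsonfeed.org for the full schema.
const feed = {
  version: "https://jsonfeed.org/version/1.1",
  title: "pondlife",
  home_page_url: "https://pondlife-astro.netlify.app/",
  items: [
    {
      id: "https://pondlife-astro.netlify.app/posts/2025-07-04.html",
      title: "Beans",
      content_html: "<p>Post body as HTML</p>",
      date_published: "2025-07-04T00:00:00Z",
    },
  ],
};
```

Serializing this object with JSON.stringify() already produces a valid feed format, and the same object could feed an RSS template.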

I couldn’t actually get that @astrojs/rss package to work as advertised — it kept encoding characters like the < in tag names to &lt;. I tried to follow the documentation pattern as closely as possible but was still unable to resolve the problem after searching, reading docs, and reading issues.

After spending over an hour on it, I gave up and just reused a function I’d written elsewhere for generating RSS.

I assume I was just missing something simple here, so I won’t count this as an Astro issue. That said, I was surprised I couldn’t find a solution to a problem pertaining to RSS feeds, a fundamental blog feature.

Plugins

The communities around frameworks like Astro are justifiably proud of the many plugins (or “integrations” in Astro parlance) they build for their favorite tool. It’s encouraging to see so many people solving problems and sharing their solutions to help others.

But we should question the entire premise of a plugin architecture: that you should not be in control of the action. That’s a long topic that will have to wait for another time.

Covering up Node.js

Because I was using Astro with Node.js, I was stunned by this statement in the Astro Imports reference documentation:

We encourage Astro users to avoid Node.js builtins (fs, path, etc.) whenever possible. Astro is compatible with multiple runtimes using adapters. This includes Deno and Cloudflare Workers which do not support Node builtin modules such as fs.

I don’t use Cloudflare Workers so I’ll take Astro’s assertion at face value. But I’d always thought that Deno had a compatibility layer for Node.js. Indeed, Deno explicitly says you can use Node’s built-in modules in Deno. Why would Astro contradict this claim? Are there specific Deno compatibility issues?

I assume there are Astro customers who care a lot about those other runtimes — but surely that’s a minority of their users? Perhaps I’m confused about their core audience.

If I’m using Astro as an SSG to make a basic blog, I don’t care about those other runtimes. And if you are looking at Astro to make a basic blog, then very likely you don’t care about those other runtimes either.

Astro’s vision of abstracting itself on top of multiple platforms imposes a real cost in complexity. It’s also clear that they want you to only use their APIs — which will make it hard for you to migrate away from Astro. And when you eventually create a site in a different system, knowledge of Astro’s proprietary API will be useless to you.

The silly Astro toolbar

When testing my blog, I noticed an odd visual glitch at the bottom of the page:

Blog page with an unlabeled black bar at the bottom

I thought this clipped black lump was a bug. When I went to inspect it, this appeared:

Astro popup advertisement

So this is an Astro toolbar. Most of the “features” in the toolbar are links to Astro documentation and other parts of their site.

I’m really baffled by this.

Yes, the silly toolbar won’t appear in production. Yes, there’s a configuration option that can turn off this silly toolbar in development.

But the damage is done: all this silly toolbar accomplished was to make me deeply suspicious of Astro’s intentions.

Impressions

It took me the better part of two days to port this blog, which felt long. Your mileage may vary.

The things I liked about Astro:

The things I didn’t like:

My largest issue with Astro and SSGs like it is that I couldn’t easily construct a mental model of how it worked. I was looking for some overall picture that said: “Here’s the step-by-step process of what Astro does when it builds your site…” but could not find that.

That’s a big request! Going through this with Astro made me appreciate the difficulty of going through a similar process with my own project — something I hope to fix.

Is all this complexity necessary?

Although people had told me Astro is simple, I thought it was quite complex for basic sites like blogs.

Stepping back, what work is actually required to statically generate a blog site?

Taken individually, none of these tasks is that much work.

The entirety of an SSG might seem daunting, but many programmers would probably feel comfortable doing these individual tasks. And the sum of a small set of doable tasks is a doable task.

To prove that, I want to rewrite this sample blog again, this time in vanilla JavaScript with no dependencies. I predict this will take slightly more code than the Astro version but will be just as functional, more standard, and more comprehensible.

Read the other posts in this series:

  1. Static site generators like Astro are actually pretty complex for the problems they solve [this post]
  2. This minimalist static site generator pattern is only for JavaScript developers who want something small, fast, flexible, and comprehensible
  3. Making a small JavaScript blog static site generator even smaller using the general async-tree library

Appendix: Requirements

Taking the original #pondlife blog as a reference for the Astro blog, here were my requirements for the project source code (things that only matter to me as the author):

  1. The blog posts go in a top-level /markdown folder.
  2. Each markdown post has a name containing a date like 2025-07-04.md; this date should be used as the date for the post. Each post has YAML front matter containing a title property. The body of the post is markdown that should be converted to HTML.
  3. The images for the posts go in an /images folder.
  4. The site’s static assets go in /src/assets.
  5. A standard page template is used for all posts to provide consistent headers, footers, etc.
  6. The project output goes in the /build folder.

I couldn’t find a way to meet requirements #3 and #4, but was able to meet the rest of these.

And here were my requirements for the final site (things end users can see):

  1. Posts appear in reverse chronological order.
  2. The site’s /posts area offers direct links to all individual posts, with a URL like /posts/2025-07-04.html.
  3. Posts have links to older/newer posts.
  4. The site’s /pages area offers the posts grouped in sets of 10, e.g., /pages/1.html contains the first 10 posts.
  5. Those grouped pages have links to older/newer pages.
  6. The site’s /index.html home page shows the same content as /pages/1.html.
  7. The blog supports feeds in RSS and JSON Feed formats.
  8. An additional /about.html page offers information about the site using content drawn from a page at /src/about.md.

I had some trouble getting Astro to meet requirements #2 and #4: the server would accept the URL format I wanted, but the build process wouldn’t create pages following that format.

Home science and engineering projects that my kids and I enjoyed

Djungarian dwarf hamster in a multi-level cardboard maze

When my kids were young, we did lots of science and engineering things with them to entertain them or just to alleviate boredom. We visited many science museums. We did numerous kits (e.g., Tinker Crate a.k.a. KiwiCo), as well as followed published activities like the Marshmallow Challenge or recipes like homemade ginger ale (which was just okay).

But most of our favorite projects were things we made up.

These specific activities were all great fun — but the point is that you can do a lot with what you already have. The main thing is to decide to do something, spot something you can work with, and then announce that it’s time for a project.

Tip: You can make any project more interesting by giving it a distinctive name. You’re not making a crane, you’re making a Sky Crane. You’re not making a cat bed, you’re making The Circle of Comfort. Sometimes the name comes at the beginning; sometimes one of you will say something funny in the middle and you can use that.

Pull your documentation site content from your own GitHub wiki

The open source WESL (#WebGPU Shading Language) project recently launched a new WESL documentation site. While helping the group create their site, I realized they could pull their markdown content directly from the places they already keep it.

WESL project documentation site

A key question for any documentation project: how and where should the manually-authored content be stored?

The WESL project maintains two collections of markdown content aimed at different audiences:

These collections work well just as they already are, so I thought it’d be good to let the documentation site pull content from these sources using git submodules. The spec is a regular git repository, and behind the scenes a GitHub wiki is a git repository too. Submodules introduce complexities and are not for everyone, but here they let the site project access both repos locally as subfolders.

It was easy to write a program in Origami that pulls content from these subfolders, transforms the markdown to HTML, and pours the HTML into a page template.

Using git submodules means that wiki or spec updates don’t automatically appear on the documentation site; someone has to go into the site project and pull the latest wiki and spec changes. Having a manual step like that might count as an advantage or a disadvantage depending on your situation.

I was really happy with how small the source for this project ended up being. Setting aside the HTML templates, only ~200 lines of Origami and vanilla JavaScript are required to define the entire site and the client-side behavior.

      13 src/docPage.ori
      75 src/adjustMdLinks.js
      13 src/specPage.ori
      32 src/site.ori
      75 src/assets/main.js
     208 total

This is tiny. Origami is a general-purpose system for building sites; it’s not specifically a documentation site generator. This small amount of code defines a bespoke documentation system from scratch.

Using a wiki for documentation this way is really interesting! Project contributors can freely edit wiki pages using familiar tools, then have all that content turned into a static documentation site that project users can freely browse.

Writing a VS Code extension in ES modules in early 2025

VS Code is moving towards letting people write VS Code extensions directly in native ES modules, but as of this writing it’s still not possible. If you are writing a new VS Code extension in early 2025, here is a way to write your extension nearly entirely in ES modules today.

I haven’t published a version of a VS Code extension that uses this technique yet, but an in-progress branch works locally and I believe this will work in production. I’m sharing this technique before shipping it because it’s clear other people are also actively searching for a solution to this problem.

This strategy leverages Node’s current support for mixing CommonJS and ES modules. You create a small CommonJS wrapper for your extension, then do all your real work in ES modules. Everything can be done in plain JavaScript (no compilation or bundling required).

CommonJS portion
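A sketch of what that wrapper can look like (hypothetical file names; adapt to your extension). This fragment only runs inside the VS Code extension host, so take it as illustrative boilerplate rather than a tested drop-in:

```javascript
// extension.cjs: the small CommonJS file VS Code actually loads.
// It forwards activate/deactivate to the real ES module via dynamic
// import(), which CommonJS code is allowed to use.
let modulePromise;

function loadModule() {
  // Cache the promise so the ES module is only loaded once
  modulePromise ??= import("./main.mjs");
  return modulePromise;
}

exports.activate = async (context) => {
  const extension = await loadModule();
  return extension.activate(context);
};

exports.deactivate = async () => {
  const extension = await loadModule();
  return extension.deactivate?.();
};
```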

ES portion
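The ES portion, sketched with a hypothetical main.mjs: all the real logic lives in a normal ES module. If you need the CommonJS-only vscode module from ESM, Node’s createRequire bridges the gap:

```javascript
// main.mjs: the real extension logic as a native ES module.
import { createRequire } from "node:module";

// The "vscode" module is CommonJS-only; createRequire lets an ES module
// load it. (Commented out here so this sketch runs outside VS Code.)
// const require = createRequire(import.meta.url);
// const vscode = require("vscode");

export function activate(context) {
  // Register commands, subscriptions, etc. here; the return value
  // becomes the extension's public API.
  return { status: "activated" };
}

export function deactivate() {
  // Release any resources here.
}
```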

Once this is set up, you can do your real work in ES modules and generally ignore the CommonJS wrapper. When VS Code eventually supports extensions as native ES modules, migration should mostly entail deleting the CommonJS wrapper, setting "type": "module", and renaming the .mjs files to plain .js files.

I wrote a screenplay for a programming language introduction, then wrote a program to turn that into a motion comic

I’ve posted a short introduction to the Origami language in the form of a motion comic you can play in your browser:

Comic panel with the text ‘Intro to Origami’ with a bright explosion behind it in the style of classic comic book covers

Lessons from the audio/video experiment

This comic builds on last month’s experiment to automate the generation of the audio and video for a screencast in which I was searching for a better way to create video content for the Web Origami project and the Origami language.

I learned a lot in that experiment:

What I really wanted to be able to do was write a screenplay and have both the audio and video completely generated from that. I eventually concluded that it would be easier to mock a user interface (like a terminal or editor) than to drive an actual application.

Motion comics

Meanwhile I was fascinated by two UI ideas:

I decided to try to create a system that would take a screenplay as input and then output a motion comic. I loved comics as a kid and still enjoy them today. They can feel fun in a way that a tech video often does not.

One strength of a comic is that, unlike a video, the user controls the pace. It’s only a small act to scroll the page, but it feels engaging like reading, not as passive as watching a video.

Building a motion comic in HTML/CSS

One architectural principle I adopted for this was to render the initial form of the complete comic using just HTML and CSS. This not only serves the small audience who don’t or can’t use JavaScript, but also works with the grain of the web.

This static-first approach meant I could easily build the comic page in Origami itself. The main build process feeds the screenplay to a template that generates panels for each screenplay segment. A given panel might take on the appearance of a terminal window or show a graphic, for example.

Given the advancing state of CSS, building a page in plain HTML and CSS still requires a lot of knowledge, but things mostly work as expected. A particularly important feature for this project was using CSS scroll-snap to center the current panel on the page.

The scroll-snap feature more or less works as advertised, although I notice some slightly odd behaviors on iOS Safari. iOS Safari also has some deeply annoying behavior related to audio autoplay that makes it very difficult even to let users opt into audio. These days iOS is my least favorite browser to work in.

Once I could render the basic comic, I went through and added a bit of JavaScript animation to the panels as a progressive enhancement. For now this animation mostly takes the form of typing, but it’s a start. Just as Grant Sanderson has evolved his system for programmatic math animations, this comic system can evolve in the future.

It was really fun to round out the experience with stock vector illustrations, sound effects, and gorgeous comic lettering fonts from BlamBot. As soon as I dropped in a dialogue font with ALL CAPS, the comic feel snapped into focus.

Building this mostly as plain HTML and CSS has two other important benefits:

What I really want to do is direct

I now have the basics of the system I’ve wanted: I can edit a screenplay and have that produce a (hopefully) engaging user experience with dynamic visual and audio components.

This feels more like directing than video production. With a video, I often couldn’t get a sense for how a particular line would feel until the video was finished — but unless I was really unhappy with it, it was inconceivable that I would go back and redo a line.

Being able to focus on the screenplay makes it much easier for me to step back, perceive the comic as a viewer, and spot something that can be improved. Editing the comic is as fast as editing any other text and the result of the edit can be viewed instantly.

How does it feel?

This kind of motion comic sits somewhere on a spectrum between plain text documentation and recorded video tutorials. It wouldn’t take much to move this closer to regular text documentation, or push it further to the other end and render all the animated frames to video.

I’m pretty happy with this as it is, but if you go through the comic and have thoughts, I’d love to hear them.

2024 was a good year — Web Origami year end project report

It’s always useful for me at the end of the year to reflect back on the past year’s work. I think this has been a great year for the Web Origami project and the Origami language.

Goals for 2024

At the start of the year I set some specific goals for the project, all in service of building awareness of the project. These were all in addition to the regular investments in the Origami language, runtime, builtins, etc.

Goal 1: Create a small but realistic sample application every month

I kept this up for six months, producing the set of apps on the Examples page:

I’m quite happy with this set, and I think they’ve been helpful in illustrating some of what Origami can do.

Halfway through the year I felt like I’d reached the point of diminishing returns; adding one more to the set isn’t going to be the thing that tips the balance for a newcomer. And going forward actual user sites will also be good examples for others to follow.

Goal 2: Daily efforts to promote Web Origami

Marketing doesn’t come naturally to me, so I tried to make myself spend substantial time doing it. I wanted to reach out to at least one person each work day with an email, social media post, blog post, etc.

I was only able to keep this up for a few months before getting exhausted. As it turned out, that might have been enough anyway.

Goal 3: Pitch Web Origami presentations to three conferences

I did this — but none of the conferences accepted my talk proposals.

Submitting a conference proposal is real work, and I’ve come to believe that it’s a waste of my time.

The one conference I particularly wanted to speak at was the StrangeLoop conference. Sadly, in January 2024 I looked for their CFP and was crushed to learn that 2023 had been the final year of the conference.

I would love to present Origami at a conference at some point but can’t afford to waste more time on talk proposals that will just get rejected. I’ll only invest the time to prepare a talk if invited to do so.

Feature work

I am incredibly fortunate to be able to work on the Origami language full time. I was able to invest in a long list of new or improved features for the language. Most of the investments I made in the second half of the year were based on user feedback.

User sites and community

By far the most exciting news this year was that people began using Origami to make sites. At the beginning of the year I was the only one with Origami sites in production; now there are a couple of user sites and a few more are in development.

These early adopters provide invaluable feedback on what kinds of sites real people want to make, and whether Origami makes it easy for them to make those kinds of sites.

I’m looking forward in 2025 to fostering the community of Origami users and directing substantial investments in the project based on their feedback.

Automating generation of the audio and video for an Origami intro screencast

I’ve posted a new Origami intro screencast that covers some of the basics of the Origami language:

This screencast doesn’t give a complete introduction yet, but I think the production process I’m using is itself interesting and worth sharing.

Videos are a vital form of documentation but:

Goal

I was inspired by how Grant Sanderson produces his 3Blue1Brown math animations programmatically. Grant controls the graphics, the camera movements, highlights, text — everything — in code. This makes it easier for him to iterate on the story he’s telling by rewriting the code.

What I really want is to be able to focus on the story I want to tell — to be a screenwriter. I want to write a screencast script with stage directions (“Click on index.html”) and dialogue (“Alice: This index.html file contains…”). I’d love to be able to generate a decent screencast video directly from that.

Process

I can’t find anything that lets me do what I want, so for now I’m trying to automate generation of the video and audio separately.

I’m trying a scriptable mouse/keyboard desktop automation product called Keyboard Maestro. I can write keyboard/mouse macros for specific common actions (open the VS Code terminal, select a file, etc.), then assemble these into an overall macro for the screencast.

This is very clunky for this purpose; I really wish some product like this offered a real programming language. In any event, I play the macro while recording the screen to get the video portion of the screencast.

I’m using Origami itself to map a script of dialogue to generated voice files via OpenAI. Having two alternating voices seems to reduce the fatigue of listening to generated text-to-speech.

The script is a YAML file with lines of dialogue for each of the “actors”:

- echo: >
    To illustrate the basic ideas in their plainest form, let's start by writing
    some expressions in Origami using the command-line interface called ori. If
    I type "ori 1 plus 1", it evaluates that and displays 2.
- echo: >
    If I type "ori hello", ori displays hello. In the shell, you'll need to
    escape quotes or surround them with extra quotes because the shell itself
    consumes quote marks.
- shimmer: >
    In addition to basic numbers and strings, you can reference files. Think of
    each file as if it were a spreadsheet cell. Instead of the A1, B2 style cell
    references in a spreadsheet, we can use paths and file names to refer to
    things. Unlike most programming languages, names in Origami can include
    characters like periods and hyphens.
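A sketch of the mapping step (hypothetical names; the real program is written in Origami and calls OpenAI’s text-to-speech service): each dialogue entry picks a voice by actor name and becomes one audio clip request.

```javascript
// Map screenplay entries like { echo: "line..." } to per-clip
// text-to-speech requests. The voice names here are hypothetical.
const voiceForActor = { echo: "echo", shimmer: "shimmer" };

function toSpeechRequests(script) {
  return script.map((entry, index) => {
    // Each YAML entry is a single-key object: actor name -> dialogue text
    const [actor, text] = Object.entries(entry)[0];
    return {
      file: `clip-${index}.mp3`,
      voice: voiceForActor[actor],
      input: text.trim(),
    };
  });
}
```

Each resulting request would then be sent to the TTS API and saved to its clip file, producing one audio file per line of dialogue.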

This produces the audio portion of the screencast. I then use Camtasia to merge the audio and video to create the final screencast file. This requires a fair bit of work to position the audio clips in relation to the video, and in many places to add delays in the video to give the audio time to play.

Lessons

This editing process still takes time but the recording of the audio and video took very little time compared to previous screencasts. And if Origami changes, it’s feasible to tweak the demo macro or dialogue, rerecord those, and splice those into the screencast.

I posted this video on the Origami discussion board to get feedback on a draft screencast. I used that feedback to refine the audio and video and then spliced in the updated parts. Being able to iterate on a screencast is fantastic.

One unpleasant surprise: After a second round of feedback, I went to rerecord the video — and discovered that VS Code had made changes to its window chrome! Uh, rats. That means that new video clips can’t be used alongside old ones, which means having to reposition all the audio in relation to the new video. That’s a real time sink.

Future

I hope this approach pans out so that I can make more useful screencasts that can stay relevant for a longer time. Towards that end, I’d love to be able to:

If you’re familiar with tools that can do any of those things, please let me know!

Looking for people to playtest a programming language for making websites

Source code in the Origami programming language for a basic blog

I’d love to find a few new people to try out the Origami programming language for creating websites — maybe you?

Maybe you have any of these goals:

and you:

The Origami programming language complements HTML and CSS to let you define the structure of a site. You write formulas or expressions at roughly the level of complexity of spreadsheet formulas. These fully determine the site you get; nothing happens unless you ask for it.

The language is concise and powerful. The above screenshot shows the code for a sample influencer lifestyle blog. Those ~15 lines of code establish the basic site structure, and another 20 lines in other files prepare the raw content and produce the blog feed. That’s all that’s required to define a blog engine completely from scratch. For comparison, a typical blog engine might require much more code in its configuration file alone and be much harder to reason about.

A playtest is generally done as a video call. You outline your goals and then either go through an Origami tutorial or we work together to start something from scratch. At this stage of the language’s evolution, direct observation is extremely helpful. Some people find that prospect intimidating, but this is a playtest! We will test the language — no one will be testing you.

The Origami documentation is complete enough that a motivated person could potentially get up and running on their own, but the onboarding won’t be as easy as with an already well-established, mature language.

If interested, please contact me!
