Using the async-tree library substantially cuts down the source code for a minimalist static site generator (SSG) in JavaScript, at a very modest cost in dependencies. The result is still fast and flexible.
The zero-dependency version felt quite good, although insisting on no dependencies was a little extreme.
While half the source code was unique to the project, the features in the other half can be cleanly handled by libraries, like:
Transforming markdown to HTML. Markdown processing can be expressed as a pure function that accepts markdown and returns HTML. A processor like marked fits the bill.
Rendering a JSON Feed object as RSS. Feed generation can likewise be expressed as a pure function that accepts a feed object and returns RSS markup.
These are both pure functions, a much easier kind of dependency to take on. You decide when to call the function and what input to give it; it gives you back a result without any side effects. This contract greatly reduces the potential for surprise or frustration.
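For instance, marked can be called as an ordinary function (a minimal sketch; the exact output markup depends on the version and options you use):

```js
import { marked } from "marked";

// Pure function: markdown in, HTML out, no side effects
const html = marked.parse("# Hello, world");
// html is something like "<h1>Hello, world</h1>"
```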
The async-tree library
The remaining sharable code in the zero-dependency version comprises generic utility functions:
A higher-order function that maps the keys and values of an object to return a new, transformed object
A way to read and write a file system folder tree as an object
Since these are completely generic, they’re worth sharing — so over the past 4 years I’ve been working on a library called async-tree that handles these and other tasks.
The async-tree library builds on the idea that most of the hierarchical structures we work with can be abstracted to asynchronous trees. When creating a site, we rarely care about how data is stored; we just want to render it into static resources like HTML.
Our collection of markdown documents, for example, is physically stored in the file system — but that’s irrelevant to our static site generator. All we care about are the keys (the file names) and the values (the markdown text with front matter). We can think about this collection of markdown documents as an abstract tree that could be anywhere in memory, on disk, or in the cloud:
If all we want to do is traverse this tree, an API like Node's fs is overkill. We just want a way of getting keys and values, something much closer in spirit to a JavaScript Map. Unlike Map, though, we make the methods async so we can handle more cases.
This is an interface (not a class) that’s easy to define for almost any collection-like data structure. Such async collections can be nested to form an async tree — a tree of promises.
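As a rough sketch, here's what such an async, Map-like interface could look like when backed by a plain in-memory object (the async-tree library's real interface is richer; this only shows the get/keys idea):

```js
// Wrap a plain object in an async, Map-like interface
function objectToAsyncTree(object) {
  return {
    async get(key) {
      return object[key];
    },
    async keys() {
      return Object.keys(object);
    },
  };
}

const tree = objectToAsyncTree({ "about.md": "# About" });
console.log(await tree.get("about.md")); // "# About"
```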
Abstractions come at a cost. In exchange for a considerable degree of power and flexibility, you have to wrap your head around an unfamiliar concept. “A tree of promises?” It can take a while for that idea to click.
I will say that, from several years of experience, it’s ultimately very beneficial to view software problems like static site generation as reading, transforming, and writing async trees.
Example: reading markdown, reading posts
As an example, to get the first file from our markdown folder, we can construct an AsyncTree for that folder using the library’s FileTree helper, then call the tree’s get method:
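Roughly, assuming the posts live in a markdown folder next to this module:

```js
import { FileTree } from "@weborigami/async-tree";

// Treat the markdown folder as an async tree of file names and file contents
const markdownFiles = new FileTree(new URL("markdown", import.meta.url));
const firstFile = await markdownFiles.get("2025-07-04.md");
```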
Here FileTree is roughly similar to our quick-and-dirty zero-dependency code that read a folder tree into memory. But FileTree is more efficient because it doesn’t read the complete set of files into memory; it only does work when you look up a key’s value with get.
Our posts.js function turns that collection of markdown file buffers into a completely different form: a set of plain JavaScript objects with .html names that are stored in memory. Despite these significant differences, if we want to get the first post from that collection, we can still use the same get method:
import posts from "./src/posts.js";
const first = await posts.get("2025-07-04.html");
Totally different data structure, same get method.
Example: pagination
Another reason to work with collections as abstract trees is that a consistent set of operations can be defined for them regardless of their underlying storage representations.
For example, the zero-dependency version includes a one-off paginate helper that accepts a collection of posts and returns an array grouping the posts into sets of 10. The paginated posts can then be mapped to HTML pages using the project’s own mapObject helper function.
// Group posts into pages of 10
const pages = mapObject(paginate(posts, 10), (paginated, index) => [
  `${parseInt(index) + 1}.html`, // Change names to `1.html`, `2.html`, ...
  multiPostPage(paginated), // Apply template to the set of 10 posts
]);
The async-tree library offers the same functionality as a general paginate function which can be applied to a tree defined by any means, including our set of posts. The paginated results can then be turned into HTML with another generic tree operation, map.
// Group posts into pages of 10
const pages = map(await paginate(posts, 10), {
  extension: "->.html", // Add `.html` to the numeric keys
  value: multiPostPage, // Apply template to the set of 10 posts
});
Mapping the values of a collection often implies changing the file extension on the corresponding keys, so the map function includes an extension option to easily add, change, or remove extensions.
Site definition
As with the zero-dependency version, the async-tree version of the blog defines the overall structure of the site in extremely concise fashion in site.js:
// Group posts into pages of 10
const pages = map(await paginate(posts, 10), {
  extension: "->.html", // Add `.html` to the numeric keys
  value: multiPostPage, // Apply template to the set of 10 posts
});

// Convert posts to a feed object in JSON Feed schema
const feed = await jsonFeed(posts);

//
// This is the primary representation of the site as an object. Some properties
// are async promises for a single result, others are async trees of promises.
//
export default {
  "about.html": aboutPage(),
  assets: new FileTree(new URL("assets", import.meta.url)),
  images: new FileTree(new URL("../images", import.meta.url)),
  "index.html": pages.get("1.html"), // same as first page in pages area
  "feed.json": JSON.stringify(feed, null, 2),
  "feed.xml": jsonFeedToRss(feed),
  pages,
  posts: map(posts, singlePostPage),
};
That’s the whole site. This is the most concise way I know to define a site in JavaScript.
I find this kind of concise overview invaluable when I return to a project after a long break, and a quick glance refreshes my understanding of the site’s structure.
Build
Once the site is defined, building the site is just a matter of copying files from the virtual world to the real world. Here’s the whole build.js script:
import { FileTree, Tree } from "@weborigami/async-tree";
import site from "./site.js";

// Build process writes the site resources to the build folder
const buildTree = new FileTree(new URL("../build", import.meta.url).pathname);

await Tree.clear(buildTree); // Erase any existing files
await Tree.assign(buildTree, site); // Copy site to build folder
The async-tree library provides a set of helpers in a static class called Tree. These supply a full complement of operations like those on the JavaScript Map class, so implementors of the AsyncTree interface don’t have to define those methods themselves. That makes it easier to create new AsyncTree implementations that read data directly out of new data sources.
Assessment
We can compare this async-tree version of the blog with the earlier Astro and zero-dependency versions. All three versions create the same site.
The async-tree version makes strategic use of libraries for markdown processing, RSS feed generation, and manipulating objects and files as trees. This removes over half the code from the zero-dependency version, so the async-tree version has only 9K of handwritten source code, the smallest of the three:
This comes at a modest cost of 1.5MB of node_modules, or about 1% of the 117MB of node_modules for the Astro version:
The async-tree version is still extremely fast, just a hair slower than the zero-dependency version:
Nice!
Impressions
Like the zero-dependency version, this async-tree version was fun to write.
The introduction of a limited set of dependencies to this project felt fine. The small libraries I’m using here all do their work as pure functions, so I’m still in control of what’s going on. I don’t have to wrestle with plugins, lifecycle methods, or complex configuration like I would in a mainstream SSG framework. I’m just calling functions!
Debugging async JavaScript code is harder than debugging regular, synchronous code. The debugger I use in VS Code does a fairly good job of it, but it’s still not possible to inspect the value of variables across async stack frames. That can make it harder to figure out what’s gone wrong at a breakpoint.
That said, I once again made good use of the ori CLI to check various pieces of the site in the command line. That let me confirm that individual pieces worked as expected, as well as serve the site locally to inspect the evolving site.
All in all, I think this async-tree approach is a really interesting way to build sites. It’s significantly less JavaScript than the zero-dependency version, while it’s still very fast and light on package weight. You stay in control.
Since I wrote the async-tree library, I can’t provide an objective assessment of how difficult it is to use.
The library deserves more comprehensive documentation than it currently has; I’ve generally focused my documentation writing on the higher-level Origami language and its set of builtins. If you’re intrigued by this more foundational, general-purpose async-tree library, let me know. I can help you out and prioritize documenting it in more detail.
Improvable?
As small and focused as the source for this async-tree version is, it can be made even smaller! Next time I’ll revisit the original sample blog that started this post series and show the benefits of writing it in Origami.
Configuring a complex tool can take more work than just coding the functionality you want from scratch. In the last post I described creating a simple blog in Astro, a popular static site generator (SSG). The Astro solution felt more complicated than the problem justified, so I rewrote the entire blog project from scratch in pure JavaScript with zero dependencies.
This went very well! I coded the blog in about a day, I can completely understand every part of it, and it’s very fast. Writing from scratch made it easy to achieve all of the requirements for the site (described in the appendix of the linked post above).
This isn’t a product but a pattern. If you’re familiar with JavaScript, there are only two small ideas here you might not have tried before. I think you’ll find it easier than you expect. I used JavaScript but you could just as easily do this in Python or any other language.
A static site generator reads in a tree of files representing the source content you create by hand and transforms it into a new tree of files representing the static files you deploy. That’s the core of what an SSG does.
To that end, an SSG also helps you with a variety of conventions about how the content is written or what form the resulting static files should take. For a blog, those conventions include:
Letting you write posts in markdown with hardcoded and calculated metadata
Converting markdown to HTML
Applying templates to data and HTML fragments to create a consistent set of final pages
Generating feeds in formats like RSS
Handling one-off markdown pages like the About page
Linking pages together
Individually, each of those transformations is straightforward.
To write this SSG from scratch, we’ll need a way to represent a site overall, a way to read and write content, and a way to specify all those small transformations.
Plain objects and functions are all you need
A useful general principle in coding is to see how far you can get with plain objects and functions. (What JavaScript calls plain objects, Python calls dictionaries and other languages might call associative arrays.) When possible, functions should be pure — that is, have no side effects.
Applying this principle to writing a static site generator:
Read the folders of markdown posts and static assets into plain objects.
Use a sequence of pure functions to transform the posts object into new objects that are closer and closer to the form we want.
Create additional objects for paginated posts, the feeds, and the About page.
Put everything together into a single object representing the site’s entire tree of resources.
Write the site object out to the build folder.
Idea 1: Treat a file tree as an object
Both a tree of files and a plain object are hierarchical, so we can use a plain object to represent a complete set of files in memory. The keys of the object will be the file names, and the values will be the contents of the files. For very large sites, keeping everything in memory could be an issue, but at the scale of a personal blog it’s generally fine.
If you’ve ever worked with Node’s fs file system API, then recursively reading a tree of files into an object is not a difficult task. The same goes for writing a plain object out to the file system. If you aren’t familiar with fs but are comfortable using AI, this is the sort of code that AI is generally very good at writing.
You can read my handwritten solution at files.js. You could just copy that.
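For reference, the reading half can be sketched in a few lines with Node's fs promises API (this is a sketch, not the exact code in files.js):

```js
import fs from "node:fs/promises";
import path from "node:path";

// Recursively read a folder into a plain object: keys are file/folder names,
// values are Buffers for files or nested objects for subfolders
export async function read(folderPath) {
  const result = {};
  const entries = await fs.readdir(folderPath, { withFileTypes: true });
  for (const entry of entries) {
    const entryPath = path.join(folderPath, entry.name);
    result[entry.name] = entry.isDirectory()
      ? await read(entryPath)
      : await fs.readFile(entryPath);
  }
  return result;
}
```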
Idea 2: Map objects
Once we have a bunch of files represented as a plain object, we next want some way to easily create new objects in which the files have been transformed.
The JavaScript Array class has a workhorse map function that lets you concisely apply a function to every item in an array. Sadly, the JavaScript Object class is missing a corresponding function to map the keys and values of an object — but we can create an object-mapping function ourselves:
// Create a new object by applying a function to each [key, value] pair
export function mapObject(object, fn) {
  // Get the object's [key, value] pairs
  const entries = Object.entries(object);
  // Map each entry to a new [key, value] pair
  const mappedEntries = entries.map(([key, value]) => fn(value, key, object));
  // Create a new object from the mapped entries
  return Object.fromEntries(mappedEntries);
}
This little helper forms the core of our transformation work. Since we’re treating a set of files as an object, we can use this helper to transform a set of one kind of file to a set of a different kind of file, renaming the files as necessary.
We will also often want to map just the values of an object while keeping the keys the same, so a related mapValues helper handles that common case.
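That helper can be defined in terms of mapObject; a minimal sketch:

```js
// Map only the values of an object, keeping the keys the same
// (assumes the mapObject helper above is in scope)
export function mapValues(object, fn) {
  return mapObject(object, (value, key) => [key, fn(value, key)]);
}
```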
Preparing the data for rendering
I find it useful to consolidate the work required to read in a site’s source content and prepare it for rendering in a single module. This does all the calculations and transformations necessary to get the content in a form that can be easily rendered to HTML, feeds, and other forms.
This project does that work in posts.js, which exports a plain object with all the posts data ready to render. We can call that module a “pipeline”; it’s just a series of function calls.
The pipeline starts by using our files helper to read in all the posts in the /markdown folder into an object. The object’s keys are the file names; the values are Buffer objects containing the file’s contents. If we were to render the in-memory object in YAML it would look like:
We now begin a series of transformations using the aforementioned mapObject and mapValues helpers. The first transformation interprets the Buffer as markdown text with a title and body properties. This step also parses the date property from the file name and adds that. The result is that our collection of posts now looks like:
The next step is to turn the markdown in the body properties to HTML. Since the data type is now changing, we can reflect that by changing the file extensions from .md to .html. Result:
We’d like the page for an individual post to have links to the pages for the next and previous posts, so the next step calls a helper to add nextKey and previousKey properties to the post data:
Because the original markdown files have names that start with a date in YYYY-MM-DD format, by default the posts will be in chronological order. We’d like to display the posts in reverse chronological order, so the final step of the pipeline reverses the order of entries in the top-level object. The posts that were at the beginning will now be at the end of the data:
This is the form of the final object exported by posts.js. It contains all the data necessary to render the posts in various formats.
These steps could all be merged into a single pass but, to me, doing the transformations in separate steps makes this easier to reason about, inspect, and debug. It also means that transformations like adding next/previous links are independent and can be repurposed for other projects.
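To make those steps concrete, here's a rough sketch of what a few of them could look like. The variable names and the naive front matter parsing below are illustrative, not the project's exact code:

```js
// Assume `files` is the plain object read from the /markdown folder:
// keys are file names, values are Buffers

// Parse each Buffer into a { title, date, body } object
const parsed = mapObject(files, (buffer, fileName) => {
  const text = String(buffer);
  const [, front, body] = text.split("---"); // naive front matter split
  const title = front.match(/title:\s*(.*)/)[1];
  const date = new Date(fileName.slice(0, 10)); // file names start with YYYY-MM-DD
  return [fileName, { title, date, body }];
});

// ...markdown-to-HTML conversion and .md → .html renaming omitted here...

// Add nextKey/previousKey properties based on the order of keys
const keys = Object.keys(parsed);
const linked = mapObject(parsed, (post, key) => {
  const index = keys.indexOf(key);
  return [key, { ...post, nextKey: keys[index + 1], previousKey: keys[index - 1] }];
});

// Reverse the order of entries so posts appear newest first
const posts = Object.fromEntries(Object.entries(linked).reverse());
```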
Template literals are great, actually
Most static site generators come with one or more template languages. For example, here’s the PostFragment.astro template from the Astro version of this blog. It converts a blog post to an HTML fragment:
---
// A single blog post, on its own or in a list
const { post } = Astro.props;
---
<section>
<a href={`/posts/${post.slug}`}>
<h2>{post.frontmatter.title}</h2>
</a>
{
post.date.toLocaleDateString("en-US", {
year: "numeric",
month: "long",
day: "numeric",
})
}
<post.Content />
</section>
This isn’t that bad, although it’s an odd combination of embedded JavaScript and quasi-HTML.
If you’re a JavaScript programmer, you can just use standard JavaScript with template literals to do the exact same thing. Here’s the equivalent postFragment.js function from the zero dependency version:
// A single blog post, on its own or in a list
export default (post, key) => `
  <section>
    <a href="/posts/${key}">
      <h2>${post.title}</h2>
    </a>
    ${post.date.toLocaleDateString("en-US", {
      year: "numeric",
      month: "long",
      day: "numeric",
    })}
    ${post.body}
  </section>
`;
It’s a matter of taste, but I think the plain JS version is as easy to read. It’s also 100% standard, requires no build step, and will work in any JavaScript environment. Best of all, any intermediate or better JavaScript programmer can read and understand it — including future me!
Another wonderful benefit of using simple functions for templates is that they’re directly composable. We can easily invoke the above postFragment.js template in the singlePostPage.js template using regular function call syntax.
We can also use higher-order functions like our mapObject and mapValues helpers to apply templates in the final site.js step discussed later. There we can apply the singlePostPage.js template to every post in the blog with a one-liner:
mapValues(posts, singlePostPage);
Zero dependencies
I challenged myself to create this site with zero dependencies but there were two places where I really wanted help:
Converting markdown to HTML. I’d always taken for granted that one needed to use a markdown processor so I wasn’t sure what I’d do here. Most processors have a ton of options, a plugin model, etc., so they certainly feel like big tools. But at its core, the markdown format is actually straightforward by design. I found the minimalist “drawdown” processor that does the markdown-to-HTML transformation in a single file through repeated regular expression and string replacements. I copied that and ported it to modern ES modules and syntax.
Rendering a JSON Feed object as RSS. This is mostly just string concatenation but I didn’t want to rewrite it by hand. I copied in an existing JSON Feed to RSS module I’d written previously.
If I weren’t pushing myself to hit zero dependencies, I would just depend on those projects. But both of them are small; using local copies of them doesn’t feel crazy to me.
Assembling the complete site as an object
In site.js we combine all the site’s resources into a single large object:
//
// This is the primary representation of the site as an object
//
export default {
  "about.html": await markdownFileToHtmlPage(relativePath("about.md")),
  assets: await files.read(relativePath("assets")),
  "feed.json": JSON.stringify(feed, null, 2),
  "feed.xml": jsonFeedToRss(feed),
  images: await files.read(relativePath("../images")),
  "index.html": pages["1.html"], // same as first page in pages area
  pages,
  posts: mapValues(posts, singlePostPage),
};
This takes each of the individual pieces of the site, like the About page, or the RSS feed, or the posts area, and combines them into a single object. That’s our whole site, defined in one place.
A tool to work with the site in the command line
Because everything in this project is just regular objects and functions, it was easy to debug. But I also made ample use of a useful tool: although this site isn’t depending on Origami, I could still use the Origami ori CLI to inspect and debug individual components from the command line.
For example, to dump the entire posts object to the command line I can write the following. (If ori isn’t globally installed, one could do npx ori instead.)
$ ori src/posts.js/
I can do this inside of a VS Code JavaScript Debug Terminal and set breakpoints too. This lets me quickly verify that individual pieces produce the expected output without having to build the whole site.
For example, while working on generating the JSON Feed, I could display just that one resource on demand:
$ ori src/site.js/feed.json
And although my intention was to build a static site, any time I wanted to check how the pages looked in the browser, I could use ori to serve the plain JavaScript object locally:
$ ori serve src/site.js
Origami happily serves and works with plain JavaScript objects, so I could use it without taking on an Origami dependency – the plain JS code that creates the site object doesn’t have to know anything about the tool being used to inspect it.
You could do the same thing, or not — whatever works for you. But using simple data representations does open up the possibility of using general-purpose tools, another reason to do things in the plainest fashion possible.
Building the static files
With all the groundwork laid above, the build process defined in build.js is trivial:
Erase the existing contents of the /build folder.
Load the big object from site.js that represents the entire site.
Write the big object to the /build folder.
That’s it.
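A sketch of what that script could look like (the exact helper names here, like files.write, are assumptions):

```js
import fs from "node:fs/promises";
import * as files from "./files.js"; // the read/write helpers from Idea 1
import site from "./site.js";

// Erase the existing build output, then write the site object out as real files
const buildFolder = new URL("../build", import.meta.url);
await fs.rm(buildFolder, { recursive: true, force: true });
await files.write(buildFolder, site); // assumed write counterpart to files.read
```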
Note that, although this project has a “build”, that’s building the site — the project does not have a traditional “build” step that compiles the code (using TypeScript, JSX, etc.) to generate the site. If you wanted that, you could certainly do that; I don’t find it necessary.
Impressions
This was pretty fun.
It was easy to keep the entire process in my head, so I made steady progress the whole time. I don’t think I hit a single real roadblock or had to backtrack.
Of course there were little bugs, but because I was working with plain objects and functions, the bugs were easy to locate, diagnose, and fix.
There were very few cases where I had to look anything up. In checking the Node.js docs, I did learn about the fs.rm() function, a call I’d somehow overlooked before which removes both files and folders. I’ll now be able to apply that new knowledge in future projects instead of having invested in a niche API I might never use again.
Since I was in complete control over the program, there was no point where I had to struggle with someone else’s opinion.
This took a day’s worth of work. That was distinctly less time (half?) than it took me to write the same blog in Astro. (I’m not knocking Astro; learning any other SSG might have taken just as long.)
The bottom line is that it took me less time to write my own SSG from scratch than it did to learn, configure, and cajole someone else’s SSG into making the same blog.
I think more people who assume they need an SSG should give at least a little consideration to writing it themselves along these lines.
Big frameworks are overkill
As a simple metric, we can look at the size of the source code I wrote in both versions. We have 22K of .js files for the zero-dependency version, and 11K of .js and .astro files for the Astro version:
Most of the lines of code in the Astro version can be directly mapped to a corresponding line of code in the zero-dependency version; they do the same things. The extra 11K in the zero-dependency version are what implements a bespoke static site generator from scratch. (That includes 4K for an entire markdown processor.)
Now let’s compare the size of the node_modules folder for these projects. The zero-dependency version has, by definition, zero, while the Astro version has 117MB of node_modules.
Both projects produce identical output. The extra 11K of handwritten JavaScript in the zero-dependency version is, for the purposes of this project, functionally equivalent to the subset of the 117MB Astro actually being used by the Astro version. Those sizes can’t be compared directly, but we’re looking at four orders of magnitude of difference in size.
What is all that Astro code doing? Astro surely has tons of features that are important to somebody — maybe you! But those features are not important to this project. Maybe they’re not important to yours, either.
The complexity in Astro does have some impact on performance. I timed some builds via time npm run build on a 2024 MacBook Air M3. The first build was always the slowest, so I threw that time away and averaged the real time of the next three builds.
I expect the zero dependency version could be made faster, but this already looks pretty good; it’s hard to compete with plain JavaScript and zero dependencies. It’s entirely possible that Astro performs better for larger sites; recall that the zero-dependency version naively loads everything into memory, so at some point that limitation would need to be addressed. At this scale, either approach is fine, but Astro is measurably slower. Note: a 1-second build time is still good!
The point is: I think big SSG frameworks like Astro have a role to play but get used in many situations where something much simpler would suffice or may be superior.
Why not build every site this way?
Although this project didn’t require a lot of code, that 11K of extra JavaScript is generic and could be reused. It’d be reasonable to put those into a library so that similar projects could build with those pieces.
While a library may run into some of the same abstraction issues and potential for bloat as an SSG framework, a library has the critical advantage that it always leaves you in control of the action. Since a good library will do nothing unless you ask for it, in my experience it’s easier to get the results you want.
So having now written this blog three times (Origami, Astro, and plain JS with zero dependencies), I figured I may as well write it a fourth time using a library. I’ll look at that next time.
I took my best shot at recreating a small blog in Astro, a popular static site generator (SSG), so I could compare it with Web Origami and other ways to build a blog.
Results:
I was able to port the blog to Astro, although the port took the better part of two days. This was my first Astro project, but it was more work than I’d expected.
Astro imposed some constraints that forced me to deviate from how I wanted to make my site.
Like many SSGs, Astro covers up parts of the web and Node.js with proprietary languages and abstractions. I find Astro’s replacements more complex than the foundation it covers up.
I came away from the experience with a sense of why people like Astro — but also a feeling that most SSGs are overpowered for the problems most bloggers are trying to solve.
First, though: I love that people love Astro! Anything that makes people more likely to create a site is fantastic. If you’re an Astro fan, you’re all set.
But if you’re shopping for a way to make a site and have heard that Astro (or any other popular site generator) is “simple”, here’s a different opinion. Note: Astro can be used for a variety of purposes, including dynamic sites, but for this project I used Astro exclusively as a static site generator.
My goal was to port my existing sample #pondlife blog to Astro. This blog reimagines Henry David Thoreau as a modern off-the-grid lifestyle influencer. The site is simple but representative of how a small personal blog might start.
Using the original blog as a reference, I had a set of requirements for how the blog should be set up; see the Appendix. I was able to get Astro to meet most but not all of my requirements.
Given that people had described Astro as simple, I was surprised how heavy it felt.
I started with an empty project, rather than cloning a template project, so that I could understand every step. A clean install of Astro includes 100MB of node_modules.
To define the core /posts area, I created a folder structure generally following Astro guidelines, including a /src/posts/[slug].astro file that would do the work of rendering pages in that area. Using the file system in this way to sketch out the site seems reasonable and works fine.
That [slug] file name hints at magic that will turn a request for a page route into a runtime parameter that can be referenced by your code. That’s okay, I guess, although I generally prefer explicit control over magic.
One nit I had about Astro’s build process is that by default it produces noisy console output and I couldn’t find a way to just get errors. It’s a minor point, but it made the tool feel like it was prouder of itself than I thought it deserved.
Neither HTML nor JSX
The body of the [slug].astro page defined the markup for a post:
This markup looks roughly like HTML but it’s not, it’s JSX — or, wait, it’s actually Astro’s own JSX-inspired template language. Many SSGs supply a template language; I wasn’t thrilled at having to learn a new one.
Porting the blog’s original templates to Astro template language wasn’t too much work, but as with JSX I kept getting tripped up by things in Astro that don’t work like real HTML. Case in point: JSX and Astro don’t want you to put quotes around an attribute value in cases like this:
<a href={post.slug}>
My HTML brain really wants to put quotes around that attribute value, because I keep thinking of this as a JavaScript template literal where data is inserted inside ${ } placeholders as is. Astro’s { } placeholders are tricksier than that, with some knowledge of what data is being rendered and when quotes are required.
That’s just me. Perhaps you already understand JSX and will love Astro markup.
Something that looks standard but isn’t
I’d thought of [slug].astro as a page for an individual post — but it’s also where you must write a getStaticPaths() function to tell Astro about your collection of posts. It took some trial and error for me to write that function so Astro could process all the posts in the /markdown folder.
Astro promotes a way of reading in a bunch of files via a method called import.meta.glob. That looks like a part of the web platform but it’s not — I think Astro’s underlying Vite server is hacking that in?
That hackery feels like the JavaScript global-hacking common in the late 2000s and early 2010s that the world eventually realized was a terrible idea and abandoned.
You might think you can go to the import.meta documentation to understand what .glob does. Nope, this is bespoke.
Imagine this .glob idea became wildly successful and someday we wanted to make it part of the actual web. History shows the standard version of the proprietary idea will be different and not backward-compatible — so the standard thing will definitely not be called import.meta.glob! Doing that would break all the existing Astro sites. So in trying to make something look standard, Astro/Vite has prevented it from ever becoming the actual standard.
Even if you like this API, you can’t use it anywhere but an Astro (or Vite?) project.
Why did they go with this fake-standard API? I assume this solution was adopted to save something like a line and a half of plain JavaScript code, which to me doesn’t seem worth it at all.
The functionality of import.meta.glob could just as easily have been delivered via a regular JavaScript import. This would not only be simpler to understand, it would have allowed the solution to be used in other kinds of projects.
Content collections
Having gone through the trouble of defining the collection of posts, I was a little surprised I couldn’t find some easy way to refer to that collection elsewhere. For example, I needed to include all those posts in the RSS feed (below), but as originally written, my posts collection was only defined for the /posts route. Maybe I’m missing something?
I did eventually discover Astro’s newer content collections feature, which appears less magical and so conceptually cleaner.
That said, content collections are more complex, and I struggled to get them to work. I eventually gave up and factored my functioning import.meta.glob solution into its own file so I could just import that wherever I needed it.
When you say “never”, do you mean…
In the original blog, the posts live at URLs like /posts/slug.html but I could not get Astro to support that.
Instead, Astro really, really wants me to publish posts at /posts/slug/index.html. That URL format is a common and reasonable one — but it’s not the only format, and it’s limiting to enforce that.
I eventually discovered a configuration option trailingSlash: "never" that appeared to give me what I want. While trying Astro’s preferred RSS solution, I also had to set a separate configuration option with a confusingly different syntax, trailingSlash: false.
This was all annoying but par for the course. What was genuinely frustrating is that the trailingSlash: "never" option appears to only affect dynamic routes at runtime. The option is ignored at build time, so I still ended up with post pages like /posts/slug/index.html.
Aside: I’ve deployed this Astro blog on Netlify, which happens to have a pretty URLs feature that treats /posts/slug.html and /posts/slug/index.html as equivalent. So I get what I want with this particular host, but I don’t like depending on host URL magic, and I don’t like the lack of control.
Complex tools like Astro make decisions for you, which can make it easier to get started but harder to get what you want. Sometimes there are configuration options; sometimes even those won’t do what you want.
Configuration oddity
Speaking of configuration, you configure Astro in an astro.config.js file like this:
What caught my attention here was the special defineConfig() method — why isn’t this file just exporting a JavaScript object?
The Astro Configuration Overview answers: “The defineConfig() helper provides automatic IntelliSense in your IDE.”
So Astro is encouraging me to do something in a proprietary way in order that, for the few minutes I’m typing in the configuration file, the editor can auto-complete the names of options. I’m already looking at the config file docs — how else am I going to really understand what these options do? — so this whole defineConfig() feature feels like it’s solving a problem I don’t have.
I tried dropping the defineConfig() call and just exporting the object, and that actually works! I wish the docs just promoted that instead.
Complying with their opinion
Astro’s Project Structure documentation says: “Astro leverages an opinionated folder layout for your project.” That opinion is part of their value proposition — they’ve worked out what they believe is a good project structure so you don’t have to spend time thinking about it.
That said, when you’re setting up a blog, you have your own reasons for wanting to put files in specific places. For example, if you’re working in an image editor and need to keep specifying an export folder, it’s nice to have the target folder of images be as close to a project’s top level as possible.
In my case, I wanted to be able to keep the post text in a top-level /markdown folder and the corresponding images in a top-level /images folder.
So when Astro said it had opinions about folder layout, I’d assumed I could override that opinion through configuration. Indeed, I was able to write code to load the posts from /markdown.
But Astro forced me to put all the static resources like images inside a /public subfolder like /public/images. I couldn’t find any way to configure around that, which was disappointing.
Generating the RSS feed
Astro’s preferred RSS solution is the @astrojs/rss package, which takes a plain JavaScript object describing the feed and renders it as RSS. That’s a great approach! (Nit: that object schema is proprietary. I’d prefer to see the data object constructed following the JSON Feed schema. That supports the same information while also being a useful feed format itself.)
I couldn’t actually get that @astrojs/rss package to work as advertised — it kept trying to decode HTML entities like &lt; in tag names to <. I tried to follow the documentation pattern as closely as possible but was still unable to resolve the problem after searching, reading docs, and reading issues.
After spending over an hour on it, I gave up and just reused a function I’d written elsewhere for generating RSS.
I assume I was just missing something simple here, so I won’t count this as an Astro issue. That said, I was surprised I couldn’t find a solution to a problem pertaining to RSS feeds, a fundamental blog feature.
Plugins
The communities around frameworks like Astro are justifiably proud of the many plugins (or “integrations” in Astro parlance) they build for their favorite tool. It’s encouraging to see so many people solving problems and sharing their solutions to help others.
But we should question the entire premise of a plugin architecture: that you should not be in control of the action. That’s a long topic that will have to wait for another time.
Covering up Node.js
Because I was using Astro with Node.js, I was stunned by this statement in the Astro Imports reference documentation:
We encourage Astro users to avoid Node.js builtins (fs, path, etc.) whenever possible. Astro is compatible with multiple runtimes using adapters. This includes Deno and Cloudflare Workers which do not support Node builtin modules such as fs.
I don’t use Cloudflare Workers so I’ll take Astro’s assertion at face value. But I’d always thought that Deno had a compatibility layer for Node.js. Indeed, Deno explicitly says you can use Node’s built-in modules in Deno. Why would Astro contradict this claim? Are there specific Deno compatibility issues?
I assume there are Astro customers who care a lot about those other runtimes — but surely that’s a minority of their users? Perhaps I’m confused about their core audience.
If I’m using Astro as an SSG to make a basic blog, I don’t care about those other runtimes. And if you are looking at Astro to make a basic blog, then very likely you don’t care about those other runtimes either.
Astro’s vision of abstracting itself on top of multiple platforms imposes a real cost in complexity. It’s also clear that they want you to only use their APIs — which will make it hard for you to migrate away from Astro. And when you eventually create a site in a different system, knowledge of Astro’s proprietary API will be useless to you.
The silly Astro toolbar
When testing my blog, I noticed an odd visual glitch at the bottom of the page:
I thought this clipped black lump was a bug. When I went to inspect it, this appeared:
So this is an Astro toolbar. Most of the “features” in the toolbar are links to Astro documentation and other parts of their site.
I’m really baffled by this.
I already had to find the Astro documentation to get started. Why did Astro think I need more ways to get to the docs? Why would I want to do that from inside my running site?
I’m trying to make a static site with HTML and CSS only. I don’t want any JavaScript anywhere near my site. Get that stuff away from me!
This just looks like an ad — an ad Astro has placed without permission in my own site. It makes it feel like I don’t control my own site. Ick.
The toolbar made me think: Gosh, what other JavaScript is being loaded by this page? Answer: 1.75MB of JS. I was expecting a tiny bit of code to support hot reloading, but that’s huge. If I were writing client-side JavaScript for these pages, that’s 1.75MB of unknown code that can potentially conflict with code I’m writing.
Ironically, the current Why Astro? page specifically says: “Zero JS, by default: Less client-side JavaScript to slow your site down.”
Yes, the silly toolbar won’t appear in production. Yes, there’s a configuration option that can turn off this silly toolbar in development.
But the damage is done: all this silly toolbar accomplished was to make me deeply suspicious of Astro’s intentions.
Impressions
It took me the better part of two days to port this blog, which felt long. Your mileage may vary.
The things I liked about Astro:
Having Astro give you the confidence to make a site is good
Using the file system for routing is reasonable
Hot module replacement is nice
Astro’s documentation is quite good
Astro’s contributors are clearly committed to quality
Having a large user community is great
The things I didn’t like:
So many things felt unnecessarily complicated
I couldn’t put my static assets where I wanted
I couldn’t use the URL scheme I wanted
I struggled to define my content collection
I struggled to define an RSS feed
Silly toolbar ad thing
My largest issue with Astro and SSGs like it is that I couldn’t easily construct a mental model of how it worked. I was looking for some overall picture that said: “Here’s the step-by-step process of what Astro does when it builds your site…” but could not find that.
That’s a big request! Going through this with Astro made me appreciate the difficulty of going through a similar process with my own project — something I hope to fix.
Is all this complexity necessary?
Although people had told me Astro is simple, I thought it was quite complex for basic sites like blogs.
Stepping back, what work is actually required to statically generate a blog site?
Represent the complete site in a coherent way
Read in a folder of markdown posts with hardcoded and calculated metadata
Convert the markdown to HTML
Apply a template to turn those posts into final pages
Generate feeds like RSS
Handle one-off pages like the About page
Link everything together
Write all the pages out to the file system
Taken individually, none of these tasks is that much work.
The entirety of an SSG might seem daunting, but many programmers would probably feel comfortable doing these individual tasks. And the sum of a small set of doable tasks is a doable task.
To prove that, I want to rewrite this sample blog again, this time in vanilla JavaScript with no dependencies. I predict this will take slightly more code than the Astro version but will be just as functional, more standard, and more comprehensible.
Taking the original #pondlife blog as a reference for the Astro blog, here were my requirements for the project source code (things that only matter to me as the author):
1. The blog posts go in a top-level /markdown folder.
2. Each markdown post has a name containing a date like 2025-07-04.md; this date should be used as the date for the post. Each post has YAML front matter containing a title property. The body of the post is markdown that should be converted to HTML.
3. The images for the posts go in an /images folder.
4. The site’s static assets go in /src/assets.
5. A standard page template is used for all posts to provide consistent headers, footers, etc.
6. The project output goes in the /build folder.
I couldn’t find a way to meet requirements #3 and #4, but was able to meet the rest of these.
And here were my requirements for the final site (things end users can see):
7. Posts appear in reverse chronological order.
8. The site’s /posts area offers direct links to all individual posts, with a URL like /posts/2025-07-04.html.
9. Posts have links to older/newer posts.
10. The site’s /pages area offers the posts grouped in sets of 10, e.g., /pages/1.html contains the first 10 posts.
11. Those grouped pages have links to older/newer pages.
12. The site’s /index.html home page shows the same content as /pages/1.html.
13. The blog supports feeds in RSS and JSON Feed formats.
14. An additional /about.html page offers information about the site using content drawn from a page at /src/about.md.
I had some trouble getting Astro to meet requirements #8 and #10: the server would accept the format I wanted but the build process wouldn’t create pages following that format.
When my kids were young, we did lots of science and engineering things with them to entertain them or just to alleviate boredom. We visited many science museums. We did numerous kits (e.g., Tinker Crate a.k.a KiwiCo), as well as followed published activities like the Marshmallow Challenge or recipes like
homemade ginger ale (which was just okay).
But most of our favorite projects were things we made up.
Make a “bagpipe” out of a recorder, a plastic bag, and a straw. A bagpipe can play a constant note for longer than a human can exhale: the player blows into a bag, then squeezes the bag to force the air through an instrument. You can rig something together that will do the same thing.
Try to figure out how best to keep an ice cube from melting. Do this as little experiments to figure out what helps. E.g., place a control ice cube in a cup and another on a wire mesh suspended over the cup. Which lasts longer? Why? Also: put two ice cubes in cups, then have your kid cover one with their warmest winter jacket. Which will melt first?
Use a protractor and some math to try to measure the height of a tall tree in the neighbor’s yard, then figure out whether it’s close enough to your house to hit it.
Make a wood swing. Our swing still sits in our front yard and is used daily by passing kids.
Create a scale model of the Earth/Moon system. A globe of the Earth has an enormous amount of play and educational value. Somehow we also ended up with a globe of the Moon. (You could also just print pictures out on a paper.) We worked out how far apart they should be at scale, then physically put them that distance apart. It’s further than you think. In our case the straight-line distance just barely fit inside our first floor.
“Potion Lab”: mix household substances together and see what you get. (Our kids generally pursued two lines of research: fragrances and insecticides.) If the substances include baking soda and vinegar, eventually something surprising will happen.
Construct a maze for a hamster.
Create an elevator for stuffed animals.
Create a zipline for stuffed animals.
Weigh all the stuffed animals on a kitchen scale to determine who is the lightest and who is the heaviest.
Gather all the Legos in the house and try to build a freestanding tower that can reach the ceiling. This is a good use for Duplos. Our house has a light well to the second floor, and we were just barely able to make a tower on the first floor that reached up to the level of the second.
Body conductivity. One electronics kit included a tiny speaker alarm powered by AA batteries. Our kids discovered that if their fingers were wet, and they each held a wire leading to the speaker, they could touch each other and the alarm would sound. They found this to be absolutely hilarious.
Get a can of soda or any processed sweet food with an ingredients label. Find out how much sugar is in the thing, convert the grams to teaspoons (4 g = 1 tsp), then make a pile of that much sugar. Compare different things to find the thing with the most added sugar, then make that pile of sugar. Consider whether you now want to eat that pile of sugar.
Proprioception challenge: stand a couple of feet away from some cups on the floor. Close your eyes and keep them closed as you try to drop a marble into a cup. (This was part of a longer series of challenges to show that you have way more than five senses.)
Build a big crane that can dangle a cat toy from the second floor to the first floor.
Business/index card “sailboats”. You may be able to do this at a restaurant if the host counter offers business cards and you have a glassy table. Fold a card any way you want, place it on a smooth table, then blow on it. Try to come up with the one that glides the best across the table.
These specific activities were all great fun — but the point is that you can do a lot with what you already have. The main thing is to decide to do something, spot something you can work with, and then announce that it’s time for a project.
Tip: You can make any project more interesting by giving it a distinctive name. You’re not making a crane, you’re making a Sky Crane. You’re not making a cat bed, you’re making The Circle of Comfort. Sometimes the name comes at the beginning; sometimes one of you will say something funny in the middle and you can use that.
The open source WESL (#WebGPU Shading Language) project recently launched a new WESL documentation site. While helping the group create their site, I realized they could pull their markdown content directly from the places they already keep it.
A key question for any documentation project: how and where should the manually-authored content be stored?
The WESL project maintains two collections of markdown content aimed at different audiences:
The WESL wiki has how-to guidance for devs that want to use WESL in WebGPU projects.
The WESL spec is a collection of more formal documents for implementers.
These collections work well just as they already are, so I thought it’d be good to let the documentation site pull content from these sources using git submodules. The spec is a regular git repository, and behind the scenes a GitHub wiki is a git repository too. Submodules introduce complexities and are not for everyone, but here they let the site project access both repos locally as subfolders.
It was easy to write a program in Origami that pulls content from these subfolders, transforms the markdown to HTML, and pours the HTML into a page template.
The Origami site definition added navigation links, including a pop-up navigation menu for small window sizes.
The wiki and spec both link to each other. A small JavaScript function fixes up those links; the Origami program can easily call that for each document.
I was happy to discover that using a standard HTML <details> element with absolute positioning could do the heavy lifting for a reasonable pop-up menu with very little client-side JavaScript.
Adding full-text search to the complete document collection was just a one-liner via the Origami pagefind extension.
Using git submodules means that wiki or spec updates don’t automatically appear on the documentation site; someone has to go into the site project and pull the latest wiki and spec changes. Having a manual step like that might count as an advantage or a disadvantage depending on your situation.
I was really happy with how small the source for this project ended up being. Setting aside the HTML templates, only ~200 lines of Origami and vanilla JavaScript are required to define the entire site and the client-side behavior.
This is tiny. Origami is a general-purpose system for building sites; it’s not specifically a documentation site generator. This small amount of code defines a bespoke documentation system from scratch.
Using a wiki for documentation this way is really interesting! Project contributors can freely edit wiki pages using familiar tools, then have all that content turned into a static documentation site that project users can freely browse.
VS Code is moving towards letting people write VS Code extensions directly in native ES modules but as of this writing it’s still not possible. If you are writing a new VS Code extension in early 2025, here is a way to write your extension nearly entirely in ES modules today.
I haven’t published a version of a VS Code extension that uses this technique yet, but an in-progress branch works locally and I believe this will work in production. I’m sharing this technique before shipping it because it’s clear other people are also actively searching for a solution to this problem.
This strategy leverages Node’s current support for mixing CommonJS and ES modules. You create a small CommonJS wrapper for your extension, then do all your real work in ES modules. Everything can be done in plain JavaScript (no compilation or bundling required).
CommonJS portion
In package.json, set "type": "commonjs". This lets Node treat plain .js file extensions as CommonJS so that VS Code’s own modules can load.
Create an entry point to your VS Code extension with a .cjs file extension: e.g. extension.cjs. (You could potentially use a .js extension but the .cjs will help you and others remember that this is CommonJS.) This file is just a wrapper, and the only place where you write using CommonJS conventions: require and module.exports.
In package.json, set this wrapper as the extension entry point: "main": "./src/extension.cjs"
Create an ES module with an .mjs file extension: extension.mjs. This module is your extension’s real code, and here you’ll use the ES module conventions: import and export.
Have extension.cjs use a dynamic import to load extension.mjs. You can’t use require() for this, because require is synchronous and ES modules are fundamentally asynchronous. Example
The main export of extension.cjs is a tiny VS Code extension that delegates all lifecycle methods like activate to the real code in the ES module.
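Putting those pieces together, the wrapper can stay tiny. A sketch:

```js
// extension.cjs — thin CommonJS wrapper; the real work happens in extension.mjs
const vscode = require("vscode");

// Make the vscode API reachable from the ES module side (see the ES portion below)
globalThis.vscode = vscode;

let extension;

async function activate(context) {
  extension = await import("./extension.mjs");
  return extension.activate(context);
}

function deactivate() {
  return extension?.deactivate?.();
}

module.exports = { activate, deactivate };
```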
ES portion
Your extension.mjs code will want to use the vscode package, but that’s not a regular npm package. The VS Code extension host makes that dynamically available but only to CommonJS modules. Work around this by having extension.cjs obtain a vscode reference and pass it to extension.mjs. You could pass it as a function parameter, but to keep things simpler, I just had extension.cjs set a global variable on globalThis so extension.mjs can read that global. I believe each extension runs in its own process; this should be safe enough. [Updated March 18: A GitHub comment explains that, contrary to what I wrote, “VS Code loads extensions into a single extension host process”.]
Inside your extension.mjs module you can freely import additional ES modules in your project as long as they have .mjs file extensions. (The project’s "type": "commonjs" will treat plain .js files as CommonJS.)
Your .mjs modules can import VS Code dependencies like vscode-languageclient. However, since those are CommonJS packages, you can not extract specific package members with the ES syntax import { thing }. Instead, import the entire package as a constant, then destructure the constant to extract the members you want. Example
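That pattern looks something like this (a sketch; the exact import specifier can vary by package version):

```js
// This fails for a CommonJS package:
//   import { LanguageClient } from "vscode-languageclient/node.js";
// Import the whole package instead, then destructure the members you want:
import languageclient from "vscode-languageclient/node.js";
const { LanguageClient, TransportKind } = languageclient;
```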
Your .mjs modules can import dependencies from external ES module projects. Their own "type": "module" declaration will let them use .js file extensions as usual.
If you’re writing a language server, you can use the same technique to define the server. The CommonJS wrapper for the server is simpler because it just needs to load the server’s ES module; that will trigger running the server code. Note that a CommonJS module can’t contain a dynamic import at the top level, so you’ll need to put the import inside an immediately-executed async function. Example
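A sketch of that server wrapper:

```js
// server.cjs — CommonJS can't await at the top level, so kick off the dynamic
// import inside an immediately-invoked async function; loading the module
// starts the language server
(async () => {
  await import("./server.mjs");
})();
```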
Once this is set up, you can do your real work in ES modules and generally ignore the CommonJS wrapper. When VS Code eventually supports extensions as native ES modules, migration should mostly entail deleting the CommonJS wrapper, setting "type": "module", and renaming the .mjs files to plain .js files.
Great: Generating audio from a screenplay made things go much, much faster!
A little bad: Writing a keyboard macro to drive a programming environment is a bit tedious and finicky. I wanted to use a real programming language.
A little bad: Even with a keyboard macro triggering the action on screen, it’s still cumbersome to set up a screen capture program to record the action into video. It was annoying enough that I was reluctant to go through the process again whenever I needed to re-record video.
Bad: Manually editing together the audio fragments and the video was still time-consuming.
Bad: The video showed a session in Microsoft VS Code, but during the days I was working on the video, Microsoft changed the UI of VS Code! That prevented me from incorporating new video, because I didn’t want the distraction of the UI changing back and forth during the video.
Bad: YouTube doesn’t allow you to replace a video with an updated one at the same URL, so each time I edited my video I had to post it at a new URL. This eroded any theoretical value of likes or comments.
What I really wanted to be able to do was write a screenplay and have both the audio and video completely generated from that. I eventually concluded that it would be easier to mock a user interface (like a terminal or editor) than to drive an actual application.
Motion comics
Meanwhile I was fascinated by two UI ideas:
Researcher Bret Victor places thumbnails next to his videos. You can read the thumbnails like a comic. I wish YouTube did this, although it’s been pointed out that anything along these lines would probably reduce their ad revenue.
Interactive motion comics like Florence explore the space in between print comics and interactive games.
I decided to try to create a system that would take a screenplay as input and then output a motion comic. I loved comics as a kid and still enjoy them today. They can feel fun in a way that a tech video often does not.
One strength of a comic is that, unlike a video, the user controls the pace. Scrolling the page is only a small act, but it feels engaging in the way reading does, not passive in the way watching a video does.
Building a motion comic in HTML/CSS
One architectural principle I adopted for this was to render the initial form of the complete comic using just HTML and CSS. This not only serves the small audience of people who don’t or can’t use JavaScript, but also works with the grain of the web.
This static-first approach meant I could easily build the comic page in Origami itself. The main build process feeds the screenplay to a template that generates panels for each screenplay segment. A given panel might mimic the appearance of a terminal window or show a graphic, for example.
CSS continues to advance; building a page in plain HTML and CSS still requires a lot of knowledge, but things mostly work as expected. A particularly important feature for this project was using CSS scroll-snap to center the current panel on the page.
The scroll-snap feature more or less works as advertised, although I notice some slightly odd behaviors on iOS Safari. iOS Safari also has some deeply annoying behavior related to audio autoplay that makes it very difficult even to let users opt into audio. These days iOS Safari is my least favorite browser to work in.
Once I could render the basic comic, I went through and added a bit of JavaScript animation to the panels as a progressive enhancement. For now this animation mostly takes the form of typing, but it’s a start. Just as Grant Sanderson has evolved his system for programmatic math animations, this comic system can evolve in the future.
It was really fun to round out the experience with stock vector illustrations, sound effects, and gorgeous comic lettering fonts from BlamBot. As soon as I dropped in a dialogue font with ALL CAPS, the comic feel snapped into focus.
Building this mostly as plain HTML and CSS has two other important benefits:
Change detection. As with all Origami projects, I can use Origami’s own changes function to test the built files against a previously-generated baseline. That includes checking the text of any comic panels that incorporate the output of Origami expressions. If I make a change to the language itself that inadvertently changes the output shown in the comic, the changes function should flag those for me.
Portability. These plain web files can be hosted anywhere. I don’t have any particular beef with YouTube at this time, but their market position as a capricious and rapacious monopolist should give us all pause. Without the constraints of YouTube, I can update the comic whenever I want and keep the same URL. And you don’t have to sit through ads!
What I really want to do is direct
I now have the basics of the system I’ve wanted: I can edit a screenplay and have that produce a (hopefully) engaging user experience with dynamic visual and audio components.
This feels more like directing than video production. With a video, I often couldn’t get a sense for how a particular line would feel until the video was finished — but unless I was really unhappy with it, it was inconceivable that I would go back and redo a line.
Being able to focus on the screenplay makes it much easier for me to step back, perceive the comic as a viewer, and spot something that can be improved. Editing the comic is as fast as editing any other text and the result of the edit can be viewed instantly.
How does it feel?
This kind of motion comic sits somewhere on a spectrum between plain text documentation and recorded video tutorials. It wouldn’t take much to move this closer to regular text documentation, or push it further to the other end and render all the animated frames to video.
I’m pretty happy with this as it is, but if you go through the comic and have thoughts, I’d love to hear them.
It’s always useful for me at the end of the year to reflect back on the past year’s work. I think this has been a great year for the Web Origami project and the Origami language.
Goals for 2024
At the start of the year I set some specific goals for the project, all in service of building awareness of the project. These were all in addition to the regular investments in the Origami language, runtime, builtins, etc.
Goal 1: Create a small but realistic sample application every month
I kept this up for six months, producing the set of apps on the Examples page:
Cherokee Myths — generate a table of contents, incorporate full-text search
Japan hike ebook — using ZIP/EPUB tree driver to create an ebook
pondlife — sample blog, made available as origami-blog-start starter project
I’m quite happy with this set, and I think they’ve been helpful in illustrating some of what Origami can do.
Halfway through the year I felt like I’d reached the point of diminishing returns; adding one more to the set isn’t going to be the thing that tips the balance for a newcomer. And going forward, actual user sites will also be good examples for others to follow.
Goal 2: Daily efforts to promote Web Origami
Marketing doesn’t come naturally to me, so I tried to make myself spend substantial time doing it. I wanted to reach out to at least one person each workday with an email, social media post, blog post, etc.
I was only able to keep this up for a few months before getting exhausted. As it turned out, that might have been enough anyway.
Goal 3: Pitch Web Origami presentations to three conferences
I did this, but none of the conferences accepted my talk proposals.
Submitting a conference proposal is real work, and I’ve come to believe that it’s a waste of my time.
As a matter of policy, conference organizers give you zero feedback on your submissions so it’s hard to improve them.
When I looked at talks that were ultimately accepted by these conferences, I was disappointed: many talks promoted technologies that already have a lot of awareness, so the conference just fanned the flames of something that’s already popular.
The world of conferences is supported by payola: companies pay conference organizers a “sponsorship fee” to get a talk accepted. I can’t afford that, and in any event find the practice appalling. I sat through a one-day conference this year that felt like binging 10 hours of infomercials.
The one conference I particularly wanted to speak at was the StrangeLoop conference. Sadly, in January 2024 I looked for their CFP and was crushed to learn that 2023 had been the final year of the conference.
I would love to present Origami at a conference at some point but can’t afford to waste more time on talk proposals that will just get rejected. I’ll only invest the time to prepare a talk if invited to do so.
Feature work
I am incredibly fortunate to be able to work on the Origami language full time. I was able to invest in a long list of new or improved features for the language. Most of the investments I made in the second half of the year were based on user feedback.
By far the most exciting news this year was that people began using Origami to make sites. At the beginning of the year I was the only one with Origami sites in production; now there are a couple of user sites and a few more are in development.
These early adopters provide invaluable feedback on what kinds of sites real people want to make, and whether Origami makes it easy for them to make those kinds of sites.
I’m looking forward in 2025 to fostering the community of Origami users and directing substantial investments in the project based on their feedback.
I’ve posted a new Origami intro screencast that covers some of the basics of the Origami language:
This screencast doesn’t give a complete introduction yet, but I think the production process I’m using is itself interesting and worth sharing.
Videos are a vital form of documentation but:
Videos take forever to produce — for me, easily an hour or more of production work per minute of final video!
Videos documenting an evolving language quickly become out of date. I’ve poured weeks and weeks into videos that are already woefully out of date and have had to be taken down to avoid confusion.
What I really want is to be able to focus on the story I want to tell — to be a screenwriter. I want to write a screencast script with stage directions (“Click on index.html”) and dialogue (“Alice: This index.html file contains…”). I’d love to be able to generate a decent screencast video directly from that.
Process
I can’t find anything that lets me do what I want, so for now I’m trying to automate generation of the video and audio separately.
I’m trying a scriptable mouse/keyboard desktop automation product called Keyboard Maestro. I can write keyboard/mouse macros for specific common actions (open the VS Code terminal, select a file, etc.), then assemble these into an overall macro for the screencast.
This is very clunky for this purpose; I really wish some product like this offered a real programming language. In any event, I play the macro while recording the screen to get the video portion of the screencast.
The script is a YAML file with lines of dialogue for each of the “actors”:
- echo: >
    To illustrate the basic ideas in their plainest form, let's start by writing
    some expressions in Origami using the command-line interface called ori. If
    I type "ori 1 plus 1", it evaluates that and displays 2.
- echo: >
    If I type "ori hello", ori displays hello. In the shell, you'll need to
    escape quotes or surround them with extra quotes because the shell itself
    consumes quote marks.
- shimmer: >
    In addition to basic numbers and strings, you can reference files. Think of
    each file as if it were a spreadsheet cell. Instead of the A1, B2 style cell
    references in a spreadsheet, we can use paths and file names to refer to
    things. Unlike most programming languages, names in Origami can include
    characters like periods and hyphens.
This produces the audio portion of the screencast.
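As a rough sketch of this step (not my exact pipeline; the js-yaml dependency, file names, voice mapping, and macOS say command are stand-ins for illustration), a small script could synthesize one audio clip per line of dialogue:

// make-audio.mjs: illustrative sketch only
import { execFileSync } from "node:child_process";
import fs from "node:fs";
import yaml from "js-yaml"; // assumed dependency

// hypothetical mapping from screenplay actor to a speech voice
const voices = { echo: "Alex", shimmer: "Samantha" };

// read the screenplay (file name assumed) and synthesize one clip per line
const script = yaml.load(fs.readFileSync("screenplay.yaml", "utf8"));
script.forEach((line, index) => {
  const [actor, text] = Object.entries(line)[0];
  execFileSync("say", ["-v", voices[actor], "-o", `clip-${index}.aiff`, text]);
});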
I then use Camtasia to merge the audio and video to create the final screencast file. This requires a fair bit of work to position the audio clips in relation to the video, and in many places to add delays in the video to give the audio time to play.
Lessons
This editing process still takes time, but recording the audio and video took far less time than for previous screencasts. And if Origami changes, it’s feasible to tweak the demo macro or dialogue, rerecord those parts, and splice them into the screencast.
I posted a draft of this screencast on the Origami discussion board to get feedback. I used that feedback to refine the audio and video, then spliced in the updated parts. Being able to iterate on a screencast is fantastic.
One unpleasant surprise: After a second round of feedback, I went to rerecord the video — and discovered that VS Code had made changes to its window chrome! Uh, rats. That means that new video clips can’t be used alongside old ones, which means having to reposition all the audio in relation to the new video. That’s a real time sink.
Future
I hope this approach pans out so that I can make more useful screencasts that can stay relevant for a longer time. Towards that end, I’d love to be able to:
Replace the keyboard/mouse desktop automation tool with something that has a real programming language, ideally JavaScript.
Annotate the desktop automation script with the dialogue so that I can somehow programmatically sync the audio and video tracks.
Alternatively, create a way to render something that looks like a generic code editor (and browser) with enough functionality to illustrate the kinds of points I make in screencasts, and where I can drive all the activity through code.
If you’re familiar with tools that can do any of those things, please let me know!
I’d love to find a few new people to try out the Origami programming language for creating websites — maybe you?
Maybe you:
Are thinking of making a site for a passion project but aren’t sure how
Have an existing site you want to move off a platform (WordPress, say) to something you control
Want to try rewriting a site to add more features
Want a site where you understand how it’s made
and you:
Are familiar with basic HTML and CSS (JavaScript knowledge is not required)
Can use a code editor (doesn’t matter which one)
Have some minimal experience running commands in a terminal window
The Origami programming language complements HTML and CSS to let you define the structure of a site. You write formulas or expressions at roughly the level of complexity of spreadsheet formulas. These fully determine the site you get; nothing happens unless you ask for it.
The language is concise and powerful. The above screenshot shows the code for a sample influencer lifestyle blog. Those ~15 lines of code establish the basic site structure, and another 20 lines in other files prepare the raw content and produce the blog feed. That’s all that’s required to define a blog engine completely from scratch. For comparison, a typical blog engine might require much more code in its configuration file alone and be much harder to reason about.
A playtest is generally done as a video call. You outline your goals and then either go through an Origami tutorial or we work together to start something from scratch. At this stage of the language’s evolution, direct observation is extremely helpful. Some people find that prospect intimidating, but this is a playtest! We will test the language — no one will be testing you.
The Origami documentation is complete enough that a motivated person could potentially get up and running on their own, but the onboarding won’t be as easy as with an already well-established, mature language.