When you publish on a platform, you are borrowing an address. Your writing lives under someone else’s name on someone else’s domain, and it stays there only as long as that company allows. You can share the link, but you do not control the address it points to.
Toiling on someone else's domain
Publishing on someone else’s domain is like working on someone else’s plantation. You may do the labour and produce the value, and you may be visible while you are useful. Someone else owns the ground beneath your work and decides what stays, what goes, and what is allowed to exist at all.
Owning a domain changes that relationship in practice. It gives you your own name on the internet and a place where your writing is not conditional on a platform’s rules. Hosting providers and publishing tools can change, but the address that holds your work remains under your control.
That matters because URLs are promises about where something can be found. When you publish a page, you are telling the world that something lives at a particular location. If that location belongs to someone else, the promise belongs to them. If you own the domain, the promise is yours to keep.
That is why self-publishing on your own domain is about freedom as much as technology. A site on an address you control cannot be quietly erased or buried, and another party cannot reshape it. It becomes a place where your writing can exist on its own terms and persist for as long as you decide it should.
I keep returning to the early web’s ideas because they offered a clean technical answer to an ordinary human problem: publishing something and knowing it will still be there tomorrow. You could put a document on a server and give it a URL, and that link would mean something. The page lived on your server, and the link stayed stable because it pointed to a file you controlled.
This is for everyone — London 2012 Olympics.
The web’s original promise rested on open protocols and a light technical burden. HTTP and HTML were open enough for anyone to implement and small enough to understand. If you wanted to publish, you bought a domain and wrote a page. Then you put it online yourself and shared the link.
For a long time, I drifted away from that. Social platforms made publishing effortless by removing hosting work and day-to-day site upkeep. They also bundled identity, comments, and distribution into the feed. The friction was low and the audience was already there, so the trade felt reasonable even as I gave up control. That convenience has a cost because the platform controls the surface where your work appears and how long it stays visible. Your writing lasts only while it fits inside someone else’s system.
Self-publishing, for me, is a technical choice. Owning a domain and shipping plain HTML means I decide what appears and how long it stays online, with presentation under the same control. A piece of writing stays available while I keep it there, and a page I host will still exist tomorrow if I decide it should.
This way of working also changes how I think about the archive. When everything lives on my own site, I set the rules of what matters and how it surfaces. It is a collection of files and links that I am responsible for. That makes the archive legible and inspectable. It becomes something I can maintain for years without renting it from a platform.
This series is about that choice and treats publishing as infrastructure. I want to write in a space that I control, using tools that do not disappear when a service shuts down. Plain HTML still exists alongside stable URLs and open protocols. I am building with those constraints because they keep the writing mine and keep the system open to anyone who wants to read it or take it elsewhere. This is why I build this system and publish here.
If you missed the earlier piece, start with Why Websites Need Templates. It lays out why templates exist and why the shared frame matters. It also gives the baseline this piece argues against.
Once a template starts making decisions, it stops behaving like a document and begins acting like a control layer. It gets access to collections of articles and loops through them. It filters results and hides sections based on conditions. That is convenient at first and hard to reason about later, because the output depends on hidden logic inside the layout.
Here is the type of template logic I am talking about. It looks like layout, but it carries behaviour.
{% if featured_posts.length > 0 %}
  <h2>Featured</h2>
  <ul>
    {% for post in featured_posts %}
      <li><a href="{{ post.url }}">{{ post.title }}</a></li>
    {% endfor %}
  </ul>
{% endif %}
Here featured_posts stands for the posts I want to surface on the home page.
When templates carry that logic, I can no longer read the file and know what the HTML will be. A small edit to a condition can change the page structure, and a metadata tweak can move a heading or drop a list. That is the point where I stop trusting the output.
I avoid that by keeping templates from deciding what content exists. The build pipeline and the query layer take on the responsibility instead. That keeps selection in one place where I can inspect it.
Before the build touches any template, it walks the filesystem and reads frontmatter to construct an index of all available articles. Queries apply to that index to produce explicit, ordered lists of article records. Each query has a name, and that name refers to a specific deterministic result set.
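As an illustration, a single record in that index might look roughly like this. The field names mirror the frontmatter described later in this series, the url and date come from the folder path, and the values shown here are placeholders rather than a fixed schema:
{
  "title": "The Shape of the Archive",
  "summary": "How this blog organises itself on disk instead of inside a database.",
  "series": "genesis",
  "tags": ["tooling", "publishing"],
  "status": "published",
  "date": "2025-03-14",
  "url": "/2025/03/shape-of-the-archive/"
}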
By the time a template enters the pipeline, the interesting work is already complete. It receives the result of a query: a list of article records already in the correct order, and possibly empty. That list arrives as data, with nothing left to compute.
That is what lets templates stay simple. It keeps the scope small when I need to debug. It also keeps the source legible when I return to it months later.
A template contains ordinary HTML and one or more <template> tags that act as render slots. Each slot names a query so the build knows what to stamp. At build time, the system takes the results of that query and stamps the corresponding content into the slot. If the query returned nothing, the system uses the fallback HTML inside the <template> tag instead. After stamping, the build removes the <template> tag itself, leaving only normal HTML behind.
Because of this, a template never needs conditionals or loops. It never needs to know how many articles exist or whether a tag is empty, and it does not decide ordering. The build resolves all of that earlier. The template defines where the stamped output should appear and what should show up when there is nothing to stamp.
This turns templating into a mechanical operation. Given the same set of article records and the same queries, the same templates will always produce the same HTML. There is no hidden state and no branching behaviour inside the template files themselves.
It also makes the templates readable in a way that is hard to achieve in more conventional systems. When I open one, I see the page structure as it will exist in the final output. I see the header and navigation, then the main content with its fallback messages for empty sections. There is no embedded logic to mentally execute. The only dynamic pieces are clearly marked as slots.
That is what I mean by calling the system boring. I want the boring sections to stay readable, not clever. The templates stay simple: they ignore metadata, sorting rules, and visibility flags, and instead provide a fixed shape that content slots into. Boring here means predictable and inspectable over time.
Here is a concrete example that matches the home page most readers expect. It is a normal page with a header and navigation plus a list of recent posts. The only special tag is the slot where the build will stamp the query results:
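(A minimal sketch follows; the data-query attribute I use to name the latest-posts query is illustrative, standing in for whatever attribute the build actually reads.)
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Home</title>
</head>
<body>
  <header>
    <h1>My Site</h1>
    <nav><a href="/">Home</a> <a href="/archive/">Archive</a></nav>
  </header>
  <main>
    <h2>Recent posts</h2>
    <!-- Render slot: the build stamps the results of the latest-posts query
         here, then removes the <template> tag. The attribute name is a guess. -->
    <template data-query="latest-posts">
      <p>Nothing published yet.</p>
    </template>
  </main>
  <footer>
    <p><a href="/about/">About</a></p>
  </footer>
</body>
</html>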
Everything in that file is real HTML. If I open it in a browser, it renders as a page with an empty state because the <template> tag is inert by design. When the build runs, the system replaces that one tag with the output of the latest-posts query. Three returned articles become three summaries in that space, and the build then removes the <template> tag itself. If the query returns nothing, the fallback paragraph remains and the rest of the page stays the same.
That simplicity keeps the system predictable and inspectable, and it stays durable over time. The complexity lives in the build process and the query definitions, where I can validate it and reason about it directly. The templates remain what they look like: static HTML documents with a few clearly defined places where content will appear. If you can read the template, you can understand the page.
The aim is for a reader to open a template and feel the page is already there, with only the content missing.
When I load a web page in a browser, what arrives is a single document. It might be long and include navigation plus footers and sidebars around a main column of text, but to the browser it is just one block of HTML.
When a site grows beyond a handful of pages, I am no longer writing one document but a whole set. Each page carries its own content, yet the structure repeats. The header and navigation stay consistent across the set. The footer and typography stay consistent, as does the layout.
If I were to copy that shared structure into every file, the site would be hard to maintain. A small change to the navigation or layout would require touching every page. Over time those copies drift, and the site becomes inconsistent. This is the practical problem templates exist to solve for me.
A template lets me write the shared structure once and reuse it. I define a common outer document with <html> and <head>, plus a shared header and navigation. The footer sits in the same frame, and I leave one or more places for page-specific content. Each article or page then supplies its own content, and the build places it into those slots.
In its simplest form, templating is just composition. A page is the result of taking some content and placing it inside a larger HTML frame. The content and the frame are separate files, but the output is a single document.
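As a sketch of that composition, the shared frame might look like this, with a single placeholder marking where each page’s content lands. The placeholder syntax is illustrative:
<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{{ title }}</title>
</head>
<body>
  <header>
    <nav><a href="/">Home</a> <a href="/archive/">Archive</a></nav>
  </header>
  <main>
    <!-- Each article's content is placed here at build time. -->
    {{ content }}
  </main>
  <footer>
    <p>Shared footer for every page.</p>
  </footer>
</body>
</html>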
This separation matters because it lets me work on content and structure independently. I can write or revise an article without touching the site chrome. A change to the site chrome does not require me to rewrite the archive. The template system is what connects those two strands.
Once a site grows beyond a few pages, this quickly becomes essential. Without templates, I either duplicate markup everywhere or invent ad-hoc scripts to assemble pages. Templates are the conventional way to express that assembly.
This baseline matters because the rest of the series argues for a stricter, stamp-like approach to templates.
Where templates get more complicated is in how much responsibility I give them. Many systems ask templates to do more than place content into a frame. They ask templates to decide which pieces of content should appear and in what order, sometimes with conditions. That turns the template into a control layer with decisions baked in.
Whether that is a good idea depends on what I want from my publishing system. That is why I want templates to act as mechanical stamps that receive prepared content. I use the next piece to make that trade-off explicit.
In the next article I look at how that additional responsibility changes what templates are, and why it affects the clarity and predictability of the final HTML. If you want to continue, head to Templates as Mechanical Stamps.
This blog only works if it lets me publish day-to-day technical work without friction, with AI drafting alongside me while I keep control of the final voice. That constraint is the reason the rest of the system exists.
The web itself is the anchor for this project. I want documents that read cleanly in source and still make sense without scripts. Links should behave as stable addresses and stay clear of runtime actions. The site should feel like the early web did when a page was a page and a URL meant what it said.
That stance forces decisions about durability and access. Accessibility and internationalisation are core requirements because headings and landmarks have to make sense to assistive tools. Layout choices cannot collapse when language or fonts change, or when the reader never uses a mouse. Performance and cacheability sit in the same layer because pages have to load fast on slow networks and stay responsive on old devices.
Discoverability sits beside all of that and keeps the surface legible to crawlers. Clean URLs and clear titles should show intent without JavaScript. Indexable pages should do the same work. I use unobtrusive JavaScript where it helps navigation, but the baseline page remains intact with classic HTML as the foundation.
The tooling has to match the posture, so I keep the toolchain small and inspectable. I add theming and user settings only when they serve reading, and I leave the rest out.
The repository is the blog and the build system, with the record of how the build changes kept in the same place. A post is a folder with a Markdown file and nearby assets, plus a small amount of metadata that makes indexing possible. The folder tree is the public rhythm of the blog. Dates become paths and months become folders, so each post sits as a leaf in the tree. I can point at the layout and show how the site thinks.
Automation has a narrower role here and stays focused on repetition. I rely on a handful of scripts that are obvious when I read them, because that keeps the system legible to me and to the AI that helps maintain it. The scripts handle repetition and the repo holds the permanent record. When the AI adds a script, it becomes an event in the diary and the change itself becomes content.
Templates are plain HTML and only stamp content into a page. Queries decide what exists, and those queries live beside the content they select. That is the heart of this approach. The data stays visible and the selection stays visible. The render stays visible once the build runs. When I say the blog is the build system, I mean the path from idea to page is traceable inside one repository.
Pick any entry and follow its folder path to trace the decision trail that produced it. If I drop these constraints, the archive loses its promise and the project fails its own test.
I treat indexing as a query problem, not a rendering problem. Every list on the site comes from a named query that selects a set of articles and a sort order. That keeps selection declarative and repeatable, and it keeps logic out of templates.
Queries live in JSON, not inside templates. A template names the query and provides a slot. Then the build fills that slot with either summaries or full article bodies. The template never decides what exists, and it never learns how selection works.
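As a sketch, two named queries might read like this; the filter, sort, and limit fields are my illustration of the idea rather than a fixed schema:
{
  "latest-posts": {
    "filter": { "status": "published" },
    "sort": { "by": "date", "order": "desc" },
    "limit": 5
  },
  "series-genesis": {
    "filter": { "series": "genesis", "status": "published" },
    "sort": { "by": "date", "order": "asc" }
  }
}
The field names are stand-ins; the shape that matters is that each name maps to a filter and a sort, nothing more.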
This separation keeps indexing logic small and inspectable. If a list looks wrong, I can read the query and see exactly why those items appear. The build avoids inference and guesswork, which prevents hidden selection and keeps the lists predictable.
The constraint is intentional because queries are plain objects, not a DSL. They cannot grow into a second programming language. That limit is a feature: it prevents hidden logic and keeps the system declarative, not imperative.
This keeps the lists debuggable because every entry maps back to a named filter.
Series and tag pages use the same index but serve different reading modes. A series carries narrative order, so those pages sort by date ascending and read like a sequence. Tag pages group by topic and sort by recency, because readers usually want the latest work first. Both views are built from the same frontmatter table, with the difference expressed in the query and sort rules rather than in extra fields.
Feeds follow the same rule: the global feed and the tag and series feeds come from named queries rather than scraped HTML. That keeps discovery aligned with the rest of the site and makes the outputs small and deterministic.
I can change a template without changing the data, or refine a query without touching markup. Indexes remain mechanical outputs of named inputs. The result is no surprise lists and no invisible filters.
Frontmatter defines an article for the build, but the body carries the meaning for readers. It is the section a reader actually meets, so I treat it as the record that must survive every build. Everything below the YAML block exists to be read, and that focus drives the format and the discipline. The body holds the text and keeps links and code beside the media, so the file stays readable before and after the build touches it.
Markdown keeps the text close to plain language. Paragraphs and headings read the same in a terminal or a diff viewer, so the file stays legible wherever it travels. That consistency lets me review drafts without leaving the file. That stability is why I keep the body in Markdown even as the rest of the system shifts. I begin the body with the title and sometimes a byline so the file stands on its own. Dates and tags sit in metadata blocks outside the prose. Sections use normal Markdown headings, which keeps the body free of page and layout syntax.
Layout lives in templates and the body stays focused on narrative structure. I add section breaks to serve the argument, and the template controls how headings appear on screen. Links point to published URLs and stay explicit in the prose, which keeps filesystem paths out of the body. Code blocks use fenced Markdown and avoid inline tooling because the code exists to be read. Syntax highlighting can happen in the browser when it helps.
Images and other assets live beside the article inside an assets/ folder, so each entry carries its own attachments. There is no shared media pool, which keeps the archive legible on disk and makes a clone complete by default. It also keeps attribution and context close to the writing that references the files.
The Markdown references assets by relative path so the entry stays self-contained when the folder moves. The build copies the assets/ directory into the public tree and keeps the same structure in place, which keeps images working in indexes and on their own page. The source stays clean because the paths remain visible and easy to review.
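For example, an image in the body points at the entry’s own assets folder; the filename here is just an illustration:
![Hero image for this entry](assets/hero.jpg)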

The body avoids embedding logic and leaves lazy loading and presentation to templates, while the Markdown stays presentation-agnostic and media framing stays in the layout layer. That boundary keeps the writing stable, so I can edit a paragraph or add a code block without worrying about the rest of the system. The body remains a record, not a rendering script.
These posts describe the system that publishes them, and they do so in the same format as every other article. That keeps the documentation inside the pipeline, and the body serves as evidence of the approach. When the build changes, the archive stays reliable. If an idea cannot live inside Markdown with links and headings, I treat that as a design problem and fix the system until it can. The body is my promise to readers: it stays readable even if the build disappears.
I treat each article as a single Markdown file inside its own folder because I want the archive to stay inspectable and durable. A reader can open the file and see the full text in a stable shape, which keeps the record legible years later. You can also clone the archive and read it in plain text, even when the build changes.
Each entry lives in its own folder. The folder path carries the date and slug. The file carries the writing, so URLs stay aligned with edits.
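One plausible layout, using the entry from the frontmatter example below; the year, month, and slug are placeholders:
2025/
  03/
    shape-of-the-archive/
      article.md
      assets/
        hero.jpg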
Open article.md in any entry and you always see the same two-section structure. A YAML block sits at the top and the Markdown body follows, which keeps the writing distinct from the data surface. I write the body for readers and use frontmatter so the build can work without reading it.
---
title: "The Shape of the Archive"
summary: "How this blog organises itself on disk instead of inside a database. The path does the work."
series: genesis
tags: [tooling, publishing]
status: published
thumbnail: hero.jpg
---
Here is where the writing begins…
Everything above the divider is frontmatter for indexing, and everything below it carries the published text. The build reads the YAML block to assemble lists for series and tags, then renders the body as the article page. Titles and summaries let index pages render without scraping prose, and the series value places the article inside a narrative. Tags attach the entry to topics, and status controls visibility, so lists stay mechanical and templates stay focused on structure.
That pairing is the reason each permalink stays stable and reviewable even when the build changes.
Some values appear in both places because each surface has a job. The frontmatter title serves lists, while the body opens with its own headline. The summary can stay out of the body entirely, which keeps index pages concise while the article stays expansive. This separation keeps the body’s voice intact even when I refine index rules.
From a tooling point of view this keeps articles easy to work with. The file reads cleanly in a text editor. The metadata parses cleanly in small scripts. Git diffs stay tight enough to scan in seconds.
For readers this means the permalink remains stable and the record stays open to inspection. For me it means a predictable build and a file I can review years later. That durability is the payoff of treating the file system as the source.
I'm building this in public because writing details down changes how I think. When ideas stay private they remain vague and provisional. Writing for an imagined reader sharpens the ideas and shows the gaps as decisions harden on the page.
This repository is where I'm working decisions out and where those workings live. The notes and specs sit beside the code they shape, with half-decisions and revisions kept close by. They stay inside the process, present while I shape the code.
The writing is one of the tools I'm using to build the system. It affects the choices I make and keeps the work accountable because it has to be readable to someone besides me. One concrete example: when I wrote down the rule that templates should remain pure HTML, it stopped me from adding conditional logic in a rush. That single paragraph forced me to move selection into named queries and keep the rendering mechanical, which is now a fixed constraint in the build.
Speed matters to this experiment because the work happens in real time and the record only helps if I keep it current. That pace has a cost: it pulls me toward familiar phrasing and tidy scaffolding that drains the writing of its edge. I feel the slide when I announce a topic and leave the decision unstated, or when sentences line up into the same rhythm.
The prose-lint script is my counterweight. It catches scene-setting lines that do no work and contrast framing that drifts into narration. It also flags tidy lists that flatten nuance. One example I keep fixing is the empty opener. Draft: "This section is about speed and quality in writing." Revision: "I write fast to capture decisions while they are fresh. I then slow down to keep the voice intact." After the rewrite, I run another pass and keep the decision in front.
A central concern in the experiment is how AI fits into this. I want AI as a constrained collaborator, with clear boundaries around authority, intent, and what I allow it to change. Those constraints preserve the shape of the work while still letting me take advantage of the speed when it helps. The blog that will appear here comes from the project and follows the same rules, so templates and queries show up alongside build scripts and deployment because the writing needs them. Another example came from tagging, where writing down the normalisation rule forced me to rename a handful of tags and rebuild the indexes so the archive stayed coherent and kept near-duplicates in one place.
I am still working out my position, and I take my time with it. This is an attempt to think carefully and design slowly while leaving a readable trail behind. The trail should stand on its own, even if I am the only person who ever reads it end to end.
I built a linting script to flag formulaic phrasing and hedge words that creep into prose during fast, AI-assisted writing. The script runs against drafts by default and stops the build if too many issues accumulate in a single file. Speed matters to this project, yet unexamined speed produces exactly the kind of writing I want to avoid. I need a mirror that catches when prose drifts into explanation mode or starts sounding like a corporate blog post. The linter serves as that mirror by reflecting my own habits back to me while I draft. Each rule carries a numeric weight; the high-severity rules target common formulaic failures that almost never appear in careful writing. The script flags these immediately to keep the voice sharp and direct.
Medium-weight rules catch passive voice and weak paragraph openers. Lower-weight rules flag hedge words that dilute precision. The script also enforces standard British spelling and watches for first-person plural pronouns. This blog is a personal diary written in first person singular, so the tool flags specific collective pronouns to keep the voice anchored and avoid a detached tone.
One of the heavier checks targets contrast framing by flagging ideas defined by negation followed by correction. AI-generated text produces these constructions constantly because they simulate structure without requiring deeper thought. The script scores each contrast construction and caps the penalty per paragraph so repetition doesn't blow out the score. When it catches these constructions, it samples the matches and shows where the framing appeared.
It also checks sentence-length variance and paragraph uniformity. Low variance produces rhythmic monotony that makes prose feel generated, while high use of markers like "however" and "moreover" signals over-structured thinking. Paragraphs that stay uniformly short or uniformly long create a flattened reading experience, so the script flags both extremes. It also counts list blocks because bullet points can replace explanation when overused.
The script supports three severity levels: high, medium, and low. Each problem is assigned a severity based on its rule's weight, and each file is evaluated against per-file thresholds. The defaults allow one high-severity issue, three medium, and six low before a file fails the gate. The build fails if any file exceeds its thresholds. During local development, the script prints warnings without blocking so I can keep moving while still seeing where the prose needs attention. Strict mode rejects any file with issues, which helps during final review passes. Gate mode balances friction with velocity by allowing some imperfection while still catching the worst constructions before they ship.
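To make those numbers concrete, here is a hypothetical sketch of the thresholds written down as configuration; the real script may hard-code them or store them in a different format entirely:
# Hypothetical per-file gate thresholds (illustration only).
high: 1      # one high-severity issue allowed before the file fails
medium: 3
low: 6
mode: gate   # local runs warn without blocking; strict rejects any issue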
This project exists to test a new model of software development and content production in public. The repository is both the lab and the record, and the system I am building will publish the discoveries that come out of its own construction. If you want the map of how the system thinks, treat this series as a lab notebook with working notes and decisions in view. I want readers to see the system forming alongside the result.
The first content is the documentation process itself, so I write while I build and treat the conversation and decisions as raw material with constraints and contradictions kept visible. The Q/A process I am working out here is the real work, and it becomes the content. Each entry shows a decision while it is still forming, before it hardens into tooling.
Over time, this back-and-forth will harden into scripts as the workflow stabilises and the content aligns with the system it describes. The point of doing it in the open is that the system proves itself by publishing its own formation. The diary becomes the tool because each change leaves a record I can test and reuse.
I minimise imports and treat third-party tools as a last resort. When a problem fits in a small script, I write it and keep it in the repo so the implementation stays visible. When the problem is larger than that, I import a library and document the specific gap it fills.
On the publishing side, I am aiming for the most boring convention that still works: a home page index with the newest post at the top. Each post also gets a dedicated article page with a stable permalink. That is the shape of the site I want to live inside. It lets me read the archive as a plain list without extra machinery. It also keeps the build simple enough to inspect.