mirror of https://git.sr.ht/~seirdy/seirdy.one synced 2024-09-19 20:02:10 +00:00

Reformat markdown with mdfmt

This commit is contained in:
Rohan Kumar 2021-01-24 14:42:40 -08:00
parent 8d43e65750
commit a078d6a9ee
No known key found for this signature in database
GPG key ID: 1E892DB2A5F84479
8 changed files with 215 additions and 595 deletions


@@ -1,14 +1,9 @@
seirdy.one
==========
[![sourcehut](https://img.shields.io/badge/repository-sourcehut-lightgrey.svg?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZlcnNpb249IjEuMSINCiAgICB3aWR0aD0iMTI4IiBoZWlnaHQ9IjEyOCI+DQogIDxkZWZzPg0KICAgIDxmaWx0ZXIgaWQ9InNoYWRvdyIgeD0iLTEwJSIgeT0iLTEwJSIgd2lkdGg9IjEyNSUiIGhlaWdodD0iMTI1JSI+DQogICAgICA8ZmVEcm9wU2hhZG93IGR4PSIwIiBkeT0iMCIgc3RkRGV2aWF0aW9uPSIxLjUiDQogICAgICAgIGZsb29kLWNvbG9yPSJibGFjayIgLz4NCiAgICA8L2ZpbHRlcj4NCiAgICA8ZmlsdGVyIGlkPSJ0ZXh0LXNoYWRvdyIgeD0iLTEwJSIgeT0iLTEwJSIgd2lkdGg9IjEyNSUiIGhlaWdodD0iMTI1JSI+DQogICAgICA8ZmVEcm9wU2hhZG93IGR4PSIwIiBkeT0iMCIgc3RkRGV2aWF0aW9uPSIxLjUiDQogICAgICAgIGZsb29kLWNvbG9yPSIjQUFBIiAvPg0KICAgIDwvZmlsdGVyPg0KICA8L2RlZnM+DQogIDxjaXJjbGUgY3g9IjUwJSIgY3k9IjUwJSIgcj0iMzglIiBzdHJva2U9IndoaXRlIiBzdHJva2Utd2lkdGg9IjQlIg0KICAgIGZpbGw9Im5vbmUiIGZpbHRlcj0idXJsKCNzaGFkb3cpIiAvPg0KICA8Y2lyY2xlIGN4PSI1MCUiIGN5PSI1MCUiIHI9IjM4JSIgc3Ryb2tlPSJ3aGl0ZSIgc3Ryb2tlLXdpZHRoPSI0JSINCiAgICBmaWxsPSJub25lIiBmaWx0ZXI9InVybCgjc2hhZG93KSIgLz4NCjwvc3ZnPg0KCg==)](https://sr.ht/~seirdy/seirdy.one) [![GitLab mirror](https://img.shields.io/badge/mirror-GitLab-orange.svg?logo=gitlab)](https://gitlab.com/Seirdy/seirdy.one) [![GitHub mirror](https://img.shields.io/badge/mirror-GitHub-black.svg?logo=github)](https://github.com/Seirdy/seirdy.one)
[![builds.sr.ht status](https://builds.sr.ht/~seirdy/seirdy.one.svg)](https://builds.sr.ht/~seirdy/seirdy.one)
Code for my personal website, [seirdy.one](https://seirdy.one). Built with Hugo.
@@ -42,7 +37,7 @@ To test in CI, after deploying to the staging environment:
- webhint CLI
- [lighthouse-ci](https://github.com/GoogleChrome/lighthouse-ci)
CI also runs [static-webmentions](https://github.com/nekr0z/static-webmentions) to gather a list of WebMentions for me to send and review manually.
See the `Makefile` for details. The CI saves lighthouse reports as a build artifact.


@@ -1,26 +1,20 @@
---
outputs:
- html
- gemtext
title: Seirdy's Home
---
<div class="h-card">
Seirdy
======
It me! This is the personal website for <span class="h-card p-author"><a href="https://seirdy.one" rel="author home" class="u-url url">{{% indieweb-icon %}}<span class="p-name"><span class="p-given-name">Rohan</span> <span class="p-family-name">Kumar</span></span></a>, a.k.a. <span class="p-nickname">Seirdy</span> (online handle).</span>
Other versions of this website
------------------------------
In addition to its <a class="u-url" href="https://seirdy.one" rel="me">canonical URL</a>, a "rough draft" of this website also exists on my <a class="u-url" href="https://envs.net/~seirdy" rel="me">Tildeverse page</a>. This site's content also appears on my <a class="u-url" href="gemini://seirdy.one" rel="me">Gemini space</a>.
About me
--------
@@ -32,20 +26,14 @@ I'm a FOSS enthusiast, software minimalist, and a CS+Math undergrad who likes
watching anime and tweaking his Linux setup.
</p>
Git repos: <a href="https://sr.ht/~seirdy" rel="me">Sourcehut</a>, <a href="https://github.com/Seirdy" rel="me">GitHub</a>, and <a href="https://gitlab.com/Seirdy" rel="me">GitLab</a>
Contact
-------
Contact me via <a class="u-email" href="mailto:seirdy@seirdy.one" rel="me">email</a> (<a rel="pgpkey authn" class="u-key" href="./publickey.asc">PGP</a>), or on the Fediverse via <a class="u-url" href="https://pleroma.envs.net/seirdy" rel="me">my Pleroma account</a>.
Chat with me: I prefer IRC, where my nick is usually Seirdy. Alternatively, I'm <a class="u-url" href="https://matrix.to/#/@seirdy:envs.net" rel="me">@seirdy:envs.net</a> on Matrix.
</div>


@@ -3,7 +3,6 @@ date: 2020-10-30
layout: post
title: Seirdy (Rohan Kumar)
---
Rohan Kumar : He/Him : Age 20
Online Handle: Seirdy
@@ -11,15 +10,7 @@ Online Handle: Seirdy
Other versions of this website
------------------------------
This page also exists on the [tildeverse](https://tildeverse.org/), a bunch of \*nix computers that let people sign up for shell accounts. A typical shell account features clients for IRC and email, common terminal/commandline utilities, and (most importantly) web hosting. Read about the tildeverse's [origins](https://web.archive.org/web/20180917091804/https://medium.com/message/tilde-club-i-had-a-couple-drinks-and-woke-up-with-1-000-nerds-a8904f0a2ebf), read [the FAQ](http://tilde.club/wiki/faq.html), pick [a tilde](http://tilde.club/%7Epfhawkins/othertildes.html) and [get started](http://tilde.club/~anthonydpaul/primer.html). My Tildeverse pages will serve as a "rough draft".
Content on this site also appears on [my Gemini space](gemini://seirdy.one)
@@ -36,27 +27,15 @@ Location (Seirdy, online)
My handle is "Seirdy" on all the platforms I use.
- The [Tildeverse](https://envs.net/~seirdy).
- Software forges: [Sourcehut](https://sr.ht/~seirdy), [GitHub](https://github.com/Seirdy), [GitLab](https://gitlab.com/Seirdy), and [Codeberg](https://codeberg.org/Seirdy).
- Social (federated): I'm [@Seirdy @pleroma.envs.net](https://pleroma.envs.net/seirdy) on the Fediverse.
- Social (non-federated): I'm Seirdy on [Hacker News](https://news.ycombinator.com/user?id=Seirdy), [Lobsters](https://lobste.rs/u/Seirdy), [Reddit](https://www.reddit.com/user/Seirdy/), [Tildes.net](https://tildes.net/user/Seirdy), and Linux Weekly News.
- Email: my address is <seirdy@seirdy.one>. I typically sign my emails with my public PGP key: [36B154A782AEA0AC](./publickey.asc). My key is also available via WKD.
- Chat: my nick is Seirdy on most IRC networks. If you don't like IRC, I'm also [@seirdy:envs.net](https://matrix.to/\#/@seirdy:envs.net) on Matrix. I generally prefer IRC to Matrix, and don't check Matrix very often.
If you find a "Seirdy" somewhere else and don't know whether or not it's me, please contact me and ask instead of assuming that it must be me.
My preferred forge for personal projects is Sourcehut, but my repositories have remotes for GitHub and GitLab too.
Interests, preferences, et cetera
---------------------------------
@@ -65,19 +44,13 @@ I ought to come up with more interests than these, but that sounds hard.
### Free software
The word "free" in "free software" refers to freedom, not price. I recommend looking it up if you aren't familiar.
I lean towards simplicity; I usually prefer CLIs that follow the UNIX philosophy over TUIs, and TUIs over GUIs. If a piece of software is complex enough to require a funding round, I would rather avoid it. There are exceptions, of course: I use a Linux distro with Systemd (Fedora), after all.
Some software I use: Fedora, Alpine Linux, SwayWM, Pandoc, mpv, mpd, Minetest, Neovim, tmux, newsboat, WeeChat, and zsh.
More information is available in my [dotfiles repo](https://sr.ht/~seirdy/dotfiles); check its README.
### Anime
@@ -94,13 +67,7 @@ I watch anime. Some of my favorites, in no particular order:
### Music
I've put together a periodically-updated [list of tracks](/music.txt) that I've rated 8/10 or higher in my mpd stickers database, auto-generated by some of my [mpd-scripts](https://git.sr.ht/~seirdy/mpd-scripts/tree/master/smart-playlists). I'm a fan of glitch, trailer music, and symphonic and power metal; I've also recently been getting into Japanese rock thanks to a few anime openings. Some of my favorite artists are The Glitch Mob, Pretty Lights, Beats Antique, Hammerfall, Badflower, Celldweller/Scandroid, Helloween, Two Steps from Hell, Nightwish, Mili, and MYTH & ROID.
<!--vi:ft=markdown.pandoc.gfm-->


@@ -1,70 +1,49 @@
---
date: "2020-11-17T13:13:03-08:00"
description: A series on setting up resilient git-based project workflows, free of
  vendor lock-in.
outputs:
- html
- gemtext
tags:
- git
- foss
title: "Resilient Git, Part 0: Introduction"
---
Recently, GitHub re-instated the [youtube-dl git repository](https://github.com/ytdl-org/youtube-dl) after following a takedown request by the RIAA under the DMCA. Shortly after the takedown, many members of the community showed great interest in "decentralizing git" and setting up a more resilient forge. What many of these people fail to understand is that the Git-based project setup is designed to support decentralization by being fully distributed.
Following the drama, I'm putting together a multi-part guide on how to leverage the decentralized, distributed nature of git and its ecosystem. I made every effort to include all parts of a typical project.
I'll update this post as I add articles to the series. At the moment, I've planned to write the following articles:
1. [Hydra Hosting](../../../2020/11/18/git-workflow-1.html): repository hosting.
2. Community feedback (issues, support, etc.)
3. Community contributions (patches)
4. CI/CD
5. Distribution
The result of the workflows this series covers will be minimal dependence on outside parties; all members of the community will easily be able to get a copy of the software, its code, development history, issues, and patches offline on their machines with implementation-neutral open standards. Following open standards is the killer feature: nothing in this workflow depends on a specific platform (GitHub, GitLab, Gitea, Bitbucket, Docker, Nix, Jenkins, etc.), almost eliminating your project's "bus factor".
Providing a way to get everything offline, in a format that won't go obsolete if a project dies, is the key to a resilient git workflow.
Before we start: FAQ
--------------------
Q: What level of experience does this series expect?
A: Basic knowledge of git, and a very basic understanding of email. If you have a few repos and can use a third-party email client, you're good.
Q: Do you think you can get hundreds and thousands of people to drop Git{Hub,Lab,tea} and use a different workflow?
A: No, but that would be nice. If only five people who read this series give this workflow a spin and two of them like it and keep using it, I'd be happy.
Q: Is this workflow more difficult than my current one?
A: "Difficult" is subjective. I recommend TRYING this before jumping to conclusions (or worse, sharing those conclusions with others before they have a chance to try it).
Q: I'm not interested in trying anything new, no matter what the benefits are.
A: First of all, that wasn't a question. Second, this series isn't for you. You should not read this. I recommend doing literally anything else.
Next: Resilient Git, Part 1: [Hydra Hosting](../../../2020/11/18/git-workflow-1.html)


@@ -2,42 +2,29 @@
date: "2020-11-18T18:31:15-08:00"
description: Efficient redundancy via repository mirroring with nothing but git.
outputs:
- html
- gemtext
tags:
- git
- foss
title: "Resilient Git, Part 1: Hydra Hosting"
---
This is Part 1 of a series called [Resilient Git](../../../2020/11/17/git-workflow-0.html).
The most important part of a project is its code. Resilient projects should have their code in multiple places of equal weight so that work continues normally if a single remote goes down.
Many projects already do something similar: they have one "primary" remote and several mirrors. I'm suggesting something different. Treating a remote as a "mirror" implies that the remote is a second-class citizen. Mirrors are often out of date and aren't usually the preferred place to fetch code. Instead of setting up a primary remote and mirrors, I propose **hydra hosting:** setting up multiple primary remotes of equal status and pushing to/fetching from them in parallel.
Having multiple primary remotes of equal status might sound like a bad idea. If there are multiple remotes, how do people know which one to use? Where do they file bug reports, get code, or send patches? Do maintainers need to check multiple places?
No. Of course not. A good distributed system should automatically keep its nodes in sync to avoid the hassle of checking multiple places for updates.
Adding remotes
--------------
This process should be pretty straightforward. You can run `git remote add` (see `git-remote(1)`) or edit your repo's `.git/config` directly:
```gitconfig
[remote "origin"]
url = git@git.sr.ht:~seirdy/seirdy.one
fetch = +refs/heads/*:refs/remotes/origin/*
@@ -49,41 +36,32 @@ This process should be pretty straightforward. You can run `git remote add` (see
fetch = +refs/heads/*:refs/remotes/gh_upstream/*
```
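The same config can be produced from the command line instead of editing `.git/config` by hand. A minimal sketch (the sourcehut URL is the one from the config above; the GitHub URL and the second remote's name are illustrative assumptions):

```shell
# Register multiple primary remotes of equal status (example URLs)
git remote add origin git@git.sr.ht:~seirdy/seirdy.one
git remote add gh_upstream git@github.com:Seirdy/seirdy.one.git

# List every remote and its URLs to verify the setup
git remote -v
```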
If that's too much work--a perfectly understandable complaint--automating the process is trivial. Here's [an example from my dotfiles](https://git.sr.ht/~seirdy/dotfiles/tree/master/Executables/shell-scripts/bin/git-remote-setup).
Seamless pushing and pulling
----------------------------
Having multiple remotes is fine, but pushing to and fetching from all of them can be slow. Two simple git aliases fix that:
```gitconfig
[alias]
pushall = !git remote | grep -E 'origin|upstream' | xargs -L1 -P 0 git push --all --follow-tags
fetchall = !git remote | grep -E 'origin|upstream' | xargs -L1 -P 0 git fetch
```
Now, `git pushall` and `git fetchall` will push to and fetch from all remotes in parallel, respectively. Only one remote needs to be online for project members to keep working.
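Under the hood, those aliases just fan out one `git push` or `git fetch` per matching remote. Roughly equivalent shell, written as an explicit loop for clarity:

```shell
# What `git pushall` amounts to, as a plain loop instead of xargs -P 0:
# push to every remote whose name matches, all in parallel
for remote in $(git remote | grep -E 'origin|upstream'); do
  git push --all --follow-tags "$remote" &  # one background push per remote
done
wait  # block until every push finishes
```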
Advertising remotes
-------------------
I'd recommend advertising at least three remotes in your README: your personal favorite and two determined by popularity. Tell users to run `git remote set-url` to switch remote locations if one goes down.
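Switching a clone over when a remote disappears is a one-liner; a hypothetical example (the GitHub URL here is illustrative, not an instruction):

```shell
# Repoint "origin" at a surviving host if the current one goes down
git remote set-url origin https://github.com/Seirdy/seirdy.one.git

# Confirm the new URL took effect
git remote get-url origin
```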
Before you ask...
-----------------
Q: Why not use a cloud service to automate mirroring?
A: Such a setup depends upon the cloud service and a primary repo for that service to watch, defeating the purpose (resiliency). Hydra hosting automates this without introducing new tools, dependencies, or closed platforms to the mix.
Q: What about issues, patches, etc.?
@@ -91,9 +69,5 @@ A: Stay tuned for Parts 2 and 3, coming soon to a weblog/gemlog near you™.
Q: Why did you call this "hydra hosting"?
A: It's a reference to the Hydra of Lerna from Greek Mythology, famous for keeping its brain in a nested RAID array to protect against disk failures and beheading. It could also be a reference to a fictional organization of the same name from Marvel Comics named after the Greek monster for [similar reasons](https://www.youtube.com/watch?v=assccoyvntI&t=37) ([direct webm](https://seirdy.one/misc/hail_hydra.webm)).


@@ -1,14 +1,12 @@
---
date: "2020-11-02T11:06:56-08:00"
description: Seirdy's obligatory inaugural blog post, which is barely longer than this
  description.
outputs:
- html
- gemtext
title: Hello World
---
Hello, world! This is my first post.
It looks like I managed to somehow get this site working. You can look forward to a few more posts in the coming weeks.


@@ -1,179 +1,104 @@
---
date: "2021-01-12T00:03:10-08:00"
description: Using thermal physics, cosmology, and computer science to calculate password
  vulnerability to the biggest possible brute-force attack.
outputs:
- html
- gemtext
tags:
- security
- fun
title: Becoming physically immune to brute-force attacks
footnote_heading: References and endnotes
---
This is a tale of the intersection between thermal physics, cosmology, and a tiny amount of computer science to answer a seemingly innocuous question: "How strong does a password need to be for it to be physically impossible to brute-force, ever?"
[TLDR]({{<ref "#conclusiontldr" >}}) at the bottom.
_Note: this post contains equations. Since none of the equations were long or complex, I decided to just write them out in code blocks instead of using images or MathML the way Wikipedia does._
Introduction
------------
I realize that advice on password strength can get outdated. As supercomputers grow more powerful, password strength recommendations need to be updated to resist stronger brute-force attacks. Passwords that are strong today might be weak in the future. **How long should a password be in order for it to be physically impossible to brute-force, ever?**
This question might not be especially practical, but it's fun to analyze and offers interesting perspective regarding sane upper-limits on password strength.
Asking the right question
-------------------------
Let's limit the scope of this article to passwords used in encryption/decryption. An attacker is trying to guess a password to decrypt something.
Instead of predicting what tomorrow's computers may be able to do, let's examine the _biggest possible brute-force attack_ that the laws of physics can allow.
A supercomputer is probably faster than your phone; however, given enough time, both are capable of doing the same calculations. If time isn't the bottleneck, energy usage is. More efficient computers can flip more bits with a finite amount of energy.
In other words, energy efficiency and energy availability are the two fundamental bottlenecks of computing. What happens when a computer with the highest theoretical energy efficiency is limited only by the mass-energy of _the entire [observable universe](https://en.wikipedia.org/wiki/Observable_universe)?_
Let's call this absolute unit of an energy-efficient computer the MOAC (Mother of All Computers). For all classical computers that are made of matter, do work to compute, and are bound by the conservation of energy, the MOAC represents a finite yet unreachable limit of computational power. And yes, it can play Solitaire with _amazing_ framerates.
How strong should your password be for it to be safe from a brute-force attack by the MOAC?
### Quantifying password strength
_A previous version of this section wasn't clear and accurate. I've since removed the offending bits and added a clarification about salting/hashing to the [Caveats and estimates]({{<ref "#caveats-and-estimates" >}}) section._
A good measure of password strength is **entropy bits.** The number of entropy bits in a password is the base-2 logarithm of the number of guesses required to brute-force it.[^1]
A brute-force attack that executes 2<sup>n</sup> guesses is certain to crack a password with _n_ entropy bits, and has a one-in-two chance of cracking a password with _n_+1 entropy bits.
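For a password generated by choosing each character uniformly at random, the entropy is simply the length times the base-2 log of the alphabet size. A quick sketch (the 94-character alphabet below assumes the printable ASCII set, an arbitrary choice for illustration):

```python
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a password whose `length` characters are drawn
    uniformly at random from `alphabet_size` possible characters."""
    return length * math.log2(alphabet_size)

# A 16-character password over 94 printable ASCII characters:
print(entropy_bits(16, 94))  # ≈ 104.9 bits
```

This only holds for randomly generated passwords; human-chosen passwords contain far less entropy than their length suggests, which is why estimators like zxcvbn exist.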
For scale, [AES-256](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) encryption is currently the industry standard for strong symmetric encryption, and uses key lengths of 256 bits. An exhaustive key search over a 256-bit key space would be up against its 2<sup>256</sup> possible permutations. When using AES-256 encryption with a key derived from a password with more than 256 entropy bits, the entropy of the AES key is the bottleneck; an attacker would fare better by doing an exhaustive key search for the AES key than a brute-force attack for the password.
To calculate the entropy of a password, I recommend using a tool such as [zxcvbn](https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/wheeler) or [KeePassXC](https://keepassxc.org/).
The Problem
-----------
Define a function `P`. `P` determines the probability that the MOAC will correctly guess a password with `n` bits of entropy after using `e` energy:
P(n, e)
If `P(n, e) ≥ 1`, the MOAC will certainly guess your password before running out of energy. The lower `P(n, e)` is, the less likely it is for the MOAC to guess your password.
Caveats and estimates
---------------------
I don't have a strong physics background.
A brute-force attack will just guess a single password until the right one is found. Brute-force attacks won't "decrypt" stored passwords, because they're not supposed to be stored encrypted; they're typically [salted](https://en.wikipedia.org/wiki/Salt_(cryptography)) and hashed.
When estimating, we'll prefer higher estimates that increase the odds of it guessing a password; after all, the point of this exercise is to establish an _upper_ limit on password strength. We'll also simplify: for instance, the MOAC will not waste any heat, and the only way it can guess a password is through brute-forcing. Focusing on too many details would defeat the point of this thought experiment.
Quantum computers can use [Grover's algorithm](https://en.wikipedia.org/wiki/Grover%27s_algorithm) for an exponential speed-up; to account for quantum computers using Grover's algorithm, calculate `P(n/2, e)` instead.
Others are better equipped to explain encryption/hashing/key-derivation algorithms, so I won't; this is just a pure and simple brute-force attack given precomputed password entropy, assuming that the cryptography is bulletproof.
Obviously, I'm not taking into account future mathematical advances; my crystal ball broke after I asked it if humanity would ever develop the technology to make anime real.
Finally, there's always a non-zero probability of a brute-force attack guessing a password with a given entropy. Literal "immunity" is impossible. Lowering this probability to statistical insignificance renders our password practically immune to brute-force attacks.
Computation
-----------
How much energy does the MOAC use per guess during a brute-force attack? In the context of this thought experiment, this number should be unrealistically low. I settled on [`kT`](https://en.wikipedia.org/wiki/KT_(energy)). `k` represents the [Boltzmann Constant](https://en.wikipedia.org/wiki/Boltzmann_constant) (about 1.381×10<sup>-23</sup> J/K) and `T` represents the temperature of the system. Their product corresponds to the amount of heat required to create a 1 nat increase in a system's entropy.
A more involved approach to picking a good value might utilize the [Planck-Einstein relation](https://en.wikipedia.org/wiki/Planck%E2%80%93Einstein_relation).
It's also probably a better idea to make this value an estimate for flipping a single bit, and to estimate the average number of bit-flips it takes to make a single password guess. If that bothers you, pick a number `b` you believe to be a good estimate for a bit-flip-count and calculate `P(n+b, e)` instead of `P(n, e)`.
What's the temperature of the system? Three pieces of information help us find out:
1. The MOAC is located somewhere in the observable universe
2. The MOAC will be consuming the entire observable universe
3. The universe is mostly empty
A good value for `T` would be the average temperature of the entire observable universe. The universe is mostly empty; `T` is around the temperature of cosmic background radiation in space. The lowest reasonable estimate for this temperature is 2.7 kelvin.[^2] A lower temperature means less energy usage, less energy usage allows more computations, and more computations raise the upper limit on password strength.
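Plugging in numbers, the per-guess energy `kT` comes out to a minuscule figure. A two-line sketch (CODATA value of `k`; 2.7 K as above):

```python
k = 1.380649e-23  # Boltzmann constant, J/K
T = 2.7           # estimated average temperature of the universe, K

energy_per_guess = k * T
print(energy_per_guess)  # ≈ 3.73e-23 J per guess
```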
Every guess, the MOAC expends `kT` energy. Let `E` = the total amount of energy the MOAC can use; let `B` = the maximum number of guesses the MOAC can execute before running out of energy.
B = E/(kT)
Now, given the maximum number of passwords the MOAC can guess `B` and the bits of entropy in our password `n`, we have an equation for the probability that the MOAC will guess our password:
P(n,B) = B/2ⁿ
Plug in our expression for `B`:

P(n) = E/(kT·2ⁿ)
### Calculating the mass-energy of the observable universe
The MOAC can use the entire mass-energy of the observable universe.[^3] Simply stuff the observable universe into the attached 100% efficient furnace, turn on the burner, and generate power for the computer. You might need to ask a friend for help.
Just how much energy is that? The mass-energy equivalence formula is quite simple:
E = mc²
We're trying to find `E` and we know `c`, the speed of light, is 299,792,458 m/s. That leaves `m`. What's the mass of the observable universe?
### Calculating the critical density of the observable universe
Critical density is the smallest average density of matter required to _almost_ slow the expansion of the universe to a stop. Any denser, and expansion will eventually stop; any less dense, and it will never stop.
Let `D` = critical density of the observable universe and `V` = volume of the observable universe. Mass is the product of density and volume:
m = DV
We can derive the value of `D` by solving for it in the [Friedmann equations](https://en.wikipedia.org/wiki/Friedmann_equations):
D = 3Hₒ²/(8πG)
Where `G` is the [Gravitational Constant](https://en.wikipedia.org/wiki/Gravitational_constant) and `Hₒ` is the [Hubble Constant](https://en.wikipedia.org/wiki/Hubble%27s_law). `Hₒd` is the rate of expansion at proper distance `d`.
Let's assume the observable universe is a sphere, expanding at the speed of light ever since the Big Bang.[^4] The volume `V` of our spherical universe when given its radius `r` is:
V = (4/3)πr³
To find the radius of the observable universe `r`, we can use the age of the universe `t`:
r = ct
Hubble's Law estimates the age of the universe to be around `1/Hₒ`, making `r = c/Hₒ`.
### Solving for E
Let's plug in all the derived values into our original equation for the mass of the observable universe `m`:
m = DV
Remember when I opened the article by saying that none of the equations would be long or complex?
I lied.
m = (3Hₒ²/(8πG))(4/3)π(ct)³
m = c³/(2GHₒ)
E = mc²
E = c⁵/(2GHₒ)
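For concreteness, evaluating `m = c³/(2GHₒ)` with a Hubble-constant estimate of roughly 67.8 km/s/Mpc (an assumption; measured values vary by a few percent) gives a mass around 9×10⁵² kg, consistent with published estimates for the observable universe:

```python
c  = 299_792_458         # speed of light, m/s
G  = 6.674e-11           # gravitational constant, m³/(kg·s²)
H0 = 67.8e3 / 3.0857e22  # Hubble constant: 67.8 km/s/Mpc converted to 1/s

m = c**3 / (2 * G * H0)  # mass of the observable universe, kg
E = m * c**2             # its mass-energy, J (same as c⁵/(2GHₒ))
print(f"m ≈ {m:.2e} kg, E ≈ {E:.2e} J")
```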
Plugging those in and simplifying:

P(n) = c⁵/(2GHₒkT·2ⁿ)
Here are some sample outputs:
- P(256) ≈ 1.9×10<sup>15</sup> (password certainly cracked after burning just one 1.9-quadrillionth of the mass-energy of the observable universe).
- P(306.76) ≈ 1 (password certainly cracked after burning the mass-energy of the observable universe)
- P(310) ≈ 0.11 (about one in ten)
- P(326.6) ≈ 1.1×10<sup>-6</sup> (about one in a million)
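The sample outputs above can be reproduced by combining `P(n,B) = B/2ⁿ`, `B = E/(kT)`, and `E = c⁵/(2GHₒ)`. A sketch (the Hubble-constant value of about 67.8 km/s/Mpc is an assumption, roughly what reproduces the 306.76-bit break-even figure):

```python
import math

k  = 1.380649e-23        # Boltzmann constant, J/K
T  = 2.7                 # estimated temperature of the universe, K
c  = 299_792_458         # speed of light, m/s
G  = 6.674e-11           # gravitational constant, m³/(kg·s²)
H0 = 67.8e3 / 3.0857e22  # Hubble constant, 1/s

B = c**5 / (2 * G * H0 * k * T)  # total guesses before the fuel runs out

def p_crack(n: float) -> float:
    """Probability the MOAC guesses an n-entropy-bit password before
    exhausting the mass-energy of the observable universe."""
    return B / 2**n

print(math.log2(B))    # ≈ 306.8 bits: the break-even entropy
print(p_crack(310))    # ≈ 0.11
print(p_crack(326.6))  # ≈ 1.1e-6
```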
If your threat model is a bit smaller, simulate putting a smaller object into the MOAC's furnace. For example, the Earth has a mass of 5.972×10²⁴ kg; this gives the MOAC a one-in-ten-trillion chance of cracking a password with 256 entropy bits and a 100% chance of cracking a 213-bit password.
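The Earth-sized version is the same arithmetic with `E = mc²` for Earth's mass (a sketch; constants as before):

```python
import math

k, T = 1.380649e-23, 2.7  # Boltzmann constant (J/K), temperature (K)
c = 299_792_458           # speed of light, m/s
m_earth = 5.972e24        # mass of the Earth, kg

B = (m_earth * c**2) / (k * T)  # guesses available from burning the Earth
print(math.log2(B))  # ≈ 213 bits: certainly crackable
print(B / 2**256)    # ≈ 1.2e-13: about one in ten trillion
```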
Sample unbreakable passwords
----------------------------
According to KeePassXC's password generator, each of the following passwords has an entropy between 330 and 340 bits.
Using the extended-ASCII character set:
Don't use actual excerpts from pre-existing works as your password.
Conclusion/TLDR
---------------
Question: How much entropy should a password have to ensure it will _never_ be vulnerable to a brute-force attack? Can an impossibly efficient computer--the MOAC--crack your password?
Answer: if a computer with the highest level of efficiency physically possible is made of matter, does work to compute, and obeys the conservation of energy, then it is limited only by its energy supply:
- A password with 256 bits of entropy is practically immune to brute-force attacks large enough to quite literally burn the world, but is quite trivial to crack with a universe-scale fuel source.
- A password with 327 bits of entropy is nearly impossible to crack even if you burn the whole observable universe trying to do so.
At that point, a formidable threat would rather use [other means](https://xkcd.com/538/) to unlock your secrets.
Further reading: alternative approaches
---------------------------------------
Check out Scott Aaronson's article, [Cosmology and Complexity](https://www.scottaaronson.com/democritus/lec20.html). He uses an alternative approach to finding the maximum bits we can work with: he simply inverts the [cosmological constant](https://en.wikipedia.org/wiki/Cosmological_constant).
This model takes into account more than just the mass of the observable universe. While we previously found that the MOAC can brute-force a password with 306.76 entropy bits, this model allows the same for up to 405.3 bits.
### Approaches that account for computation speed
This article's approach deliberately disregards computation speed, focusing only on energy required to finish a set of computations. Other approaches account for physical limits on computation speed.
One well-known approach to calculating physical limits of computation is [Bremermann's limit](https://en.wikipedia.org/wiki/Bremermann%27s_limit), which calculates the speed of computation given a finite amount of mass. This article's approach disregards time, focusing only on mass-energy equivalence.
[A publication](https://arxiv.org/abs/quant-ph/9908043)[^5] by Seth Lloyd from MIT further explores limits to computation speed on an ideal 1-kilogram computer.
Acknowledgements
----------------
Thanks to [Barna Zsombor](http://bzsombor.web.elte.hu/) and [Ryan Colyer](https://rcolyer.net/) for helping me over IRC with my shaky physics and pointing out the caveats of my approach. u/RisenSteam on Reddit also corrected an incorrect reference to AES-256 encryption by bringing up salts.
My notes from Thermal Physics weren't enough to write this; various Wikipedia articles were also quite helpful, most of which were linked in the body of the article.
While I was struggling to come up with a good expression for the minimum energy used per password guess, I stumbled upon a [blog post](https://www.schneier.com/blog/archives/2009/09/the_doghouse_cr.html) by Bruce Schneier. It contained a useful excerpt from his book _Applied Cryptography_[^6] involving setting the minimum energy per computation to `kT`. I chose a more conservative estimate for `T` than Schneier did, and a _much_ greater source of energy.
[^1]: James Massey (1994). "Guessing and entropy" (PDF). Proceedings of 1994 IEEE International Symposium on Information Theory. IEEE. p. 204.
[^2]: Assis, A. K. T.; Neves, M. C. D. (3 July 1995). "History of the 2.7 K Temperature Prior to Penzias and Wilson"
[^3]: The MOAC 2 was supposed to be able to consume other sources of energy such as dark matter and dark energy. Unfortunately, Intergalactic Business Machines ran out of funds since all their previous funds, being made of matter, were consumed by the original MOAC.
[^4]: This is a massive oversimplification; there isn't a single answer to the question "What is the volume of the observable universe?" Using this speed-of-light approach is one of multiple valid perspectives. The absolute size of the observable universe is much greater due to the way expansion works, but stuffing that into the MOAC's furnace would require moving mass faster than the speed of light.
[^5]: Lloyd, S., "Ultimate Physical Limits to Computation," Nature 406.6799, 1047-1054, 2000.
[^6]: Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996.
---
date: "2020-11-23T12:21:35-08:00"
description: A lengthy guide to making small sites that focus on content rather than form.
outputs:
- html
- gemtext
tags:
- web
- rant
- minimalism
title: An opinionated list of best practices for textual websites
---
_The following applies to minimal websites that focus primarily on text. It does not apply to websites that have a lot of non-textual content. It also does not apply to websites that focus more on generating revenue or pleasing investors than being good websites._
This is a "living document" that I add to as I receive feedback. See the [changelog](https://git.sr.ht/~seirdy/seirdy.one/log/master/item/content/posts/website-best-practices.md).
I realize not everybody's going to ditch the Web and switch to Gemini or Gopher today (that'll take, like, a month at the longest). Until that happens, here's a non-exhaustive, highly-opinionated list of best practices for websites that focus primarily on text:
- Final page weight under 50kb without images, and under 200kb with images. Page weight should usually be much smaller; these are upper-bounds for exceptional cases.
- Works in Lynx, w3m, links (both graphics and text mode), Netsurf, and Dillo
- Works with popular article-extractors (e.g. Readability) and HTML-to-Markdown converters. This is a good way to verify that your site uses simple HTML and works with most non-browser article readers (e.g. ebook converters, PDF exports).
- No scripts or interactivity (preferably enforced at the CSP level)
- No cookies
- No animations
- No fonts--local or remote--besides `sans-serif` and `monospace`. More on this below.
- No referrers
- No requests after the page finishes loading
- No 3rd-party resources (preferably enforced at the CSP level)
- No lazy loading (more on this below)
- No custom colors OR explicitly set both the foreground and background colors. More on this below.
- A maximum line length for readability
- Server configured to support compression (gzip, optionally zstd as well). It's a free speed boost.
- Supports dark mode via a CSS media feature and/or works with most "dark mode" browser addons. More on this below.
- A good score on Mozilla's [HTTP Observatory](https://observatory.mozilla.org/). A bare minimum would be 50, but it shouldn't be too hard to hit 100.
- Optimized images. More on image optimization below.
- All images labeled with alt-text. The page should make sense without images.
- Maybe HTTP/2. There are some cases in which HTTP/2 can make things slower. Run some tests to find out.
I'd like to re-iterate yet another time that this only applies to websites that primarily focus on text. If graphics, interactivity, etc. are an important part of your website, less (possibly none) of this article applies.
Earlier revisions of this post generated some responses I thought I should address below. Special thanks to the IRC and [Lobsters](https://lobste.rs/s/akcw1m) users who gave good feedback!
About fonts
-----------
If you really want, you could use serif instead of sans-serif; however, serif fonts tend to look worse on low-res monitors. Not every screen's DPI has three digits.
To ship custom fonts is to assert that branding is more important than user choice. That might very well be a reasonable thing to do; branding isn't evil! It isn't _usually_ the case for textual websites, though. Beyond basic layout and optionally supporting dark mode, authors generally shouldn't dictate the presentation of their websites; that should be the job of the user agent. Most websites are not important enough to look completely different from the rest of the user's system.
A personal example: I set my preferred fonts in my computer's fontconfig settings. Now every website that uses sans-serif will have my preferred font. Sites with sans-serif blend into the users' systems instead of sticking out.
### But most users don't change their fonts...
The "users don't know better and need us to make decisions for them" mindset isn't without merits; however, in my opinion, it's overused. Using system fonts doesn't make your website harder to use, but it does make it smaller and stick out less to the subset of users who care enough about fonts to change them. This argument isn't about making software easier for non-technical users; it's about branding by asserting a personal preference.
### Can't users globally override stylesheets instead?
It's not a good idea to require users to automatically override website stylesheets. Doing so would break websites that use fonts such as Font Awesome to display vector icons. We shouldn't have these users constantly battle with websites the same way that many adblocking/script-blocking users (myself included) already do when there's a better option.
That being said, many users _do_ actually override stylesheets. We shouldn't _require_ them to do so, but we should keep our pages from breaking in case they do. Pages following this article's advice will probably work perfectly well in these cases without any extra effort.
### But wouldn't that allow a website to fingerprint with fonts?
I don't know much about fingerprinting, except that you can't do font enumeration without JavaScript. Since text-based websites that follow these best practices don't send requests after the page loads and have no scripts, fingerprinting via font enumeration is a non-issue on those sites.
Other websites can still fingerprint via font enumeration using JavaScript. They don't need to stop at seeing what sans-serif maps to; they can see all the available fonts on a user's system, the user's canvas fingerprint, window dimensions, etc. Some of these can be mitigated with Firefox's `privacy.resistFingerprinting` setting, but that setting also understandably overrides user font preferences.
Ultimately, surveillance self-defense on the web is an arms race full of trade-offs. If you want both privacy and customizability, the web is not the place to look; try Gemini or Gopher instead.
About lazy loading
------------------
For users on slow connections, lazy loading is often frustrating. I think I can speak for some of these users: mobile data near my home has a number of "dead zones" with abysmal download speeds, and my home's Wi-Fi repeater setup occasionally results in packet loss rates above 60% (!!).
Users on poor connections have better things to do than idly wait for pages to load. They might open multiple links in background tabs to wait for them all to load at once, or switch to another window/app and come back when loading finishes. They might also open links while on a good connection before switching to a poor connection. For example, I often open 10-20 links on Wi-Fi before going out for a walk in a mobile-data dead-zone. A Reddit user reading an earlier version of this article described a [similar experience](https://i.reddit.com/r/web_design/comments/k0dmpj/an_opinionated_list_of_best_practices_for_textual/gdmxy4u/) riding the train.
Unfortunately, pages with lazy loading don't finish loading off-screen images in the background. To load this content ahead of time, users need to switch to the loading page and slowly scroll to the bottom to ensure that all the important content appears on-screen and starts loading. Website owners shouldn't expect users to have to jump through these ridiculous hoops.
### Wouldn't this be solved by combining lazy loading with pre-loading/pre-fetching?
A large number of users with poor connections also have capped data, and would prefer that pages don't decide to predictively load many pages ahead-of-time for them. Some go so far as to disable this behavior to avoid data overages. Savvy privacy-conscious users also generally disable pre-loading since linked content may employ dark patterns like tracking without consent.
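For reference, pre-fetching is typically triggered by a resource hint like the following (URL hypothetical); browsers that honor it may download the linked page before the user ever asks for it:

```html
<link rel="prefetch" href="/next-article.html">
```

On metered connections, each such hint is bandwidth spent on a guess about what the user will do next.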
Users who click a link _choose_ to load a full page. Loading pages that a user hasn't clicked on is making a choice for that user.
### Can't users on poor connections disable images?
I have two responses:
1. If an image isn't essential, you shouldn't include it inline.
2. Yes, users could disable images. That's _their_ choice. If your page uses lazy loading, you've effectively (and probably unintentionally) made that choice for a large number of users.
About custom colors
-------------------
Some users' browsers set default page colors that aren't black-on-white. For instance, Linux users who enable GTK style overrides might default to having white text on a dark background. Websites that explicitly set foreground colors but leave the default background color (or vice-versa) end up being difficult to read. Here's an example:
<a href="https://seirdy.one/misc/website_colors_large.png">
<picture>
</picture>
</a>
If you do explicitly set colors, please also include a dark theme using a media query: `@media (prefers-color-scheme: dark)`. For more info, read the relevant docs [on MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme).
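A minimal sketch of such a pairing, with illustrative color values; note that foreground and background are always set together, so neither scheme mixes an explicit color with an unknown default:

```css
:root {
    color: #222;
    background: #fff;
}

@media (prefers-color-scheme: dark) {
    :root {
        color: #ddd;
        background: #111;
    }
}
```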
Image optimization
------------------
Some image optimization tools I use:
- [jpegoptim](https://github.com/tjko/jpegoptim) (lossless or lossy)
- [cwebp](https://developers.google.com/speed/webp/docs/cwebp) (lossless or lossy)
I put together a [quick script](https://git.sr.ht/~seirdy/dotfiles/tree/3b722a843f3945a1bdf98672e09786f0213ec6f6/Executables/shell-scripts/bin/optimize-image) to losslessly optimize images using these programs in my dotfile repo.
You also might want to use the HTML `<picture>` element, using JPEG/PNG as a fallback for more efficient formats such as WebP or AVIF. More info in the [MDN docs](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/picture).
Most of my images will probably be screenshots that start as PNGs. My typical flow:
1. Lossy compression with `pngquant`
2. Losslessly optimize the result with `oxipng` and its Zopfli backend (slow)
3. Also create a lossless WebP from the lossy PNG, using `cwebp`
4. Include the resulting WebP in the page, with a fallback to the PNG using a `<picture>` element.
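Step 4 might produce markup like the following (filenames and dimensions hypothetical); browsers without WebP support simply fall back to the `<img>` element:

```html
<picture>
    <source srcset="screenshot.webp" type="image/webp">
    <img src="screenshot.png" alt="Terminal window showing the script's output"
         width="640" height="360">
</picture>
```

Explicit `width` and `height` attributes also let browsers reserve space before the image loads, avoiding layout shifts.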
It might seem odd to create a lossless WebP from a lossy PNG, but I've found that it's the best way to get the smallest possible image at the minimum acceptable quality for screenshots with solid backgrounds.
In general, avoid using inline images just for decoration. Only use an image if it significantly adds to your content, and provide alt-text as a fallback.
If you want to include a profile photo (e.g., if your website is part of the IndieWeb), I recommend re-using one of your favicons. Since most browsers will fetch your favicons anyway, re-using them should be relatively harmless.
Layout
------
This is possibly the most subjective item I'm including, and the item with the most exceptions. Consider it more of a weak suggestion than hard advice. Use your own judgement.
A simple layout looks good at a variety of window sizes, rendering responsive layout changes unnecessary. Textual websites really don't need more than a single column; readers should be able to scan a page top-to-bottom, left-to-right (or right-to-left, depending on the locale) exactly once to read all its content. Verify this using the horizontal-line test: mentally draw a horizontal line across your page, and make sure it doesn't intersect more than one (1) item. Keeping a single-column layout that doesn't require responsive layout changes ensures smooth window re-sizing.
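A single-column layout that passes the horizontal-line test can be this simple (the exact measurements are a matter of taste):

```css
body {
    max-width: 45em;  /* keep line length readable on wide windows */
    margin: 0 auto;   /* center the single column */
    padding: 0 1em;   /* breathing room on narrow windows */
}
```

Because nothing sits beside the column, the page reflows smoothly at any width with no breakpoints at all.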
Exceptions exist: one or two very simple responsive changes won't hurt. For example, the only responsive layout change on [my website](https://seirdy.one/) is a single CSS declaration to switch between inline and multi-line navigation links at the top of the page:
```
@media (min-width: 32rem) {
	/* Illustrative: the original rule switches nav links
	   from one-per-line to a single inline row. */
	nav li {
		display: inline;
	}
}
```
### What about sidebars?
Sidebars are probably unnecessary, and can be quite annoying to readers who re-size windows frequently. This is especially true for tiling window manager users like me: we frequently shrink windows to a fraction of their original size. When this happens on a website with a sidebar, one of two things happens:
1. The site's responsive design kicks in: the sidebar vanishes and its elements move elsewhere. This can be quite CPU-heavy, as the browser has to both re-wrap the text _and_ handle a complex layout change. Frequent window re-sizers will experience lag and battery loss, and might need a moment to figure out where everything went.
2. The site doesn't use responsive design. The navbar and main content are now squeezed together. Readers will probably close the page.
Neither situation looks great.
### Sidebar alternatives
Common items in sidebars include article tags, an author bio, and an index of entries; these aren't useful while reading an article. Consider putting them in the article footer or--even better--dedicated pages. This does mean that readers will have to navigate to a different page to see that content, but they probably prefer things that way; almost nobody who clicked on "An opinionated list of best practices for textual websites" did so because they wanted to read my bio.
Don't boost engagement by providing readers with information they didn't ask for; earn engagement with good content, and let readers navigate to your other pages _after_ they've decided they want to read more.
Testing
-------
If your site is simple enough, it should automatically handle the vast majority of edge-cases. Different devices and browsers all have their quirks, but they generally have one thing in common: they understand semantic, backward-compatible HTML.
In addition to standard testing, I recommend testing with unorthodox setups that are unlikely to be found in the wild. If a website doesn't look good in one of these tests, there's a good chance that it uses an advanced Web feature that can serve as a point of failure in other cases. Simple sites should be able to look good in a variety of situations out of the box.
Your page should easily pass the harshest of tests without any extra effort if its HTML meets basic standards for well-written code (overlooking bad formatting and a lack of comments). Even if you use a complex static site generator, the final HTML should be simple, readable, and semantic.
### Sample unorthodox tests
These tests start out pretty reasonable, but gradually get more insane as you go down. Once again, use your judgement.
1. Load just the HTML. No CSS, no images, etc. Try loading without inline CSS as well for good measure.
2. Print out the site in black-and-white, preferably with a simple laser printer.
3. Test with a screen reader.
4. Test keyboard navigability with the tab key. Even without specifying tab indices, tab selection should follow a logical order if you keep the layout simple.
5. Test in textual browsers: lynx, links, w3m, edbrowse, EWW, etc.
6. Read the (prettified/indented) HTML source itself and parse it with your brain. See if anything seems illogical or unnecessary. Imagine giving someone a printout of your page's `<body>` along with a whiteboard. If they have a basic knowledge of HTML tags, would they be able to draw something resembling your website?
7. Test on something ridiculous: try your old e-reader's embedded browser, combine an HTML-to-EPUB converter and an EPUB-to-PDF converter, or stack multiple article-extraction utilities on top of each other. Be creative and enjoy breaking your site. When something breaks, examine the breakage and see if you can fix it by simplifying your page.
8. Build a time machine. Travel decades--or perhaps centuries--into the future. Keep going forward until the WWW is breathing its last breath. Test your site on future browsers. Figuring out how to transfer your files onto their computers might take some time, but you have a time machine so that shouldn't be too hard. When you finish, go back in time to [meet Benjamin Franklin](https://xkcd.com/567/).
I'm still on step 7, trying to find new ways to break this page. If you come up with a new test, please [share it](mailto:~seirdy/seirdy.one-comments@lists.sr.ht).
Other places to check out
-------------------------
The [250kb club](https://250kb.club/) gathers websites at or under 250kb, and also rewards websites that have a high ratio of content size to total size.
The [10KB Club](https://10kbclub.com/) does the same with a 10kb homepage budget (excluding favicons and webmanifest icons). It also has guidelines for noteworthiness, to avoid low-hanging fruit like mostly-blank pages.
Also see [Motherfucking Website](https://motherfuckingwebsite.com/). Motherfucking Website inspired several unofficial sequels that tried to gently improve upon it. My favorite is [Best Motherfucking Website](https://bestmotherfucking.website/).
The [WebBS calculator](https://www.webbloatscore.com/) compares a page's size with the size of a PNG screenshot of the full page content, encouraging site owners to minimize the ratio of the two.