Allows better filtering and doesn't suppress exit codes. Since I'm no
longer suppressing exit codes, I had to handle them properly in
copy-file-to-xhtml.sh with if-statements.
This also allowed me to skip the generation of an XHTML redirect page.
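The exit-code handling follows roughly this pattern (a sketch with
placeholder commands and file names, not the actual script):

    # Check each step's exit status explicitly instead of letting a
    # failure disappear silently.
    if ! cp -- "$html_file" "$xhtml_file"; then
        echo "copy-file-to-xhtml.sh: couldn't copy $html_file" >&2
        exit 1
    fi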
Requires silencing an HTML-only error that isn't actually an error, but
a hint that doesn't apply to polyglot markup (it warns against trailing
slashes, which aren't a problem if you always quote attribute values).
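Silencing that hint without suppressing exit codes looks something like
this (run_validator and the matched message text are placeholders):

    # Capture the validator's report and exit status, filter out the
    # polyglot-irrelevant hint, then exit with the original status.
    status=0
    report="$(run_validator "$file" 2>&1)" || status=$?
    printf '%s\n' "$report" | grep -v 'Trailing slash on void elements' >&2 || true
    exit "$status"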
- Avoid relative paths, so the scripts can run from any working
directory.
- Make a dedicated curl-wrapping shell script instead of re-defining the
same alias everywhere.
- Support extended offline periods: allow get-webmentions.sh to fall
back to the cached copy of my webmentions for up to a day, and don't
accidentally destroy it; keep changes atomic (see the first sketch
after this list).
- Verify that the fetched webmentions are legit before replacing the
cached ones.
- Make shellcheck happy about quoting in vnu.sh by passing the list of
files with xargs instead of a shell variable (see the second sketch
after this list).
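A sketch of the offline-fallback behavior (the curl wrapper, the URL
variable, and the file names are stand-ins, not the real
get-webmentions.sh):

    # Fetch into a sibling temp file, sanity-check the JSON, and only
    # then replace the cache in one atomic rename. On failure, fall back
    # to a cached copy no older than a day.
    cache=webmentions.json
    tmp="$(mktemp "$cache.XXXXXX")"
    if "$curl_wrapper" "$webmentions_url" > "$tmp" \
        && jq -e . "$tmp" > /dev/null; then
        mv -f "$tmp" "$cache"
    elif [ -n "$(find "$cache" -mtime -1 2>/dev/null)" ]; then
        echo 'get-webmentions.sh: falling back to cached webmentions' >&2
        rm -f "$tmp"
    else
        echo 'get-webmentions.sh: fetch failed and the cache is stale' >&2
        rm -f "$tmp"
        exit 1
    fi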
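And the shellcheck fix, roughly (the validator invocation inside vnu.sh
is an assumption):

    # Hand the file list to the validator via a null-delimited stream so
    # shellcheck never sees an unquoted, word-split list variable.
    find "$output_dir" -name '*.html' -print0 \
        | xargs -0 java -jar vnu.jar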
html-tidy takes care of some post-processing, rendering other
substitutions obsolete. Remove the obsolete regex substitutions.
With those removed, the remaining substitutions can be done with
vanilla POSIX or POSIX-Extended regular expressions. Replace sd with
sed, and group the substitutions into a single invocation instead of
piping multiple invocations together. This change makes post-processing
almost as fast as the initial build step.
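For illustration, with placeholder patterns rather than the real
substitutions, the grouped invocation looks like:

    # One sed process with several POSIX-ERE expressions, instead of a
    # chain of sd invocations piped into one another.
    sed -E \
        -e 's/[[:space:]]+$//' \
        -e 's|<p>[[:space:]]*</p>||g' \
        "$tmp_file" > "$out_file"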
Use a new branch of webmentiond that lets me pull in all webmentions
for all pages in a single JSON response.
Before, Hugo would make one request to webmentiond per page to ask for
approved webmentions for that page. Sometimes it made two requests,
because some pages used to have a different canonical location. In all,
it ended up making over 150 requests within a second or two. Webmentiond
can handle this for now, but this isn't sustainable: page count will
only increase with time. I wanted Hugo to instead fetch all webmentions
for all pages in a single cached request.
I recompiled webmentiond from
https://github.com/zerok/webmentiond/pull/65, which updates the API to
support admin access keys. The admin API allows pulling in all
webmentions for all pages, instead of pulling them in for one page at a
time.
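Conceptually, the fetch collapses into one authenticated request; the
endpoint and query string below are illustrative guesses, not the
actual API added by that pull request:

    # One request, authorized with the admin bearer token, for every
    # approved webmention on every page.
    curl --silent --show-error --fail \
        --header "Authorization: Bearer $token" \
        "$WEBMENTIOND_URL/manage/mentions?status=approved" \
        > "$tmp_file"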
Doing so requires getting a bearer token, so I had to manage a new CI
secret: the password for getting a token. I get the token in a shell
script (get-token.sh) and write it to a temporary file, then have Hugo
read the token from that file. The shell script gets the password from
either the CI secret (in CI) or my password manager (on my
workstation).
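A sketch of the get-token.sh flow (the CI check, the password-manager
entry, the token endpoint, and the bare-token response are all
assumptions):

    # Pick up the password from the environment in CI, or from pass(1)
    # on my workstation, then trade it for a bearer token that Hugo can
    # read from a temporary file.
    if [ "${CI:-false}" = "true" ]; then
        password="$WEBMENTIOND_PASSWORD"
    else
        password="$(pass show webmentiond/admin)"
    fi
    curl --silent --show-error --fail \
        --data-urlencode "password=$password" \
        "$WEBMENTIOND_URL/authenticate/access-key" \
        > "$TOKEN_FILE"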
TODO: support marginalia (mentions with fragments in their targets)
Left because I quit trying to make a good first-party iframe alternative
that conformed to my site design standards while also imparting the
message of GEORGE as intended.
Whether I join or leave, GEORGE lives on. Whether GEORGE of the JUNGLE
or CURIOUS GEORGE, GEORGE is coming and GEORGE will be known to all as
the one true GEORGE.
- Make xhtml and html alternates the same (we're polyglot), cutting
static-compression time in half (sketched after this list)
- Make axe-ff run on local files, reducing some overhead.
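The compression half of this, roughly (the compressor and directory
layout are assumptions about the approach, not the exact build step):

    # Because the markup is polyglot, each .xhtml alternate is
    # byte-identical to its .html sibling, so compress the .html once
    # and hard-link the precompressed alternate instead of compressing
    # it a second time. (Assumes no newlines in file names.)
    find "$output_dir" -name '*.html' | while read -r html; do
        xhtml="${html%.html}.xhtml"
        brotli --force --keep "$html"        # writes $html.br
        [ -e "$xhtml" ] && ln -f "$html.br" "$xhtml.br"
    done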