Use a new branch of webmentiond that lets me pull in all webmentions for
all pages in a single JSON response
Before, Hugo made one request to webmentiond per page to ask for that
page's approved webmentions. Sometimes it made two requests, since some
pages used to have a different canonical location. In all, it ended up
making over 150 requests within a second or two. Webmentiond can handle
that for now, but it isn't sustainable: the page count will only grow
over time. I wanted Hugo to instead fetch all webmentions for every
page in one cached request.
I recompiled webmentiond from
https://github.com/zerok/webmentiond/pull/65, which updates the API to
support admin access keys. The admin API allows pulling in all
webmentions for all pages, instead of pulling them in for one page at a
time.
Doing so requires a bearer token, so I had to manage a new CI secret:
the password for requesting a token. A shell script (get-token.sh)
fetches the token and writes it to a temporary file, and Hugo reads the
token from that file. The script gets the password from either the CI
secret (in CI) or my password manager (on my workstation).
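Below is a minimal sketch of what the Hugo side could look like. It
assumes the admin API returns every approved mention as one JSON
document; the endpoint path, the token file location, and the JSON
field names (items, source, target) are illustrative placeholders
rather than details taken from the PR.

```go-html-template
{{/* layouts/partials/webmentions.html (sketch) */}}
{{/* Read the bearer token that get-token.sh wrote to a temporary file. */}}
{{ $token := trim (readFile "webmentiond-token.tmp") "\n" }}

{{/* Fetch every approved webmention once; Hugo caches the response
     for the rest of the build. */}}
{{ $opts := dict "headers" (dict "Authorization" (printf "Bearer %s" $token)) }}
{{ $mentions := slice }}
{{ with resources.GetRemote "https://mentions.example.com/manage/mentions" $opts }}
  {{ $mentions = (.Content | transform.Unmarshal).items }}
{{ end }}

{{/* Keep only the mentions that target the current page. */}}
{{ range where $mentions "target" .Permalink }}
  <li><a href="{{ .source }}">{{ .source }}</a></li>
{{ end }}
```

The same `where` filter could also match a page's old canonical URL,
which would remove the need for the second per-page request mentioned
above.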
TODO: support marginalia (mentions with fragments in their targets)
Add a new "Interactions" section to my pages that contains both
Syndication and Webmentions. Mark the Syndication links up as
u-syndication, and make both children of the <article> h-entry.
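Roughly, the resulting markup could look like the sketch below; the
headings, URLs, and IDs are placeholders, but the microformats2 classes
(h-entry, u-syndication) are the ones named above.

```html
<article class="h-entry">
  <h1 class="p-name">Post title</h1>
  <div class="e-content"><!-- post body --></div>

  <section id="interactions">
    <h2>Interactions</h2>
    <h3>Syndication</h3>
    <ul>
      <li><a class="u-syndication" href="https://example.social/@me/123">Fediverse copy</a></li>
    </ul>
    <h3>Webmentions</h3>
    <ul>
      <li><a href="https://example.com/a-reply">Reply from example.com</a></li>
    </ul>
  </section>
</article>
```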
I can now update a webmention URL if it breaks or changes and the admin
didn't bother putting a redirect in place.
I shouldn't have needed to do this. Cool URLs don't change.
- Display reply content in webmentions, when it's available
- Truncate titles and redundant content from webmentions
- Add a note on a11y issues caused by badly-formatted webmentions from
brid.gy's Mastodon integration.
- Switch an absolute link to a relative one
- Account for a site move
- Manually correct a couple dead links to point to the Wayback Machine
- Automatically switch some webmention links to the Wayback Machine
(see the sketch below)
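The automatic Wayback Machine switch could be a small template filter.
Here's a sketch that assumes dead sources are recognized by hostname;
the host names and the "source" field are placeholders, not my actual
list.

```go-html-template
{{/* Sketch: send sources on known-dead hosts through the Wayback Machine. */}}
{{ $src := .source }}
{{ range slice "dead.example.com" "gone.example.net" }}
  {{ if in $src . }}
    {{ $src = printf "https://web.archive.org/web/%s" $src }}
  {{ end }}
{{ end }}
<a href="{{ $src }}">{{ $src }}</a>
```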
The site now has polyglot markup and can handle both XHTML5 and HTML5
parsing rules. My staging site will be served as XHTML, but my main
site will stay HTML5, just in case of parse errors.
If other tools (e.g. Lighthouse) end up supporting XHTML5, I'll
consider switching the content-type to XHTML.
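For context, polyglot markup means the same bytes produce an equivalent
document under both the HTML parser and an XML parser. A trivial,
generic illustration of the kind of rules involved (XHTML namespace,
self-closed void elements, quoted attributes); this isn't taken from my
actual templates:

```html
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Polyglot page</title>
  </head>
  <body>
    <!-- Void elements get a trailing slash so the XML parser accepts
         them; the HTML parser just ignores it. -->
    <p>Example image: <img src="/logo.png" alt="" /></p>
  </body>
</html>
```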
- Stop using draft WAI-ARIA 1.3 features that aren't supported yet
- Make in-page links focusable across shortcodes/partials
- Replace existing in-page heading anchor links with a more accessible
option.
- Give backlinks an aria-labelledby attribute instead of an aria-label,
so their labels can be translated (see the sketch below).
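A sketch of that last change, assuming the backlinks in question are
footnote return links; the IDs, the wording, and the shared hidden
label are placeholders. Because the accessible name comes from real
text in the page rather than an attribute value, translation tools can
translate it.

```html
<!-- One hidden label shared by every backlink; elements referenced by
     aria-labelledby still contribute to the accessible name when hidden. -->
<span id="backlink-label" hidden>Back to the reference in the text</span>

<li id="fn:1">
  <p>Footnote content.
    <a href="#fnref:1" aria-labelledby="backlink-label">↩</a>
  </p>
</li>
```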