Use partialCached so repeated partials don't incur a perf penalty on every page.
TODO: limit the scope I pass to them.
I noticed a teeny tiny perf improvement after doing this, probably
because now some giant data structures only need to be generated once.
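For reference, a minimal sketch of the template change (the partial names here are hypothetical):

    {{/* Before: the partial, and the data it builds, is re-rendered on every page. */}}
    {{ partial "webring.html" . }}

    {{/* After: rendered once and reused; a variant key keeps one cached copy per value. */}}
    {{ partialCached "webring.html" . }}
    {{ partialCached "post-list.html" . .Section }}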
The RSS feeds use escaped HTML instead of XHTML, which improves
compatibility with certain feed readers (e.g. Microsoft Outlook).
Mention in my web best practices article that Outlook uses its own weird
engine for feed contents.
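A sketch of the relevant bit of the feed template, assuming a layout close to
Hugo's built-in rss.xml: piping the rendered content through the html template
function entity-encodes it instead of embedding namespaced XHTML.

    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      {{/* Escaped HTML: readers like Outlook handle this better than inline XHTML. */}}
      <description>{{ .Content | html }}</description>
    </item>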
- Drop the copyright symbol: I put it there because certain programs
  explicitly look for it, but between rel=license, schema.org microdata,
  and Creative Commons RDFa, I think scrapers should be covered.
- Update the theme-color and friends to work with my site's updated dark
  theme (sketch below).
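A sketch of what this could look like, assuming "and friends" includes the
color-scheme meta tag; the hex values are placeholders, not my actual palette:

    <meta name="theme-color" content="#ffffff" media="(prefers-color-scheme: light)"/>
    <meta name="theme-color" content="#0b0b0b" media="(prefers-color-scheme: dark)"/>
    <meta name="color-scheme" content="dark light"/>

The media attribute lets browsers pick the matching bar color for whichever
theme the visitor prefers.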
The site now has polyglot markup and can handle both XHTML5 and HTML5
parsing rules. My staging site will be XHTML, but my main site will be
HTML5, just in case of parse errors.
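A sketch of the conventions polyglot markup needs (the page content here is
illustrative): the XHTML namespace on the root element, both lang attributes,
explicitly closed void elements, and quoted attribute values, so the same
bytes parse the same way under XML and HTML rules.

    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
      <head>
        <meta charset="utf-8"/>
        <title>Polyglot example</title>
      </head>
      <body>
        <p>Void elements are explicitly closed: <br/> <img src="/logo.png" alt="logo"/></p>
      </body>
    </html>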
If other tools (e.g. Lighthouse) end up supporting XHTML5, I'll consider
switching the content-type to XHTML.
Add an RSS feed for notes. Next up, replacing the RSS navlink with a
page containing links to both my posts and notes RSS feeds. When I get
Atom and WebSub, it'll have links to those too.
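A sketch of the feed discovery links involved, assuming Hugo's default
per-section feed paths (/posts/index.xml and /notes/index.xml are assumptions
based on that default):

    <link rel="alternate" type="application/rss+xml" href="/posts/index.xml" title="Posts"/>
    <link rel="alternate" type="application/rss+xml" href="/notes/index.xml" title="Notes"/>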
Also fixed some typos and switched "Posted" to "Noted" in the context of
notes.
- Make webring links touch-friendly and accessible by using spaced-out
details elements.
- Make details elements touch-friendly by indicating the interactive
  region and padding the summary.
- Sort featured posts by featured order.
- Ensure that at least one 48x48 px non-interactive tappable region
  exists on the screen at all times.
- Use CompleteDataFeed microdata on /posts.html instead of DataFeed
  (first sketch after this list).
- Make the home link <strong> when it's the current page, just like the
  other navlinks (second sketch after this list).
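A sketch of the /posts.html microdata change; CompleteDataFeed is the
schema.org subtype of DataFeed for feeds that list every item, and the
surrounding markup here is hypothetical:

    <ol itemscope itemtype="https://schema.org/CompleteDataFeed">
      <li itemprop="dataFeedElement" itemscope itemtype="https://schema.org/BlogPosting">
        <a itemprop="url" href="/posts/example/"><span itemprop="headline">Example post</span></a>
      </li>
    </ol>

And a sketch of the current-page home link, assuming a nav partial along these
lines (.IsHome is Hugo's built-in check; the markup around it is hypothetical):

    {{ if .IsHome }}
      <strong><a href="/">Home</a></strong>
    {{ else }}
      <a href="/">Home</a>
    {{ end }}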
Order is significant for the ToC and the post list, so make them ordered
lists. I opted to make post lists reversed, so I don't end up with every
post changing its number every time I post.
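A sketch of the reversed post list (entries are illustrative): with the
reversed attribute, numbering counts down from the total, so a post keeps its
number as new entries are prepended.

    <ol reversed>
      <li><a href="/posts/third/">Third post</a></li>   <!-- 3 -->
      <li><a href="/posts/second/">Second post</a></li> <!-- 2 -->
      <li><a href="/posts/first/">First post</a></li>   <!-- 1 -->
    </ol>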
Statically grab and include webmentions during Hugo builds, no JS
involved. Hugo supports making web requests and parsing the resulting
JSON, so there was no need to use an external program either.
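A sketch of the build-time fetch, assuming a webmention.io-style endpoint and
its jf2 response shape; the endpoint, the .children field, and the markup
around it are assumptions rather than details from this log:

    {{/* Fetched once at build time; no client-side JS involved. */}}
    {{ $url := printf "https://webmention.io/api/mentions.jf2?target=%s" .Permalink }}
    {{ $mentions := getJSON $url }}
    {{ with $mentions.children }}
      <section>
        <h2>Webmentions</h2>
        <ul>
          {{ range . }}
            <li><a href="{{ .url }}">{{ .url }}</a></li>
          {{ end }}
        </ul>
      </section>
    {{ end }}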