Mirror of https://git.sr.ht/~seirdy/seirdy.one
Synced 2024-11-27 14:12:09 +00:00

Compare commits: 9 commits, 05fcac13aa...85d84910f8
Commits:

- 85d84910f8
- 8d244920e5
- c5d19f4560
- 470345cdd1
- 968a93c7c7
- 58ce01b5d0
- c9a6b4de37
- 111c49d1aa
- 2b88317a46
10 changed files with 115 additions and 54 deletions
@@ -175,4 +175,6 @@ This site is featured in some cool directories.

- [Webrings Fanlisting](https://fanlistings.nickifaulk.com/webrings/)
- [Gossip's Web](https://gossipsweb.net/personal-websites)
- [Nixers](https://github.com/nixers-projects/sites/wiki/List-of-nixers.net-user-sites)
+ - [Nerd Listings](https://nerdlistings.info/category/personalsites/)
+ - [Ye Olde Blogroll](https://blogroll.org/)
+ - [LinkLane](https://www.linklane.net/)
content/notes/user-choice-progressive-enhancement.md — new file, 23 lines
@@ -0,0 +1,23 @@
---
title: "User choice and progressive enhancement"
date: 2022-06-27T14:31:21-07:00
replyURI: "https://lobste.rs/s/mvw7zd/details_as_menu#c_lxwjcc"
replyTitle: "These browsers are mostly used by tech-savvy people"
replyType: "SocialMediaPosting"
replyAuthor: "Matt Campbell"
replyAuthorURI: "https://mwcampbell.us/blog/"
---

Many users who need a significant degree of privacy will also be excluded, as JavaScript is a major fingerprinting vector. Users of the Tor Browser are encouraged to stick to the "Safest" security level. That security level disables dangerous features such as:

- Just-in-time compilation
- JavaScript
- SVG
- MathML
- Graphite font rendering
- automatic media playback
Even if it were purely a choice in user hands, I'd still feel inclined to respect it. Of course, accommodating needs should come before accommodation of wants; that doesn't mean we should ignore the latter.
Personally, I'd rather treat any features that disadvantage a marginalized group as a last-resort. I prefer selectively using `<details>` as it was intended---as a disclosure widget---and would rather come up with other creative alternatives to accordion patterns. Only when there's no other option would I try a progressively-enhanced JS-enabled option. I'm actually a little ambivalent about `<details>` since I try to support alternative browser engines (beyond Blink, Gecko, and WebKit). Out of [all the independent engines I've tried](https://seirdy.one/site-design/#compatibility-statement), the only one that supports `<details>` seems to be Servo.
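Used as an actual disclosure widget, `<details>` needs no scripting, and engines without support simply render the content expanded as plain text. A minimal sketch (names illustrative):

```html
<!-- Native disclosure widget: no JS or CSS required.
     Engines that don't support <details> still show the
     text content, so nothing is lost. -->
<details>
  <summary>More details</summary>
  <p>Extra content, hidden until the reader opts in.</p>
</details>
```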
JavaScript, CSS, and---where sensible---images are optional enhancements to pages. For "apps", progressive enhancement still applies: something informative (e.g. a skeleton with an error message explaining why JS is required) should be shown and overridden with JS.
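That fallback-then-override pattern might look like the following sketch; all element names, URLs, and messages here are illustrative, not from the original post:

```html
<!-- Static fallback ships in the HTML; the script replaces it
     only once scripting is known to work. -->
<div id="app">
  <p>
    This app requires JavaScript. A read-only summary is
    available on the <a href="/summary/">summary page</a>.
  </p>
</div>
<script>
  // Runs only when JS is enabled, overriding the skeleton.
  document.getElementById("app").textContent = "App loaded.";
</script>
```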
@@ -30,6 +30,9 @@ These are large engines that pass all my standard tests and more.

* GMX search
* (discontinued) Runnaroo
* SAPO (Portuguese interface, can work with English results)
+ * DSearch
+ * A host of other engines using Programmable Search Engine's client-side scripts.
+ => https://developers.google.com/custom-search/ Programmable Search Engine

2. Bing: the runner-up. Allows submitting pages and sitemaps for crawling without login using the IndexNow API. Its index powers many other engines:
@@ -137,12 +140,14 @@ These engines fail badly at a few important tests. Otherwise, they seem to work

Results from these search engines don’t seem at all useful.

+ * Yessle: seems new; allows page submission by pasting a page into the search box. Index is really small but it crawls new sites quickly. Claims to be private.
* Bloopish: extremely quick to update its index; site submissions show up in seconds. Unfortunately, its index only contains a few thousand documents (under 100 thousand at the time of writing). It's growing fast: if you search for a term, it'll start crawling related pages and grow its index.
* YaCy: community-made index; slow. Results are awful/irrelevant, but can be useful for intranet or custom search.
* Scopia: only seems to be available via the MetaGer metasearch engine after turning off Bing and news results. Tiny index, very low-quality.
* Artado Search: Primarily Turkish, but it also seems to support English results. Like Plumb, it uses client-side JS to fetch results from existing engines (Google, Bing, Yahoo, Petal, and others); like MetaGer, it has an option to use its own independent index. Results from its index are almost always empty. Very simple queries ("twitter", "wikipedia", "reddit") give some answers. Supports site submission and crowdsourced instant answers.
* Active Search Results: very poor quality

+ => https://www.yessle.com/ Yessle
=> https://search.aibull.io/ Bloopish
=> https://metager.org MetaGer
=> https://www.artadosearch.com/ Artado Search
@@ -189,15 +194,11 @@ These indexing search engines don’t have a Google-like “ask me anything” e

### Small/non-commercial Web

- * Wiby: I love this one. It focuses on smaller independent sites that capture the spirit of the “early” web. It’s more focused on “discovering” new interesting pages that match a set of keywords than finding a specific resource. I like to think of Wiby as an engine for surfing, not searching. Runnaroo occasionally features a hit from Wiby. If you have a small site or blog that isn’t very “commercial”, consider submitting it to the index.
* Marginalia Search: A recent addition similar to Wiby, and *my favorite entry on this page*. It has its own crawler but is strongly biased towards non-commercial, personal, and/or minimal sites. It's a great response to the increasingly SEO-spam-filled SERPs of GBY. Partially powers Teclis, which in turn partially powers Kagi. Update 2022-05-27: Marginalia.nu is now open source
- * Search My Site: Similar to Wiby, but only indexes user-submitted personal and independent sites. It optionally supports IndieAuth.
* Teclis: A project by the creator of Kagi search. Uses its own crawler that measures content blocked by uBlock Origin, and extracts content with the open-source article scrapers Trafilatura and Readability.js. This is quite an interesting approach: tracking blocked elements discourages tracking and advertising; using Trafilatura and Readability.js encourages the use of semantic HTML and Semantic Web standards such as microformats, microdata, and RDFa. It claims to also use some results from Marginalia.

- => https://wiby.me wiby.me
=> https://search.marginalia.nu/ search.marginalia.nu
=> https://memex.marginalia.nu/log/58-marginalia-open-source.gmi Announcement: marginalia.nu goes open source
- => https://searchmysite.net Search My site
=> http://teclis.com/ Teclis

### Site finders
@@ -278,9 +279,17 @@ I’m unable to evaluate these engines properly since I don’t speak the necess

=> https:solofield.net SOLOFIELD
=> https://kaz.kz/ kaz.kz

- ### Unknown
+ ## Almost qualified

- I'm unable to determine if these engines are independent; help would be appreciated!
+ These engines come close enough to passing my inclusion criteria that I felt I had to mention them. They all display original organic results that you can't find on other engines, and maintain their own indexes. Unfortunately, they don't quite pass.

+ * Wiby: I love this one. It focuses on smaller independent sites that capture the spirit of the “early” web. It’s more focused on “discovering” new interesting pages that match a set of keywords than finding a specific resource. I like to think of Wiby as an engine for surfing, not searching. Runnaroo occasionally featured a hit from Wiby (Runnaroo has since shut down). If you have a small site or blog that isn’t very “commercial”, consider submitting it to the index. Does not qualify because it seems to be powered only by user-submitted sites; it doesn't try to "crawl the Web".
+ * Mwmbl: like YaCy, it's an open-source engine whose crawling is community-driven. Users can install a Firefox addon to crawl pages in its backlog. Unfortunately, it doesn't qualify because it only crawls pages linked by hand-picked sites (e.g. Wikipedia, GitHub, domains that rank well on Hacker News). The crawl-depth is "1", so it doesn't crawl the whole Web (yet).
+ * Search My Site: Similar to Marginalia and Teclis, but only indexes user-submitted personal and independent sites. It optionally supports IndieAuth. Its API powers this site's search results; try it out using the search bar at the bottom of this page. Does not qualify because it's limited to user-submitted and/or hand-picked sites.

+ => https://wiby.me wiby.me
+ => https://mwmbl.org/ Mwmbl
+ => https://searchmysite.net Search My site

## Misc
@@ -375,11 +384,11 @@ Here's an oversimplified example to illustrate what I'm looking for: imagine som

I'm willing to make two exceptions:

1. Engines in the "semi-independent" section may mix results that do meet the aforementioned criteria with results that do not.
- 2. Engines in the "non-generalist" section may use indexes primarily made of user-submitted sites, rather than focusing primarily on sites discovered organically through crawling.
+ 2. Engines in the "almost qualified" section may use indexes primarily made of user-submitted or hand-picked sites, rather than focusing primarily on sites discovered organically through crawling.

The reason the second exception exists is that while user submissions don't represent automatic crawling, they do at least inform the engine of new interesting websites that it had not previously discovered; these websites can then be shown to other users. That's fundamentally what an alternative web index needs to achieve.

- I'm not willing to budge on my "no hand-picked websites" rule. Hand-picked sites will be ignored, whether your engine fetches content through their APIs or crawls and scrapes their content. It's fine to use hand-picked websites as starting points for your crawler (Wikipedia is a popular option).
+ I'm not usually willing to budge on my "no hand-picked websites" rule. Hand-picked sites will be ignored, whether your engine fetches content through their APIs or crawls and scrapes their content. It's fine to use hand-picked websites as starting points for your crawler (Wikipedia is a popular option).

I only consider search engines that focus on link results for webpages. Image search engines are out of scope, though I *might* consider some other engines for non-generalist search (e.g., Semantic Scholar finds PDFs rather than webpages).
@@ -60,6 +60,10 @@ These are large engines that pass all my standard tests and more.

- [SAPO](https://www.sapo.pt/) (Portuguese interface, can work with English results)
+ - [DSearch](https://www.dsearch.com/)
+ - A host of other engines using [Programmable Search Engine's](https://developers.google.com/custom-search/) client-side scripts.
- Bing: the runner-up. Allows submitting pages and sitemaps for crawling without login using [the IndexNow API](https://www.indexnow.org/). Its index powers many other engines:
  - Yahoo (and its sibling engine, One­Search)
@@ -82,7 +86,7 @@ These are large engines that pass all my standard tests and more.

- Givero
- Swisscows
- Fireball
- You.com[^6]
  - Partially powers MetaGer by default; this can be turned off
- At this point, I mostly stopped adding Bing-<wbr />based search engines. There are just too many.
@@ -142,6 +146,8 @@ These engines fail badly at a few important tests. Otherwise, they seem to work

Results from these search engines don't seem at all useful.

+ - [Yessle](https://www.yessle.com/): seems new; allows page submission by pasting a page into the search box. Index is really small but it crawls new sites quickly. Claims to be private.
- [Bloopish](https://search.aibull.io/): extremely quick to update its index; site submissions show up in seconds. Unfortunately, its index only contains a few thousand documents (under 100 thousand at the time of writing). It's growing fast: if you search for a term, it'll start crawling related pages and grow its index.
- YaCy: community-made index; slow. Results are awful/irrelevant, but can be useful for intranet or custom search.
@@ -179,11 +185,7 @@ These indexing search engines don’t have a Google-like “ask me anything” e

### Small or non-commercial Web

- - Wiby: [wiby.me](https://wiby.me) and [wiby.org](https://wiby.org): I love this one. It focuses on smaller independent sites that capture the spirit of the "early" web. It's more focused on "discovering" new interesting pages that match a set of keywords than finding a specific resource. I like to think of Wiby as an engine for surfing, not searching. Runnaroo occasionally features a hit from Wiby. If you have a small site or blog that isn't very "commercial", consider submitting it to the index.
- - [Marginalia Search](https://search.marginalia.nu/): A recent addition similar to Wiby, and _my favorite entry on this page_. It has its own crawler but is strongly biased towards non-commercial, personal, and/or minimal sites. It's a great response to the increasingly SEO-spam-filled SERPs of GBY. Partially powers Teclis, which in turn partially powers Kagi. <ins cite="https://memex.marginalia.nu/log/58-marginalia-open-source.gmi" datetime="2022-05-28T14:09:00-07:00">Update 2022-05-28: [Marginalia.nu is now open source.](https://memex.marginalia.nu/log/58-marginalia-open-source.gmi)</ins>
+ - [Marginalia Search](https://search.marginalia.nu/): _My favorite entry on this page_. It has its own crawler but is strongly biased towards non-commercial, personal, and/or minimal sites. It's a great response to the increasingly SEO-spam-filled SERPs of GBY. Partially powers Teclis, which in turn partially powers Kagi. <ins cite="https://memex.marginalia.nu/log/58-marginalia-open-source.gmi" datetime="2022-05-28T14:09:00-07:00">Update 2022-05-28: [Marginalia.nu is now open source.](https://memex.marginalia.nu/log/58-marginalia-open-source.gmi)</ins>
- - [Search My Site](https://searchmysite.net): Similar to Wiby, but only indexes user-submitted personal and independent sites. It optionally supports IndieAuth.
- [Teclis](http://teclis.com/): A project by the creator of Kagi search. Uses its own crawler that measures content blocked by uBlock Origin, and extracts content with the open-source article scrapers Trafilatura and Readability.js. This is quite an interesting approach: tracking blocked elements discourages tracking and advertising; using Trafilatura and Readability.js encourages the use of semantic HTML and Semantic Web standards such as [microformats](https://microformats.org/), [microdata](https://html.spec.whatwg.org/multipage/microdata.html), and [RDFa](https://www.w3.org/TR/rdfa-primer/). It claims to also use some results from Marginalia.
@@ -260,6 +262,17 @@ I'm unable to evaluate these engines properly since I don't speak the necessary

- [kaz.kz](http://kaz.kz): Kazakh and Russian, with a focus on "Kazakhstan's segment of the Internet"

+ Almost qualified
+ ----------------

+ These engines come close enough to passing my inclusion criteria that I felt I had to mention them. They all display original organic results that you can't find on other engines, and maintain their own indexes. Unfortunately, they don't quite pass.

+ - Wiby: [wiby.me](https://wiby.me) and [wiby.org](https://wiby.org): I love this one. It focuses on smaller independent sites that capture the spirit of the "early" web. It's more focused on "discovering" new interesting pages that match a set of keywords than finding a specific resource. I like to think of Wiby as an engine for surfing, not searching. Runnaroo occasionally featured a hit from Wiby. If you have a small site or blog that isn't very "commercial", consider submitting it to the index. Does not qualify because it seems to be powered only by user-submitted sites; it doesn't try to "crawl the Web".
+ - [Mwmbl](https://mwmbl.org/): like YaCy, it's an open-source engine whose crawling is community-driven. Users can install a Firefox addon to crawl pages in its backlog. Unfortunately, it doesn't qualify because it only crawls pages linked by hand-picked sites (e.g. Wikipedia, GitHub, domains that rank well on Hacker News). The crawl-depth is "1", so it doesn't crawl the whole Web (yet).
+ - [Search My Site](https://searchmysite.net): Similar to Marginalia and Teclis, but only indexes user-submitted personal and independent sites. It optionally supports IndieAuth. Its API powers this site's search results; try it out using the search bar at the bottom of this page. Does not qualify because it's limited to user-submitted and/or hand-picked sites.

Misc
----
@@ -336,11 +349,11 @@ Here's an oversimplified example to illustrate what I'm looking for: imagine som

I'm willing to make two exceptions:

1. Engines in the "semi-independent" section may mix results that do meet the aforementioned criteria with results that do not.
- 2. Engines in the "non-generalist" section may use indexes primarily made of user-submitted sites, rather than focusing primarily on sites discovered organically through crawling.
+ 2. Engines in the "almost qualified" section may use indexes primarily made of user-submitted or hand-picked sites, rather than focusing primarily on sites discovered organically through crawling.

The reason the second exception exists is that while user submissions don't represent automatic crawling, they do at least inform the engine of new interesting websites that it had not previously discovered; these websites can then be shown to other users. That's fundamentally what an alternative web index needs to achieve.

- I'm not willing to budge on my "no hand-picked websites" rule. Hand-picked sites will be ignored, whether your engine fetches content through their APIs or crawls and scrapes their content. It's fine to use hand-picked websites as starting points for your crawler (Wikipedia is a popular option).
+ I'm not usually willing to budge on my "no hand-picked websites" rule. Hand-picked sites will be ignored, whether your engine fetches content through their APIs or crawls and scrapes their content. It's fine to use hand-picked websites as starting points for your crawler (Wikipedia is a popular option).

I only consider search engines that focus on link results for webpages. Image search engines are out of scope, though I _might_ consider some other engines for non-generalist search (e.g., Semantic Scholar finds PDFs rather than webpages).
@@ -1019,6 +1019,8 @@ Be sure to test your hyphens with NVDA or Windows Narrator: these screen readers

=> https://github.com/nvaccess/nvda/issues/9343 NVDA issue 9343: NVDA isn't ignoring soft hyphens properly

The best place to insert a hyphen is between compound words. For example, splitting "Firefighter" into "Fire-fighter" is quite safe. Beyond that, try listening to hyphenated words in NVDA to ensure they remain clear.
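In HTML, such a compound-word break point is marked with the soft-hyphen entity; a minimal sketch:

```html
<!-- &shy; is invisible unless the browser must wrap the word
     here, in which case it renders as "Fire-" / "fighter". -->
<p>Fire&shy;fighter</p>
```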
### Keeping text together
Soft hyphens are great for splitting up text, but some text should stay together. The phrase "10 cm", for instance, would flow poorly if "10" and "cm" appeared on separate lines. Splitting text becomes especially painful on narrow viewports. A non-breaking space keeps the surrounding text from being re-flowed. Use the &nbsp; HTML entity:
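The original example snippet is truncated in this view; the following is a reconstruction for illustration only:

```html
<!-- &nbsp; keeps "10" and "cm" from being split across lines. -->
<p>The bar measured 10&nbsp;cm.</p>
```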
@@ -817,7 +817,7 @@ Screenshot of the <a href="https://github.com/nihui/waifu2x-ncnn-vulkan/issues">

Someone using the GitHub issues interface for the first time will struggle to identify interactive regions and discern whether they trigger navigations or different actions.

Image optimization {#image-optimization}
- ----------------------------
+ ------------------

Some image optimization tools I use:
@@ -1042,7 +1042,7 @@ Users employing machine translation will not benefit from your soft hyphens, so

Where long inline `<code>` elements can trigger horizontal scrolling, consider a scrollable `<pre>` element instead. Making a single element horizontally scrollable is far better than making the entire page scrollable in two dimensions. Hard-wrap code blocks so that they won't horizontally scroll in most widescreen desktop browsers.
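A sketch of that substitution, with illustrative styling (scoping `overflow-x` to `pre` is the standard way to confine scrolling to the block):

```html
<style>
  /* Scroll the code block itself, not the whole page. */
  pre {
    overflow-x: auto;
  }
</style>
<pre><code>a --rather --long --command --with --many --flags --that --would --otherwise --widen --the --page</code></pre>
```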

- Be sure to test your hyphens with NVDA or Windows Narrator: these screen readers' pronunciation of words can be disrupted by poorly-placed hyphens. Balancing the need to adapt to narrow screens against the need to sound correctly to a screen reader is a complex matter. At least, it will be until [NVDA bug 9343](https://github.com/nvaccess/nvda/issues/9343) gets resolved.
+ Be sure to test your hyphens with NVDA or Windows Narrator: these screen readers' pronunciation of words can be disrupted by poorly-placed hyphens. Balancing the need to adapt to narrow screens against the need to sound correctly to a screen reader is a complex matter.[^20] The best place to insert a hyphen is between compound words. For example, splitting "Firefighter" into "Fire-fighter" is quite safe. Beyond that, try listening to hyphenated words in NVDA to ensure they remain clear.

### Keeping text together
|
@ -1156,7 +1156,7 @@ Line spacing (leading) is at least space-and-a-half within paragraphs, and parag
|
||||||
{{</quotation>}}
|
{{</quotation>}}
|
||||||
|
|
||||||
Non-<wbr />browsers: reading mode {#non-browsers-reading-mode}
|
Non-<wbr />browsers: reading mode {#non-browsers-reading-mode}
|
||||||
--------------------------------------
|
---------------------------------
|
||||||
|
|
||||||
Fully standards-compliant browsers aren't the only programs people use. They also use "reading mode" tools and services.
|
Fully standards-compliant browsers aren't the only programs people use. They also use "reading mode" tools and services.
|
||||||
|
|
||||||
|
@@ -1177,7 +1177,7 @@ Again: avoid catering to non-standard implementations' quirks, especially undocu

Reading modes aren't the only non-browser user agents out there. Plain-text feed readers and link previewers are some other options. I singled out reading modes because of their widespread adoption and value. Decide which other kinds of agents are important to you (if any), and see if they expose a hole in your semantics.

Machine translation {#machine-translation}
- ------------------------
+ -------------------

Believe it or not, the entire world doesn't speak your website's languages. Browsers like Chromium, Microsoft Edge, and Safari have integrated machine translation to translate entire pages. Users can also leverage online website translators such as Google Translate or Bing. These "webpage translators" are far more complex than their plain-text predecessors.
@@ -1216,7 +1216,7 @@ Machine translators often skip `aria-label` and `aria-description`. For this rea
 Microsoft Edge is the only browser I know of to adjust text-direction during translation, but it breaks when faced with inline `<code>` and `<span>` elements.
 
 In­accessible default stylesheets {#inaccessible-default-stylesheets}
------------------------------------------------
+-------------------------------------
 
 Simple sites should err on the side of respecting default stylesheets. With rare exceptions, there are only two times I feel comfortable overriding default stylesheets:
 
@@ -1278,7 +1278,7 @@ On one hand, users who need enhanced focus visibility may override the default f
 
 The WCAG [Success Criterion 2.4.12](https://w3c.github.io/wcag/guidelines/22/#focus-appearance-enhanced) recommends making focus indicators 2 px thick. While this success criterion is only AAA-level, it's easy enough to meet and beneficial enough to others that we should all meet it.
 
-You can use `:focus` and `:focus-visible` to highlight selected and keyboard-focused elements, respectively. Take care to only alter styling, not behavior: only keyboard-focusable elements should receive outlines. Modern browser stylesheets use `:focus-visible` instead of `:focus`; old browsers only support `:focus` and re-style a subset of focusable elements. Your stylesheets should do the same, to match browser behavior.[^20]
+You can use `:focus` and `:focus-visible` to highlight selected and keyboard-focused elements, respectively. Take care to only alter styling, not behavior: only keyboard-focusable elements should receive outlines. Modern browser stylesheets use `:focus-visible` instead of `:focus`; old browsers only support `:focus` and re-style a subset of focusable elements. Your stylesheets should do the same, to match browser behavior.[^21]
 
 {{<codefigure>}}
 
@@ -1356,7 +1356,7 @@ Screen readers on touch screen devices are also quite different from their deskt
 
 Screen reader implementations often skip punctuation marks like the exclamation point ("!"). Ensure that meaning doesn't rely too heavily on such punctuation.
 
-Screen readers have varying levels of verbosity. The default verbosity level doesn't always convey inline emphasis, such as `<em>`, `<code>`, or `<strong>`. Ensure that your meaning carries through without these semantics.[^21]
+Screen readers have varying levels of verbosity. The default verbosity level doesn't always convey inline emphasis, such as `<em>`, `<code>`, or `<strong>`. Ensure that your meaning carries through without these semantics.[^22]
 
 Default verbosity does, however, convey symbols and emoji. Use symbols and emoji judiciously, since they can get pretty noisy if you aren't careful. Use `aria-labelledby` on symbols when appropriate; I used labels to mark my footnote backlinks, which would otherwise be read as <samp>right arrow curving left</samp>. If you have to use a symbol or emoji, first test how assistive technologies announce it; the emoji name may not communicate what you expect.
 
@@ -1385,7 +1385,7 @@ No matter how simple a page is, I don't think simplicity eliminates the need for
 
 Automated tests---especially accessibility tests---are a supplement to manual tests, not a replacement for them. Think of them as time-savers that bring up issues for further research, containing both false positives and false negatives.
 
-These are the tools I use regularly. I've deliberately excluded tools that would be redundant.[^22]
+These are the tools I use regularly. I've deliberately excluded tools that would be redundant.[^23]
 
 
 [Nu HTML checker](https://validator.nu/)
@@ -1398,7 +1398,7 @@ These are the tools I use regularly. I've deliberately excluded tools that would
 : An auditing tool by Google that uses the DevTools protocol in any Chromium-based browser. Skip the "Access­ibility" category, since it just runs a subset of axe-core's audits. The most useful audit is the tap target size check in its "SEO" category. Note that your `sandbox` CSP directive will need to include `allow-scripts` for it to function.
 
 [Webhint](https://webhint.io/)
-: Similar to Lighthouse. Again, you can ignore the accessibility audits if you already use axe-core. I personally disagree with some of its hints: the "unneeded HTTP headers" hint ignores the fact that the CSP can have an effect on non-hypertext assets, the "HTTP cache" hint has an unreasonable bias against caching HTML, and the "Correct `Content-Type` header" recommends charset attributes a bit too agg­ressively.[^23]
+: Similar to Lighthouse. Again, you can ignore the accessibility audits if you already use axe-core. I personally disagree with some of its hints: the "unneeded HTTP headers" hint ignores the fact that the CSP can have an effect on non-hypertext assets, the "HTTP cache" hint has an unreasonable bias against caching HTML, and the "Correct `Content-Type` header" recommends charset attributes a bit too agg­ressively.[^24]
 
 [IBM Equal Access Accessibility Checker](https://www.ibm.com/able/toolkit/verify/automated/)
 : Has a scope similar to axe-core. Its "Sensory Characteristics" audit seems unique.
@@ -1428,7 +1428,7 @@ These tests begin reasonably, but gradually grow absurd. Once again, use your ju
 
 1. Test in all three major browser engines: Blink, Gecko, and WebKit.
 
-2. Evaluate the heaviness and complexity of your scripts (if any) by testing with your browser's <abbr title="just-in-time">JIT</abbr> compilation disabled.[^24]
+2. Evaluate the heaviness and complexity of your scripts (if any) by testing with your browser's <abbr title="just-in-time">JIT</abbr> compilation disabled.[^25]
 
 3. Test using the Tor Browser's safest security level enabled (disables JS and other features).
 
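Step 2 of the checklist above (testing with JIT compilation disabled) can be dry-run from the command line. This is a minimal sketch under assumptions: the wrapper name and URL are hypothetical, and it only prints the command rather than launching anything; the `--js-flags='--jitless'` Chromium flag is the one this diff itself quotes.

```shell
#!/bin/sh
# Sketch: build (but don't execute) a command that launches a Chromium-based
# browser with V8's JIT disabled, for testing script heaviness.
# jitless_cmd and the example URL are hypothetical stand-ins.
jitless_cmd() {
	browser="$1"
	url="$2"
	# --js-flags=--jitless makes V8 interpret JavaScript instead of JIT-compiling it
	printf '%s --js-flags=--jitless %s\n' "$browser" "$url"
}

jitless_cmd chromium 'https://seirdy.one/'
```

Printing the command first makes it easy to inspect or paste into a shell; swap `chromium` for any Chromium derivative.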
@@ -1456,7 +1456,7 @@ These tests begin reasonably, but gradually grow absurd. Once again, use your ju
 
 15. Try printing out your page in black-and-white from an unorthodox graphical browser.
 
-16. Download your webpage and test how multiple word processors render and generate PDFs from it.[^25]
+16. Download your webpage and test how multiple word processors render and generate PDFs from it.[^26]
 
 17. Combine conversion tools. Combine an HTML-<wbr />to-<wbr />EPUB converter and an EPUB-<wbr />to-<wbr />PDF converter, or stack multiple article-extraction utilities. Be creative and enjoy breaking your site. When something breaks, examine the breakage and see if it's caused by an issue in your markup, or a CSS feature with an equivalent alternative.
 
@@ -1511,7 +1511,7 @@ This article is, and will probably always be, an ongoing work-in-progress. Some
 * Rules for descriptive link text, for screen reader navigation and for user-agents that display links as footnotes (e.g. some textual browsers with the `dump` flag).
 
 Conclusion {#conclusion}
----------------
+----------
 
 There are so many ways to read a page; authors typically cater only to the mainstream ones. Some ways to read a page I covered include:
 
@@ -1645,15 +1645,17 @@ A special thanks goes out to GothAlice for the questions she answered in <samp>#
 
 Mobile users wishing to temporarily switch modes have to stop, change their navigation mode, perform a navigation gesture, and switch back. Mobile users trying to skim an article don't always find this worth the effort and sometimes stick to heading-based navigation even when a different mode would be optimal.
 
-[^20]: If you'd like to learn more, {{<mention-work itemtype="BlogPosting">}}{{< cited-work name="A guide to designing accessible, WCAG-compliant focus indicators" url="https://www.sarasoueidan.com/blog/focus-indicators/" extraName="headline" >}} by {{< indieweb-person url="https://www.sarasoueidan.com/" first-name="Sara" last-name="Soueidan" url="https://www.sarasoueidan.com/" itemprop="author">}}{{</mention-work>}} has far more details on making accessible focus indicators.
+[^20]: At least, it will be until [NVDA bug 9343](https://github.com/nvaccess/nvda/issues/9343) gets resolved.
 
-[^21]: Screen readers aren't alone here. Several programs strip inline formatting: certain feed readers, search result snippets, and textual browsers invoked with the `-dump` flag are some examples I use every day.
+[^21]: If you'd like to learn more, {{<mention-work itemtype="BlogPosting">}}{{< cited-work name="A guide to designing accessible, WCAG-compliant focus indicators" url="https://www.sarasoueidan.com/blog/focus-indicators/" extraName="headline" >}} by {{< indieweb-person url="https://www.sarasoueidan.com/" first-name="Sara" last-name="Soueidan" url="https://www.sarasoueidan.com/" itemprop="author">}}{{</mention-work>}} has far more details on making accessible focus indicators.
 
-[^22]: I excluded PageSpeed Insights and GTMetrix since those are mostly covered by Lighthouse. I excluded Security Headers, since its approach seems to be recommending headers regardless of whether or not they are necessary. It penalizes forgoing the <code>Permissions-<wbr />Policy</code> header even if the CSP blocks script loading and execution; see [Security Headers issue #103](https://github.com/securityheaders/securityheaders-bugs/issues/103). I personally find the <code>Permissions-<wbr />Policy</code> header quite problematic, as I noted in August 2021 on [webappsec-permissions-policy issue #189](https://github.com/w3c/webappsec-permissions-policy/issues/189#issuecomment-904783021).
+[^22]: Screen readers aren't alone here. Several programs strip inline formatting: certain feed readers, search result snippets, and textual browsers invoked with the `-dump` flag are some examples I use every day.
 
-[^23]: My site caches HTML documents for ten minutes and caches the RSS feed for several hours. I disagree with webhint's recommendations against this: cache durations should be based on request rates and how often a resource is updated. I also disagree with some of its `content-type` recommendations: you don't need to declare UTF-8 charsets for SVG content-type headers if the SVG is ASCII-only and called from a UTF-8 HTML document. You gain nothing but header bloat by doing so.
+[^23]: I excluded PageSpeed Insights and GTMetrix since those are mostly covered by Lighthouse. I excluded Security Headers, since its approach seems to be recommending headers regardless of whether or not they are necessary. It penalizes forgoing the <code>Permissions-<wbr />Policy</code> header even if the CSP blocks script loading and execution; see [Security Headers issue #103](https://github.com/securityheaders/securityheaders-bugs/issues/103). I personally find the <code>Permissions-<wbr />Policy</code> header quite problematic, as I noted in August 2021 on [webappsec-permissions-policy issue #189](https://github.com/w3c/webappsec-permissions-policy/issues/189#issuecomment-904783021).
 
-[^24]: Consider disabling the JIT for your normal browsing too; doing so removes whole classes of vulnerabilities. In Firefox, navigate to <samp>about:<wbr />config</samp> and toggle some flags under <code>javascript<wbr />.options</code>.
+[^24]: My site caches HTML documents for ten minutes and caches the RSS feed for several hours. I disagree with webhint's recommendations against this: cache durations should be based on request rates and how often a resource is updated. I also disagree with some of its `content-type` recommendations: you don't need to declare UTF-8 charsets for SVG content-type headers if the SVG is ASCII-only and called from a UTF-8 HTML document. You gain nothing but header bloat by doing so.
 
+[^25]: Consider disabling the JIT for your normal browsing too; doing so removes whole classes of vulnerabilities. In Firefox, navigate to <samp>about:<wbr />config</samp> and toggle some flags under <code>javascript<wbr />.options</code>.
+
 <figure itemprop="hasPart" itemscope="" itemtype="https://schema.org/SoftwareSourceCode">
 <figcaption>
@@ -1669,6 +1671,6 @@ A special thanks goes out to GothAlice for the questions she answered in <samp>#
 
 In Chromium and derivatives, run the browser with `--js-flags='--jitless'`; in the Tor Browser, set the security level to "Safer".
 
-[^25]: LibreOffice can also render HTML but has extremely limited support for CSS. OnlyOffice seems to work best, but doesn't load images. If your page is CSS-optional, it should look fine in both.
+[^26]: LibreOffice can also render HTML but has extremely limited support for CSS. OnlyOffice seems to work best, but doesn't load images. If your page is CSS-optional, it should look fine in both.
 
@@ -70,7 +70,7 @@ I regularly run axe-core and the IBM Equal Access Accessibility Checker on every
 Compatibility statement
 -----------------------
 
-The website is built on well structured, semantic HTML (including [WAI-ARIA](https://www.w3.org/WAI/standards-guidelines/aria/) and [DPUB-ARIA](https://www.w3.org/TR/dpub-aria-1.1/) where appropriate), enhanced with CSS for styling. The website does **not** rely on modern development practices such as CSS Grid, Flexbox, SVG 2, Web fonts, and JavaScript; this should improve support in older browsers such as Internet Explorer 11. No extra plugins or libraries should be required to view the website.
+The website is built on well structured, semantic, [polyglot XHTML5](https://www.w3.org/TR/html-polyglot/) (including [WAI-ARIA](https://www.w3.org/WAI/standards-guidelines/aria/) and [DPUB-ARIA](https://www.w3.org/TR/dpub-aria-1.1/) extensions where appropriate), enhanced with CSS for styling. The website does **not** rely on modern development practices such as CSS Grid, Flexbox, SVG 2, Web fonts, and JavaScript; this should improve support in older browsers such as Internet Explorer 11. No extra plugins or libraries should be required to view the website.
 
 This site sticks to Web standards. I regularly run a local build of [the Nu HTML Checker](https://github.com/validator/validator), `xmllint`, and [html proofer](https://github.com/gjtorikian/html-proofer) on every page in my sitemap, and see no errors. I do [filter out false Nu positives](https://git.sr.ht/~seirdy/seirdy.one/tree/master/item/linter-configs/vnu_filter.jq) and report them upstream when I can.
 
@@ -1,12 +1,13 @@
-name,prev,home,next
-Indieweb,https://xn--sr8hvo.ws/%F0%9F%98%A9%F0%9F%9A%A3%F0%9F%8D%91/previous,https://xn--sr8hvo.ws/,https://xn--sr8hvo.ws/%F0%9F%98%A9%F0%9F%9A%A3%F0%9F%8D%91/next
-Miniclub,https://miniclub.amongtech.cc/prev/seirdy.one,https://miniclub.amongtech.cc/,https://miniclub.amongtech.cc/next/seirdy.one
-geekring,https://geekring.net/site/167/previous,https://geekring.net/,https://geekring.net/site/167/next
-no js webring,https://nojs.sugarfi.dev/prev.php?url=https://seirdy.one,https://nojs.sugarfi.dev/,https://nojs.sugarfi.dev/next.php?url=https://seirdy.one
-Fediring,https://fediring.net/previous?host=seirdy.one,https://fediring.net/,https://fediring.net/next?host=seirdy.one
-Loop (JS),https://graycot.com/webring/loop-redirect.html?action=prev,https://graycot.com/webring/,https://graycot.com/webring/loop-redirect.html?action=next
-YesterWeb,https://webring.yesterweb.org/noJS/index.php?d=prev&url=https://seirdy.one/,https://yesterweb.org/webring/,https://webring.yesterweb.org/noJS/index.php?d=next&url=https://seirdy.one/
-Retronaut,https://webring.dinhe.net/prev/https://seirdy.one,https://webring.dinhe.net/,https://webring.dinhe.net/next/https://seirdy.one
-Hotline,https://hotlinewebring.club/seirdy/previous,https://hotlinewebring.club,https://hotlinewebring.club/seirdy/next
-Bucket (JS),https://webring.bucketfish.me/redirect.html?to=prev&name=seirdy,https://webring.bucketfish.me/,https://webring.bucketfish.me/redirect.html?to=next&name=seirdy
-Devring,https://devring.club/sites/5/prev,https://devring.club,https://devring.club/sites/5/next
+name,prev,home,next,random
+Indieweb,https://xn--sr8hvo.ws/%F0%9F%98%A9%F0%9F%9A%A3%F0%9F%8D%91/previous,https://xn--sr8hvo.ws/,https://xn--sr8hvo.ws/%F0%9F%98%A9%F0%9F%9A%A3%F0%9F%8D%91/next,null
+Retroweb,https://webri.ng/webring/retroweb/previous?index=3,https://indieseek.xyz/webring/,https://webri.ng/webring/retroweb/next?index=3,https://webri.ng/webring/retroweb/random
+Miniclub,https://miniclub.amongtech.cc/prev/seirdy.one,https://miniclub.amongtech.cc/,https://miniclub.amongtech.cc/next/seirdy.one,https://miniclub.amongtech.cc/random
+geekring,https://geekring.net/site/167/previous,https://geekring.net/,https://geekring.net/site/167/next,https://geekring.net/site/167/random
+no js webring,https://nojs.sugarfi.dev/prev.php?url=https://seirdy.one,https://nojs.sugarfi.dev/,https://nojs.sugarfi.dev/next.php?url=https://seirdy.one,https://nojs.sugarfi.dev/rand.php
+Fediring,https://fediring.net/previous?host=seirdy.one,https://fediring.net/,https://fediring.net/next?host=seirdy.one,https://fediring.net/random
+Loop (JS),https://graycot.com/webring/loop-redirect.html?action=prev,https://graycot.com/webring/,https://graycot.com/webring/loop-redirect.html?action=next,https://graycot.com/webring/loop-redirect.html?action=rand
+YesterWeb,https://webring.yesterweb.org/noJS/index.php?d=prev&url=https://seirdy.one/,https://yesterweb.org/webring/,https://webring.yesterweb.org/noJS/index.php?d=next&url=https://seirdy.one/,https://webring.yesterweb.org/noJS/index.php?d=rand&url=https://seirdy.one/
+Retronaut,https://webring.dinhe.net/prev/https://seirdy.one,https://webring.dinhe.net/,https://webring.dinhe.net/next/https://seirdy.one,null
+Hotline,https://hotlinewebring.club/seirdy/previous,https://hotlinewebring.club,https://hotlinewebring.club/seirdy/next,null
+Bucket (JS),https://webring.bucketfish.me/redirect.html?to=prev&name=seirdy,https://webring.bucketfish.me/,https://webring.bucketfish.me/redirect.html?to=next&name=seirdy,null
+Devring,https://devring.club/sites/5/prev,https://devring.club,https://devring.club/sites/5/next,https://devring.club/random
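Rings without a random-member endpoint use the literal `null` placeholder in the new fifth column, which downstream consumers must check for. A minimal sketch of that check with `awk` (the sample file name is an assumption; the two rows are copied from the CSV):

```shell
#!/bin/sh
# Sketch: list webrings whose "random" column holds the "null" placeholder.
# webrings-sample.csv is a hypothetical stand-in for the real data file.
cat > webrings-sample.csv <<'EOF'
name,prev,home,next,random
Fediring,https://fediring.net/previous?host=seirdy.one,https://fediring.net/,https://fediring.net/next?host=seirdy.one,https://fediring.net/random
Hotline,https://hotlinewebring.club/seirdy/previous,https://hotlinewebring.club,https://hotlinewebring.club/seirdy/next,null
EOF

# skip the header row; print the name of each ring with no random endpoint
awk -F, 'NR > 1 && $5 == "null" { print $1 }' webrings-sample.csv
```

Splitting on commas is safe here because none of the URLs in the file contain one.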
@@ -27,6 +27,11 @@
 <li>
 <a href="{{- index $r 3 -}}" rel="nofollow ugc" referrerpolicy="{{ $refPol }}">Next {{ $webringName }} site</a>
 </li>
+{{- if ne (index $r 4) "null" }}
+<li>
+<a href="{{- index $r 4 -}}" rel="nofollow ugc" referrerpolicy="{{ $refPol }}">Random {{ $webringName }} site</a>
+</li>
+{{- end }}
 </ol>
 </details>
 </li>
@@ -67,16 +67,20 @@ values_to_csv() {
 # values for the GEORGE webring
 george() {
 	printf 'GEORGE,'
-	curl -sSL --compressed 'https://george.gh0.pw/embed.cgi?seirdy' \
-		| htmlq -a href 'main p a' \
-		| values_to_csv
+	{
+		curl -sSL --compressed 'https://george.gh0.pw/embed.cgi?seirdy' \
+			| htmlq -a href 'main p a'
+		echo "null"
+	} | values_to_csv
 }
 
 endless_orbit() {
 	printf 'Endless Orbit,'
-	curl -sSL --compressed https://linkyblog.neocities.org/onionring/onionring-variables.js \
-		| grep -C 1 https://seirdy.one/ \
-		| sd https://seirdy.one/ https://linkyblog.neocities.org/webring.html \
+	{
+		curl -sSL --compressed https://linkyblog.neocities.org/onionring/onionring-variables.js \
+			| grep -C 1 https://seirdy.one/
+		echo "'null',"
+	} | sd https://seirdy.one/ https://linkyblog.neocities.org/webring.html \
 		| sd "\n|'" '' | trim_trailing_comma
 	echo
 }
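The refactor above works because a `{ ...; }` command group shares one stdout, so the fallback `echo "null"` line flows into the same pipe as the scraped URLs before being joined into CSV. A runnable sketch of that pattern with stand-ins (the `printf` data and this `values_to_csv` body are assumptions; the real script's helper and network calls may differ):

```shell
#!/bin/sh
# Sketch of the command-group pattern: both the stand-in "scrape" and the
# fallback echo write to the group's stdout, which feeds values_to_csv.
values_to_csv() {
	# join stdin lines into one comma-separated row (assumed implementation)
	tr '\n' ',' | sed 's/,$//'
}

george_sketch() {
	printf 'GEORGE,'
	{
		# stand-in for: curl ... | htmlq -a href 'main p a'
		printf '%s\n' 'https://example.com/prev' 'https://example.com/next'
		echo "null"
	} | values_to_csv
	echo
}

george_sketch
```

Without the group, only the pipeline's own output would reach `values_to_csv`, and the trailing `null` field would have to be appended in a separate step.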