diff --git a/content/notes/duckduckgo-and-bing.md b/content/notes/duckduckgo-and-bing.md
index 78438fe..e41eff3 100644
--- a/content/notes/duckduckgo-and-bing.md
+++ b/content/notes/duckduckgo-and-bing.md
@@ -1,5 +1,5 @@
 ---
-title: "DuckDuck­Go and Bing"
+title: "DuckDuckGo and Bing"
 date: 2022-06-02T20:59:38-07:00
 replyURI: "https://www.librepunk.club/@penryn/108411423190214816"
 replyTitle: "how would html.duckduckgo.com fit into this?"
@@ -12,7 +12,7 @@ I was referring to crawlers that build indexes for search engines to use. DuckDu
 
 DuckDuckGo and other engines that use Bing's commercial API have contractual arrangements that typically include a clause that says something like "don't you dare change our results, we don't want to create a competitor to Bing that has better results than us". Very few companies manage to negotiate an exception; DuckDuckGo is not one of those companies, to my knowledge.
 
-So to answer your question: it's irrelevant. "html.duckduckgo.com" is a JS-free front-end to DuckDuck­Go's backend, and mostly serves as a proxy to Bing results.
+So to answer your question: it's irrelevant. "html.duckduckgo.com" is a JS-free front-end to DuckDuckGo's backend, and mostly serves as a proxy to Bing results.
 
 For the record, Google isn't any different when it comes to their API. That's why Ixquick shut down and pivoted to Startpage; Google wasn't happy with Ixquick integrating multiple sources.
diff --git a/content/notes/prologue-to-the-meta-post.md b/content/notes/prologue-to-the-meta-post.md
index 7da422d..aeef5fe 100644
--- a/content/notes/prologue-to-the-meta-post.md
+++ b/content/notes/prologue-to-the-meta-post.md
@@ -26,7 +26,7 @@ However, I still have a ways to go. Here's what I plan on adding:
 - Permalinks for bookmarks
 - Atom feed for bookmarks
 - Automatic POSSE of bookmarks to Fedi
-- Time-period based pagination/navigation on notes/posts page.
+- Time-period based pagination and navigation on notes/posts page.
 
 Once I finish the above, I'll be ready for a "meta" post. Some more tasks after that:
diff --git a/content/notes/tuis-and-accessibility.md b/content/notes/tuis-and-accessibility.md
index 31fe822..ad29bfa 100644
--- a/content/notes/tuis-and-accessibility.md
+++ b/content/notes/tuis-and-accessibility.md
@@ -2,7 +2,7 @@
 title: "TUIs and accessibility"
 date: 2022-06-11T13:13:15-07:00
 replyURI: "https://floss.social/@alcinnz/108460252689906224"
-replyTitle: 'My understanding so far has been limited to "use Gettext…avoid NCurses"'
+replyTitle: 'My understanding so far has been limited to use Gettext…avoid NCurses'
 replyType: "SocialMediaPosting"
 replyAuthor: "Adrian Cochrane"
 replyAuthorURI: "https://adrian.geek.nz/"
diff --git a/content/posts/cli-best-practices.md b/content/posts/cli-best-practices.md
index 7d24936..da0a11d 100644
--- a/content/posts/cli-best-practices.md
+++ b/content/posts/cli-best-practices.md
@@ -93,7 +93,7 @@ This is a non-exhaustive list of simple, baseline recommendations for designing
 4. Be predictable. Users expect `git log` to print a commit log. Users do not expect `git log` to make network connections, write something to their filesystem, etc. Try to only perform the minimum functionality suggested by the command. Naturally, this disqualifies opt-out telemetry.
 
-### Documen­tation {#documentation}
+### Documentation {#documentation}
 
 1. Write man pages! Man pages have a standardized,[^5] predictable, searchable format. Many screen-reader users actually have special scripts to make it easy to read man pages. A man page is also trivial to convert to HTML for people who prefer web-based documentation.[^6] If your utility has a config file with special syntax or vocabulary, write a dedicated man page for it in section 5 and mention it in a "SEE ALSO" section.[^7]
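Since the hunk above mentions section-5 man pages, here's a minimal sketch of one in classic man(7) roff; every name in it (`example`, `example.conf`) is hypothetical, not taken from any of the diffed posts:

```roff
.\" example.conf.5: hypothetical skeleton of a section-5 man page
.TH EXAMPLE.CONF 5 2022-06-01 "example 1.0" "File Formats Manual"
.SH NAME
example.conf \- configuration file for example(1)
.SH DESCRIPTION
Each non-comment line of
.I ~/.config/example/example.conf
holds one
.B key=value
pair; lines beginning with # are ignored.
.SH SEE ALSO
.BR example (1)
```

Installed under a `man5` directory (typically `/usr/share/man/man5`), it becomes searchable with `apropos` and readable with `man 5 example.conf`, and the SEE ALSO entry cross-links it from the main page.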
@@ -121,13 +121,13 @@ $ moac -
 
 {{}}
 
-### Mis­cellan­eous {#miscellaneous}
+### Miscellaneous {#miscellaneous}
 
 1. Either delegate output wrapping to the terminal, or detect the number of columns and format output to fit. Prefer the former when given a choice, especially when the output is not a TTY.
 
 2. Be safe. If a tool makes irreversible changes to the outside environment, add a `--dry-run` or equivalent option.
 
-More opinion­ated consider­ations {#more-opinionated-considerations}
+More opinionated considerations {#more-opinionated-considerations}
 -----------------------------------------
 
 These considerations are far more subjective, debatable, and deserving of skepticism than the previous recommendations. There's a reason I call this section "considerations", not "recommendations". Exceptions abound; I'm here to present information, not to think on your behalf.
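To make item 1 under "Miscellaneous" concrete, here's a rough POSIX-shell sketch (mine, not the article's): wrap to the terminal's width only when stdout is a TTY, and leave wrapping to the consumer otherwise. `report.txt` is a stand-in file:

```sh
#!/bin/sh
# Only format for an interactive terminal; pipes and files get unwrapped lines.
if [ -t 1 ]; then
    width="$(tput cols 2>/dev/null || echo 80)"
    fold -s -w "$width" report.txt
else
    cat report.txt
fi
```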
@@ -191,7 +191,7 @@ References and further reading
 
 1. 
-{{}}Harini Sampath, Alice Merrick, and Andrew Macvean. 2021. _{{}}. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan._ ACM, New York, NY, USA, 10 Pages. DOI 10.1145/3411764.3445544{{}}
+{{}}Harini Sampath, Alice Merrick, and Andrew Macvean. 2021. _{{}}. In CHI Conference on Human Factors in Computing Systems (CHI '21), May 8–13, 2021, Yokohama, Japan._ ACM, New York, NY, USA, 10 Pages. DOI 10.1145/3411764.3445544{{}}
 2. 
 {{}}{{}}. Alastair Campbell, Michael Cooper, Andrew Kirkpatrick. W3C. .{{}}
diff --git a/content/posts/floss-security.md b/content/posts/floss-security.md
index ff24cec..9c0e83c 100644
--- a/content/posts/floss-security.md
+++ b/content/posts/floss-security.md
@@ -112,7 +112,7 @@ For more information, we turn to [**core dumps**](https://en.wikipedia.org/wiki/
 
 #### Dynamic analysis example: Zoom
 
-In 2020, Zoom Video Comm­unications came under scrutiny for marketing its "Zoom" software as a secure, end-to-end encrypted solution for video conferencing. Zoom's documentation claimed that it used "AES-256" encryption. Without source code, did we have to take the docs at their word?
+In 2020, Zoom Video Communications came under scrutiny for marketing its "Zoom" software as a secure, end-to-end encrypted solution for video conferencing. Zoom's documentation claimed that it used "AES-256" encryption. Without source code, did we have to take the docs at their word?
 
 [The Citizen Lab](https://citizenlab.ca/) didn't. In April 2020, it published [a report](https://citizenlab.ca/2020/04/move-fast-roll-your-own-crypto-a-quick-look-at-the-confidentiality-of-zoom-meetings/) revealing critical flaws in Zoom's encryption. It utilized Wireshark and [mitmproxy](https://mitmproxy.org/) to analyze networking activity, and inspected core dumps to learn about its encryption implementation. The Citizen Lab's researchers found that Zoom actually used an incredibly flawed implementation of a weak version of AES-128 (ECB mode), and easily bypassed it.
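As a hedged illustration of the core-dump technique described in that hunk (not The Citizen Lab's actual procedure; `some-app` is a placeholder): on Linux, gdb's `gcore` can snapshot a running process for offline inspection:

```sh
# Snapshot a running process's memory without killing it,
# then skim the dump for crypto-related strings.
pid="$(pgrep -o some-app)"
gcore -o some-app.core "$pid"
strings "some-app.core.$pid" | grep -iE 'aes|ecb|key' | head
```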
@@ -210,7 +210,7 @@ I readily concede to several points in favor of source availability from a security perspective:
 
 - It is certainly possible to notice a vulnerability in source code. Excluding low-hanging fruit typically caught by static code analysis and peer review, it's not the main way most vulnerabilities are found nowadays (thanks to {{}} for [reminding me about what source analysis does accomplish](https://lemmy.ml/post/167321/comment/117774)).
 
-- Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Further­more, it's difficult to verify which software a server is running.[^14] For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective.
+- Software as a Service can be incredibly difficult to analyze, as we typically have little more than the ability to query a server. Servers don't send core dumps, server-side binaries, or trace logs for analysis. Furthermore, it's difficult to verify which software a server is running.[^14] For services that require trusting a server, access to the server-side software is important from both a security and a user-freedom perspective.
 
 Most of this post is written with the assumption that binaries are inspectable and traceable. Binary obfuscation and some forms of content protection/DRM violate this assumption and actually do make analysis more difficult.
@@ -225,7 +225,7 @@ Whether or not the source code is available for software does not change how ins
 
 Both Patience and {{}} argue that given the above points, a project whose goal is maximum security would release code. Strictly speaking, I agree. Good intentions don't imply good results, but they can _supplement_ good results to provide some trust in a project's future.
 
-Con­clusion {#conclusion}
+Conclusion {#conclusion}
 ---------------
 
 I've gone over some examples of how analyzing a software's security properties need not depend on source code, and vulnerability discovery in both FLOSS and in proprietary software uses source-agnostic techniques. Dynamic and static black-box techniques are powerful tools that work well from user-space (Zoom) to kernel-space (Linux) to low-level components like Intel ME+AMT. Source code enables the vulnerability-fixing process but has limited utility for the evaluation/discovery process.
diff --git a/content/posts/keeping-platforms-open.md b/content/posts/keeping-platforms-open.md
index 51049b1..8d96113 100644
--- a/content/posts/keeping-platforms-open.md
+++ b/content/posts/keeping-platforms-open.md
@@ -56,7 +56,7 @@ Compare the situation with email: despite Gmail's dominance, other email provide
 
 XMPP is still alive and well, but its current popularity is a fraction of what it once was.
 
-### Implemen­tation clout {#implementation-clout}
+### Implementation clout {#implementation-clout}
 
 Standards are a form of agreements made to ensure compatibility between implementations. Such agreements need to be agreed upon by the implementations themselves. When one implementation grows dominant, so too does its leverage in the decision-making process over shared standards. Too much dominance can create a monoculture in which the dominant implementation is the only implementation that conforms to the spec.
@@ -70,7 +70,7 @@ Since there aren't any third-party clients and servers that can replace the offi
 
 I don't think that Matrix is going to become a fully closed platform anytime soon; the blog post ["On Privacy versus Freedom"](https://matrix.org/blog/2020/01/02/on-privacy-versus-freedom/) seems to put it on the right side of the closed/open spectrum. Clients like [gomuks](https://github.com/tulir/gomuks) and [FluffyChat](https://fluffychat.im/) seem to keep up with Element well enough to serve as partial replacements. I do, however, find its current state problematic and much closer to "closed" on the closed/open spectrum than XMPP, IRC, and email.
 
-### Un­standard­ized feature creep {#unstandardized-feature-creep}
+### Unstandardized feature creep {#unstandardized-feature-creep}
 
 Platforms are more than their protocols. Different implementations have unique behavior to distinguish themselves. Problems arise when dominant implementations' unique unstandardized features grow past a certain point to make a closed superset of an open platform.
diff --git a/content/posts/layered-content-blocking.md b/content/posts/layered-content-blocking.md
index 82b2f6c..2c9702f 100644
--- a/content/posts/layered-content-blocking.md
+++ b/content/posts/layered-content-blocking.md
@@ -70,8 +70,8 @@ One example I find particularly interesting: a friend of mine has been working o
 
 Several more examples are available in [uBlock Origin's resource library](https://github.com/gorhill/uBlock/wiki/Resources-Library). Much of this functionality would be unavailable to Manifest V3 extensions in Chromium.
 
-Fre­quently-asked questions {#frequently-asked-questions}
-------------------------------
+Frequently-asked questions {#frequently-asked-questions}
+--------------------------
 
 I'll update this section as I collect feedback. Watch this space.
diff --git a/content/posts/search-engines-with-own-indexes.md b/content/posts/search-engines-with-own-indexes.md
index bfb30af..4182738 100644
--- a/content/posts/search-engines-with-own-indexes.md
+++ b/content/posts/search-engines-with-own-indexes.md
@@ -56,9 +56,9 @@ These are large engines that pass all my standard tests and more.
 
   - [GMX Search](https://search.gmx.com/web), run by a popular German email provider.
 
-  - (discon­tinued) Runnaroo
+  - (discontinued) Runnaroo
 
-  - [SAPO](https://www.sapo.pt/) (Portu­guese interface, can work with English results)
+  - [SAPO](https://www.sapo.pt/) (Portuguese interface, can work with English results)
 
 - Bing: the runner-up. Allows submitting pages and sitemaps for crawling without login using [the IndexNow API](https://www.indexnow.org/). Its index powers many other engines:
@@ -89,7 +89,7 @@ These are large engines that pass all my standard tests and more.
 
 - Yandex: originally a Russian search engine, it now has an English version. Some Russian results bleed into its English site. Like Bing, it allows submitting pages and sitemaps for crawling using the IndexNow API. Powers:
   - Epic Search (went paid-only as of June 2021)
-  - Occasion­ally powers DuckDuck­Go's link results instead of Bing.
+  - Occasionally powers DuckDuckGo's link results instead of Bing.
 
 - [Mojeek](https://www.mojeek.com/): Seems privacy-oriented with a large index containing billions of pages. Quality isn't at GBY's level, but it's not bad either. If I had to use Mojeek as my default general search engine, I'd live. Partially powers [eTools.ch](https://www.etools.ch/). At this moment, _I think that Mojeek is the best alternative to GBY_ for general search.
@@ -117,7 +117,7 @@ These engines fail badly at a few important tests. Otherwise, they seem to work
 
 - [seekport](http://www.seekport.com/): The interface is in German but it supports searching in English just fine. The default language is selected by your locale. It's really good considering its small index; it hasn't heard of less common terms (e.g. "Seirdy"), but it's able to find relevant results in other tests.
 
-- [Exalead](https://www.exalead.com/search/): slow, quality is hit-and-miss. Its indexer claims to crawl the DMOZ directory, which has since shut down and been replaced by the [Curlie](https://curlie.org) directory. No relevant results for "Oppen­heimer" and some other history-related queries. Allows submitting individual URLs for indexing, but requires solving a Google reCAPTCHA and entering an email address.
+- [Exalead](https://www.exalead.com/search/): slow, quality is hit-and-miss. Its indexer claims to crawl the DMOZ directory, which has since shut down and been replaced by the [Curlie](https://curlie.org) directory. No relevant results for "Oppenheimer" and some other history-related queries. Allows submitting individual URLs for indexing, but requires solving a Google reCAPTCHA and entering an email address.
 
 - [ExactSeek](https://www.exactseek.com/): small index, disproportionately dominated by big sites. Failed multiple tests. Allows submitting individual URLs for crawling, but requires entering an email address and receiving a newsletter. Webmaster tools seem to heavily push for paid SEO options. It also powers SitesOnDisplay and [Blog-search.com](https://www.blog-search.com).
@@ -161,7 +161,7 @@ Results from these search engines don't seem at all useful.
 
 Engines in this category fall back to GBY when their own indexes don't have enough results. As their own indexes grow, some claim that this should happen less often.
 
-- [Brave Search](https://search.brave.com/): Many tests (including all the tests I listed in the "Methodology" section) returned results identical to Google, revealed by a side-by-side comparison with Google, Startpage, and a Searx instance with only Google enabled. Brave claims that this is due to how Cliqz (the discon­tinued engine acquired by Brave) used query logs to build its page models and was optimized to match Google.[^7] The index is independent, but optimizing against Google resulted in too much similarity for the real benefit of an independent index to show. Furthermore, many queries have Bing results mixed in; users can click an "info" button to see the percentage of results that came from its own index. The independent percentage is typically quite high (often close to 100% independent) but can drop for advanced queries.
+- [Brave Search](https://search.brave.com/): Many tests (including all the tests I listed in the "Methodology" section) returned results identical to Google, revealed by a side-by-side comparison with Google, Startpage, and a Searx instance with only Google enabled. Brave claims that this is due to how Cliqz (the discontinued engine acquired by Brave) used query logs to build its page models and was optimized to match Google.[^7] The index is independent, but optimizing against Google resulted in too much similarity for the real benefit of an independent index to show. Furthermore, many queries have Bing results mixed in; users can click an "info" button to see the percentage of results that came from its own index. The independent percentage is typically quite high (often close to 100% independent) but can drop for advanced queries.
 
 - [Plumb](https://plumb.one/): Almost all queries return no results; when this happens, it falls back to Google. It's fairly transparent about the fallback process, but I'm concerned about _how_ it does this: it loads Google's Custom Search scripts from `cse.google.com` onto the page to do a client-side Google search. This can be mitigated by using a browser addon to block `cse.google.com` from loading any scripts. Plumb claims that this is a temporary measure while its index grows, and they're planning on getting rid of this. Allows submitting URLs, but requires solving an hCaptcha. This engine is very new; hopefully as it improves, it could graduate from this section. Its Chief Product Officer [previously founded](https://archive.is/oVAre) the Gibiru search engine which shares the same affiliates and (for now) the same index; the indexes will diverge with time.
@@ -373,7 +373,7 @@ When building webpages, authors need to consider the barriers to entry for a new
 
 Try a "bad" engine from lower in the list. It might show you utter crap. But every garbage heap has an undiscovered treasure. I'm sure that some hidden gems you'll find will be worth your while. Let's add some serendipity to the SEO-filled Web.
 
-Ac­know­ledge­ments {#acknowledgements}
+Acknowledgements {#acknowledgements}
 -------------------------------
 
 Some of this content came from the [Search Engine Map](https://www.searchenginemap.com/) and [Search Engine Party](https://searchengine.party/). A few web directories also proved useful.
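For the `cse.google.com` mitigation mentioned in the Plumb entry earlier in this diff, a single static filter in uBlock Origin should suffice; an untested sketch on my part, not the article's recommendation:

```
! Block Plumb's client-side Google Custom Search scripts
||cse.google.com^$script,domain=plumb.one
```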
diff --git a/content/posts/website-best-practices.gmi b/content/posts/website-best-practices.gmi
index 72180cf..c7b787d 100644
--- a/content/posts/website-best-practices.gmi
+++ b/content/posts/website-best-practices.gmi
@@ -1015,6 +1015,10 @@ Users employing machine translation will not benefit from your soft hyphens, so
    " element instead. Making a single element horizontally scrollable is far better than making the entire page scrollable in two dimensions. Hard-wrap code blocks so that they won't horizontally scroll in most widescreen desktop browsers.
     
+Be sure to test your hyphens with NVDA or Windows Narrator: these screen readers' pronunciation of words can be disrupted by poorly-placed hyphens. Balancing the need to adapt to narrow screens against the need to sound correct to a screen reader is a complex matter. At least, it will be until NVDA gains the ability to ignore soft hyphens properly:
+
+=> https://github.com/nvaccess/nvda/issues/9343 NVDA issue 9343: NVDA isn't ignoring soft hyphens properly
+
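For readers following along: a soft hyphen is the `&shy;` entity placed at acceptable break points, as in this illustration of mine (browsers render the hyphen only when the word actually breaks):

```html
<p>A divi&shy;sible word breaks only on narrow viewports.</p>
```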
 ### Keeping text together
 
 Soft hyphens are great for splitting up text, but some text should stay together. The phrase "10 cm", for instance, would flow poorly if "10" and "cm" appeared on separate lines. Splitting text becomes especially painful on narrow viewports. A non-breaking space keeps the surrounding text from being re-flowed. Use the "&nbsp;" HTML entity:
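The hunk cuts off before the article's own example; an equivalent sketch using the "10 cm" phrase from the paragraph would be:

```html
<p>The jar is 10&nbsp;cm tall, so "10" and "cm" always share a line.</p>
```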
diff --git a/content/posts/website-best-practices.md b/content/posts/website-best-practices.md
index d71e82c..bdb95e9 100644
--- a/content/posts/website-best-practices.md
+++ b/content/posts/website-best-practices.md
@@ -41,9 +41,9 @@ Intro­duction {#introduction}
 
 I realize not everybody's going to ditch the Web and switch to Gemini or Gopher today (that'll take, like, a month at the longest). Until that happens, here's a non-exhaustive, highly-opinionated list of best practices for websites that focus primarily on text. I don't expect anybody to fully agree with the list; nonetheless, the article should have at least some useful information for any web content author or front-end web developer.
 
-My primary focus is [inclusive design](https://100daysofa11y.com/2019/12/03/accommodation-versus-inclusive-design/). Specifically, I focus on supporting _under­represented ways to read a page_. Not all users load a page in a common web-browser and navigate effortlessly with their eyes and hands. Authors often neglect people who read through accessibility tools, tiny viewports, machine translators, "reading mode" implemen­tations, the Tor network, printouts, hostile networks, and uncommon browsers, to name a few. I list more niches in [the conclusion](#conclusion). Compatibility with so many niches sounds far more daunting than it really is: if you only selectively override browser defaults and use plain-old, semantic HTML (POSH), you've done half of the work already.
+My primary focus is [inclusive design](https://100daysofa11y.com/2019/12/03/accommodation-versus-inclusive-design/). Specifically, I focus on supporting _underrepresented ways to read a page_. Not all users load a page in a common web-browser and navigate effortlessly with their eyes and hands. Authors often neglect people who read through accessibility tools, tiny viewports, machine translators, "reading mode" implementations, the Tor network, printouts, hostile networks, and uncommon browsers, to name a few. I list more niches in [the conclusion](#conclusion). Compatibility with so many niches sounds far more daunting than it really is: if you only selectively override browser defaults and use plain-old, semantic HTML (POSH), you've done half of the work already.
 
-One of the core ideas behind the flavor of inclusive design I present is inclusivity by default. Web pages shouldn't use accessible overlays, reduced-data modes, or other personal­izations if these features can be available all the time. Of course, some features conflict; you can't display a light and dark color scheme simultaneously. Personal­ization is a fallback strategy to resolve conflicting needs. Dispro­portionately under­represented needs deserve dispro­portionately greater attention, so they come before personal preferences instead of being relegated to a separate lane.
+One of the core ideas behind the flavor of inclusive design I present is inclusivity by default. Web pages shouldn't use accessible overlays, reduced-data modes, or other personalizations if these features can be available all the time. Of course, some features conflict; you can't display a light and dark color scheme simultaneously. Personalization is a fallback strategy to resolve conflicting needs. Disproportionately underrepresented needs deserve disproportionately greater attention, so they come before personal preferences instead of being relegated to a separate lane.
 
 Another focus is minimalism. [Progressive enhancement](https://en.wikipedia.org/wiki/Progressive_enhancement) is a simple, safe idea that tries to incorporate some responsibility into the design process without rocking the boat too much. I don't find it radical enough. I call my alternative approach "restricted enhancement".
 
@@ -60,7 +60,7 @@ Our goal: make a textual website maximally inclusive, using restricted enhanceme
 Security and privacy
 --------------------
 
-One of the defining differences between textual websites and advanced Web 2.0 sites/apps is safety. Most browser vulnera­bilities are related to modern Web features like JavaScript and WebGL. The simplicity of basic textual websites should guarantee some extra safety; however, webmasters need to take additional measures to ensure limited use of "modern" risky features.
+One of the defining differences between textual websites and advanced Web 2.0 sites/apps is safety. Most browser vulnerabilities are related to modern Web features like JavaScript and WebGL. The simplicity of basic textual websites should guarantee some extra safety; however, webmasters need to take additional measures to ensure limited use of "modern" risky features.
 
 ### TLS
 
@@ -68,7 +68,7 @@ All of the simplicity in the world won't protect a page from unsafe content inje
 
 If your OpenSSL (or equivalent) version is outdated or you don't want to download and run a shell script, SSL Labs' [SSL Server Test](https://www.ssllabs.com/ssltest/) should be equivalent to testssl.sh. Mozilla's [HTTP Observatory](https://observatory.mozilla.org/) offers a subset of Webbkoll's features and is a bit out of date (and requires JavaScript), but it also gives a beginner-friendly score. Most sites should strive for at least a 50, but a score of 100 or even 120 shouldn't be too hard to reach.
 
-A false sense of security is far worse than transparent insecurity. Don't offer broken TLS ciphers, including TLS 1.0 and 1.1. Vintage computers can run TLS 1.2 implemen­tations such as BearSSL surprisingly efficiently, leverage a TLS terminator, or they can use a plain unencrypted connection. A broken cipher suite is security theater.
+A false sense of security is far worse than transparent insecurity. Don't offer broken TLS ciphers, including TLS 1.0 and 1.1. Vintage computers can run TLS 1.2 implementations such as BearSSL surprisingly efficiently, leverage a TLS terminator, or use a plain unencrypted connection. A broken cipher suite is security theater.
 
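A concrete form of that advice, sketched for nginx (an assumption on my part; the article doesn't prescribe a server, and Apache or Caddy have equivalents):

```nginx
# Offer only TLS 1.2 and 1.3; no broken 1.0/1.1 cipher suites.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
```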
 ### Scripts and the Content Security Policy
 
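The diff skips this section's body; for orientation, a restrictive baseline policy for a script-free textual site might look like the following (my sketch, not the article's exact header):

```
content-security-policy: default-src 'none'; img-src 'self'; style-src 'self'; base-uri 'none'; form-action 'self'; frame-ancestors 'none'
```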
@@ -177,7 +177,7 @@ cache-control: max-age=31557600, immutable
 
 In addition to HTML, CSS is also a blocking resource. You could pre-load your CSS using a `link` header. Alternatively: if your compressed CSS is under a kilobyte, consider inlining it in the `<head>` using a `<style>` element.
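Sketches of both options from that last paragraph (assumed markup, not lifted from the article): the preload header is `link: </main.css>; rel=preload; as=style`, and inlining looks like:

```html
<head>
  <!-- Under ~1 KB of compressed CSS? Inline it to avoid a blocking fetch. -->
  <style>
    body { margin: auto; max-width: 60ch; padding: 0 1ch; }
  </style>
</head>
```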