mirror of
https://git.sr.ht/~seirdy/seirdy.one
synced 2024-11-23 21:02:09 +00:00
New note: "answer engines"
This commit is contained in:
parent
b6cec919ab
commit
10fb1f9e4a
2 changed files with 22 additions and 1 deletion
21
content/notes/answer-engines.md
Normal file
@@ -0,0 +1,21 @@
---
title: "Answer engines"
date: 2022-05-25T19:59:08+00:00
---
<p role="note">
Reply to {{< mention-work itemprop="about" itemtype="BlogPosting" reply=true >}}{{<cited-work name="Is DuckDuckGo, DuckDuckDone?" extraName="headline" url="https://kevq.uk/is-duckduckgo-duckduckdone/">}} by {{<indieweb-person first-name="Kev" last-name="Quirk" url="https://kevq.uk/about/" itemprop="author">}}{{</mention-work>}}
</p>
I read your article and share similar concerns. Using Microsoft Bing and Google Search's commercial APIs generally requires accepting some harsh terms, including a ban on mixing <abbr title="Search Engine Result Pages">SERPs</abbr> from multiple sources (this is why Ixquick shut down and the company pivoted to the Google-exclusive Startpage search service). But the requirement to allow trackers in a companion web browser was new to me.
Most of these agreements are confidential, so users don't really get transparency. On rare occasions, certain engines have successfully negotiated exceptions to result-mixing, but we don't know what other terms are involved in these agreements.
I've catalogued some other engines in my post {{<mention-work itemprop="citation" itemtype="BlogPosting">}}{{<cited-work name="A look at search engines with their own indexes" url="https://seirdy.one/2021/03/10/search-engines-with-own-indexes.html" extraName="headline">}}{{</mention-work>}}, and there are many alternatives that don't have this conflict of interest.
Most of these are not as good as Google/Bing when it comes to finding specific pieces of information, but many are far better when it comes to website discovery under a particular topic. Mainstream engines always seem to serve up webpages carefully designed to answer a specific question when I'm really just trying to learn about a larger topic. When using an engine like Marginalia or Alexandria, I can find "webpages about a topic" rather than "webpages designed to show up for a particular query".
One example: I was using Ansible at work just before my lunch break and I wanted to find examples of idempotent Ansible playbooks. Searching for "Ansible idempotent" on mainstream search engines shows blog posts and forums trying to answer the question "how to make playbooks idempotent". Searching on Alexandria and source code forges turns up actual examples of playbooks and snippets that feature idempotency.
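For readers unfamiliar with the term: an idempotent playbook describes a desired end state, so re-running it after the first successful run changes nothing. A minimal sketch of what I was hoping to find (host group, file names, and task names here are hypothetical, not from any real playbook I found):

```yaml
# Hypothetical idempotent playbook: every task asserts a state
# ("present", "started") rather than performing a one-shot action,
# so repeated runs report "ok" instead of "changed".
- name: Ensure nginx is installed, configured, and running
  hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure the config file matches the expected content
      ansible.builtin.copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
        mode: "0644"
      notify: Reload nginx

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Searching by topic surfaces real playbooks shaped like this; searching by question mostly surfaces articles explaining what "idempotent" means.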
SEO is a major culprit, but it's not the only one. Forum posters are often just trying to get a question answered, yet search engines rank their threads highly because those engines are optimized to find answers rather than general resources.
In short: DuckDuckGo and other Google/Bing/Yandex competitors are tools for answering questions, not tools for learning about a topic. I've tried to reduce my reliance on them.
@@ -2,6 +2,6 @@
{{- with .Get "itemprop" -}}
{{ $itemprop = . }}
{{- end -}}
<span class="h-cite" itemprop="{{ $itemprop }}"{{ with .Get "role" }} role="{{ . }}"{{ end }} itemscope itemtype="https://schema.org/{{ .Get "itemtype" }}">
<span class="h-cite{{ with .Get "reply" }} in-reply-to{{ end }}" itemprop="{{ $itemprop }}"{{ with .Get "role" }} role="{{ . }}"{{ end }} itemscope itemtype="https://schema.org/{{ .Get "itemtype" }}">
{{- .Inner | markdownify | safeHTML -}}
</span>