Mirror of https://git.sr.ht/~seirdy/seirdy.one, synced 2024-11-27 14:12:09 +00:00
syndicate
parent 2a8d60b896
commit e4592387a3

1 changed file with 3 additions and 3 deletions
@@ -6,9 +6,9 @@ replyTitle: "“the secret list of websites”"
 replyType: "BlogPosting"
 replyAuthor: "Chris Coyier"
 replyAuthorURI: "https://chriscoyier.net/"
-#syndicatedCopies:
-#  - title: 'The Fediverse'
-#    url: ''
+syndicatedCopies:
+  - title: 'The Fediverse'
+    url: 'https://pleroma.envs.net/notice/AUttq9kpOmeYZDHRTc'
 ---
 I added an entry to [my robots.txt](https://seirdy.one/robots.txt) to block ChatGPT's crawler, but blocking crawling isn't the same as blocking indexing; it looks like Google chose to use the [Common Crawl](https://commoncrawl.org/) for this and sidestep the need to do crawling of its own. That's a strange decision; after all, Google has a much larger proprietary index at its disposal.
 
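For reference, a robots.txt crawler block is a per-user-agent rule. A minimal sketch, assuming OpenAI's documented GPTBot user agent and Common Crawl's CCBot; the actual entries in seirdy.one's robots.txt may differ:

```
# Hypothetical sketch, not the actual contents of https://seirdy.one/robots.txt.
# Block OpenAI's crawler. This stops future crawling, but it does not remove
# pages already captured in an index or an existing Common Crawl dump.
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler uses a separate user agent.
User-agent: CCBot
Disallow: /
```

Pages already captured in a Common Crawl snapshot remain available to anyone who downloads that snapshot, which is the crawling-versus-indexing gap the note describes.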