diff --git a/content/notes/cnet-didnt-have-to-delete-old-articles.md b/content/notes/cnet-didnt-have-to-delete-old-articles.md
index 37a7ef4..8bd2f32 100644
--- a/content/notes/cnet-didnt-have-to-delete-old-articles.md
+++ b/content/notes/cnet-didnt-have-to-delete-old-articles.md
@@ -7,6 +7,11 @@
 replyType: "NewsArticle"
 replyAuthor: "Gizmodo"
 replyAuthorType: "NewsMediaOrganization"
 replyAuthorURI: "https://gizmodo.com/"
+syndicatedCopies:
+  - title: 'The Fediverse'
+    url: 'https://pleroma.envs.net/notice/AYlCIhRRPwoFEDB0dM'
+  - title: 'The Mojeek Discourse'
+    url: 'https://community.mojeek.com/t/cnet-didn-t-have-to-delete-old-articles/703'
 ---
 CNET actually didn't have to delete old articles to improve ranking. If CNET simply removed those articles from its sitemap, used [WebSub](https://www.w3.org/TR/websub/) to inform Google (and IndexNow to inform Bing, Seznam, and Yandex) of new higher-priority pages, and maybe used `robots.txt` to disallow crawling of stale pages, then CNET could keep old content while prioritizing the crawling of recent content. Nothing I just described is Google-specific; these are all agreed-upon standards that work across several search engines.
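
The ping workflow the note describes could look roughly like the Python sketch below, using the `requests` library. Everything here is an assumption for illustration: the hub, feed URL, page URLs, and IndexNow key are hypothetical placeholders, and the form-encoded `hub.mode=publish` ping is a common publisher-to-hub convention (understood by hubs such as pubsubhubbub.appspot.com) rather than something the WebSub spec itself mandates.

```python
# Hedged sketch of the crawl-prioritization flow from the note: ping a
# WebSub hub so Google learns the feed has fresh content, then submit
# the same URLs via IndexNow for Bing, Seznam, and Yandex. All URLs
# and the key below are hypothetical placeholders.
import requests

# Google's open hub accepts the conventional 'hub.mode=publish' ping.
WEBSUB_HUB = "https://pubsubhubbub.appspot.com/"
FEED_URL = "https://www.cnet.com/rss/news/"  # assumed feed/topic URL

# An IndexNow key must also be served from the site (by default at
# https://<host>/<key>.txt) so engines can verify ownership.
INDEXNOW_KEY = "0123456789abcdef"  # hypothetical key
NEW_URLS = [
    "https://www.cnet.com/news/fresh-story/",      # hypothetical
    "https://www.cnet.com/reviews/fresh-review/",  # hypothetical
]


def ping_websub(hub: str, topic: str) -> None:
    """Tell a WebSub hub that the topic (feed) URL has new content."""
    resp = requests.post(
        hub, data={"hub.mode": "publish", "hub.url": topic}, timeout=10
    )
    resp.raise_for_status()


def submit_indexnow(host: str, key: str, urls: list[str]) -> None:
    """Submit changed URLs once; IndexNow fans the notification out to
    all participating engines (Bing, Seznam, Yandex, and others)."""
    resp = requests.post(
        "https://api.indexnow.org/indexnow",
        json={"host": host, "key": key, "urlList": urls},
        timeout=10,
    )
    resp.raise_for_status()


if __name__ == "__main__":
    ping_websub(WEBSUB_HUB, FEED_URL)
    submit_indexnow("www.cnet.com", INDEXNOW_KEY, NEW_URLS)
```

The `robots.txt` half of the suggestion needs no code at all: a `Disallow` rule covering the stale sections (the paths would depend on how CNET organizes its archive) keeps crawl budget pointed at recent pages without deleting anything.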