---
title: "CNET didnt have to delete old articles"
date: 2023-08-15T09:22:31-07:00
replyURI: "https://gizmodo.com/cnet-deletes-thousands-old-articles-google-search-seo-1850721475"
replyTitle: "CNET Deletes Thousands of Old Articles to Game Google Search"
replyType: "NewsArticle"
replyAuthor: "Gizmodo"
replyAuthorType: "NewsMediaOrganization"
replyAuthorURI: "https://gizmodo.com/"
syndicatedCopies:
  - title: 'The Fediverse'
    url: 'https://pleroma.envs.net/notice/AYlCIhRRPwoFEDB0dM'
  - title: 'The Mojeek Discourse'
    url: 'https://community.mojeek.com/t/cnet-didn-t-have-to-delete-old-articles/703'
---
CNET actually didn't have to delete old articles to improve its ranking. Had it simply removed those articles from its sitemap, used [WebSub](https://www.w3.org/TR/websub/) to inform Google (and IndexNow to inform Bing, Seznam, and Yandex) of new, higher-priority pages, and maybe used `robots.txt` to disallow crawling of stale pages, CNET could have kept its old content while prioritizing the crawling of recent content. Nothing I just described is Google-specific; these are all agreed-upon standards that work across several search engines.
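
The notification step could look like the sketch below. It's a rough illustration, not anyone's actual tooling: the WebSub spec leaves the publisher-to-hub ping up to each hub, so the `hub.mode=publish` form shown here follows the convention that Google's public hub accepts, and the site, key, and URLs are all placeholders. The IndexNow request targets the shared `api.indexnow.org` endpoint, which forwards submissions to participating engines such as Bing, Seznam, and Yandex.

```python
#!/usr/bin/env python3
"""Rough sketch: notify search engines of new high-priority pages after
updating the sitemap. All hosts, paths, and keys are placeholders."""
import json
import urllib.parse
import urllib.request

SITEMAP = "https://example.com/sitemap.xml"  # hypothetical topic URL
INDEXNOW_KEY = "0123456789abcdef"  # matching key file must be hosted on the site


def ping_websub(hub: str = "https://pubsubhubbub.appspot.com/") -> None:
    """Tell a WebSub hub (here, Google's public hub) that the topic changed."""
    data = urllib.parse.urlencode(
        {"hub.mode": "publish", "hub.url": SITEMAP}
    ).encode()
    urllib.request.urlopen(urllib.request.Request(hub, data=data))


def submit_indexnow(urls: list[str]) -> None:
    """Submit changed URLs once to the shared IndexNow endpoint."""
    body = json.dumps(
        {"host": "example.com", "key": INDEXNOW_KEY, "urlList": urls}
    ).encode()
    req = urllib.request.Request(
        "https://api.indexnow.org/indexnow",
        data=body,
        headers={"Content-Type": "application/json; charset=utf-8"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    ping_websub()
    submit_indexnow(["https://example.com/new-article/"])
```

The crawl-blocking step is even simpler. A minimal sketch, assuming the stale articles live under a hypothetical `/old-archive/` path:

```text
# Hypothetical robots.txt rule: let crawlers skip stale pages
# without removing the pages themselves.
User-agent: *
Disallow: /old-archive/
```
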
I suppose it's easier to just delete pages, though. Less labor means fewer expenses. After all, this is the outlet that [cut costs with algorithmically generated articles](https://gizmodo.com/cnet-ai-chatgpt-tech-news-1850017739).