New note: CNET deleting old articles
commit 1163e0d311 (parent fbd7df2e55)
1 changed file with 14 additions and 0 deletions
content/notes/cnet-didnt-have-to-delete-old-articles.md (new file, +14)
@@ -0,0 +1,14 @@
---
title: "CNET didn’t have to delete old articles"
date: 2023-08-15T09:22:31-07:00
replyURI: "https://gizmodo.com/cnet-deletes-thousands-old-articles-google-search-seo-1850721475"
replyTitle: "CNET Deletes Thousands of Old Articles to Game Google Search"
replyType: "NewsArticle"
replyAuthor: "Gizmodo"
replyAuthorType: "NewsMediaOrganization"
replyAuthorURI: "https://gizmodo.com/"
---
CNET didn't actually have to delete old articles to improve its ranking. If CNET had simply removed those articles from its sitemap, used [WebSub](https://www.w3.org/TR/websub/) to inform Google (and IndexNow to inform Bing, Seznam, and Yandex) of new higher-priority pages, and perhaps used `robots.txt` to disallow crawling of stale pages, it could have kept old content while prioritizing the crawling of recent content. Nothing I just described is Google-specific; these are all agreed-upon standards that work across several search engines.
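
Pinging the engines doesn't take much code, either. Here's a minimal sketch of the ping step, assuming nothing beyond the Python 3 standard library; the feed URL, page URL, and IndexNow key are hypothetical placeholders, while the hub and IndexNow endpoints are the protocols' real public ones.

```python
# Sketch of "ping instead of delete": tell a WebSub hub the feed changed,
# and submit a fresh URL via IndexNow. Placeholders are marked below.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

FEED_URL = "https://example.com/posts.xml"     # hypothetical: the WebSub "topic" (a feed)
HUB_URL = "https://pubsubhubbub.appspot.com/"  # Google's public WebSub hub
INDEXNOW_KEY = "0123456789abcdef"              # hypothetical key, hosted as a file at the site root


def websub_publish(hub: str, topic: str) -> int:
    """POST a form-encoded publish ping so the hub re-fetches the feed."""
    body = urlencode({"hub.mode": "publish", "hub.url": topic}).encode()
    with urlopen(Request(hub, data=body)) as resp:  # data= makes this a POST
        return resp.status  # hubs answer 2xx when the ping is accepted


def indexnow_submit(page_url: str, key: str) -> int:
    """Submit one URL to the shared IndexNow endpoint; participating
    engines (Bing, Seznam, Yandex, ...) exchange these submissions."""
    query = urlencode({"url": page_url, "key": key})
    with urlopen(f"https://api.indexnow.org/indexnow?{query}") as resp:
        return resp.status  # 200 or 202 means accepted


if __name__ == "__main__":
    websub_publish(HUB_URL, FEED_URL)
    indexnow_submit("https://example.com/notes/some-new-page/", INDEXNOW_KEY)
```

The matching `robots.txt` step is just a `Disallow` rule for the stale paths (e.g. `Disallow: /old-articles/` under `User-agent: *`), which stops crawling without taking the pages down.
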
I suppose it's easier to just delete pages, though: less labor means lower expenses. After all, this is the outlet that [cut costs with algorithmically generated articles](https://gizmodo.com/cnet-ai-chatgpt-tech-news-1850017739).