---
title: "ChatGPT reliance considered harmful"
date: 2023-02-18T22:28:09-08:00
syndicatedCopies:
- title: 'The Fediverse'
  url: 'https://pleroma.envs.net/notice/ASpSGG1Pn2hTBd7dz6'
- title: 'jstpst'
  url: 'https://www.jstpst.net/f/just_post/7820/chatgpt-reliance-considered-harmful'
- title: 'The Mojeek Discourse'
  url: 'https://community.mojeek.com/t/chatgpt-reliance-considered-harmful/535'
---
Designing tools to make people feel convenienced (the opposite of inconvenienced) is sometimes different from designing tools to make people's lives better.

ChatGPT is _very_ useful for tasks such as re-phrasing ideas and coherently assembling scattered thoughts, assuming that's the sort of thing you're willing to outsource. But current language models are detrimental for ideation: they're designed to generate the least interesting, most obvious response to a prompt. That's not my opinion; the GPT family of language models analyzes patterns in language and generates predictable outputs. It's often "correct" when a correct answer is the most predictable, least noteworthy response. Unfortunately, it's often convincingly incorrect: [it even defends wrong answers](https://simonwillison.net/2023/Feb/15/bing/#prompt-leaked).
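
To make the "predictable outputs" point concrete, here's a minimal sketch of greedy decoding, where a model always emits its single most probable next token. It uses a toy distribution, not any real model's API; the prompt, token strings, and probabilities are invented for illustration. Real deployments usually sample with a temperature instead of picking the maximum, but high-probability tokens still dominate the output:

```python
# Toy sketch of greedy decoding: an invented next-token distribution
# for the prompt "The sky is", not any real model's numbers.
next_token_probs = {
    "blue": 0.72,     # the most obvious continuation
    "clear": 0.15,
    "falling": 0.08,  # the interesting one never wins under greedy decoding
    "a lie": 0.05,
}

def greedy_pick(probs: dict) -> str:
    """Return the single most probable token, as greedy decoding would."""
    return max(probs, key=probs.get)

print(greedy_pick(next_token_probs))  # always prints "blue"
```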
Use it to think, and your ideas will be disposable. Don't live your life by a model with billions of parameters optimized to help you be as predictable as possible. It's the equivalent of sending your thoughts through a smoothing algorithm that treats interesting ideas as anomalies.

People have worried about certain trends in investor-backed technology normalizing helplessness to [create captive consumers ripe for domestication](https://seirdy.one/posts/2021/01/27/whatsapp-and-the-domestication-of-users/). This is the logical conclusion of that process. Language models are often developed by and for companies that sell access to [the means to change preferences at scale](https://en.wikipedia.org/wiki/Online_advertising). What happens when people ask such a language model for _its_ preferences?

Use ChatGPT as a cop-out. Sometimes, _it's okay to cop out._ But don't cop out of what you love to do, or want to do well.