title: ChatGPT reliance considered harmful
date: 2023-02-18T22:28:09-08:00
Designing tools to make people feel convenienced (the opposite of inconvenienced) is not always the same as designing tools to make people's lives better.
ChatGPT is genuinely useful for tasks like rephrasing ideas and coherently assembling scattered thoughts, assuming that's the sort of thing you're willing to outsource. But current language models are detrimental to ideation: they're built to generate the least interesting, most obvious response to a prompt. That's not merely my opinion; the GPT family of language models works by analyzing statistical patterns in text and predicting the most likely continuation. It's often "correct" when the correct answer happens to be the most predictable, least noteworthy response. Unfortunately, it's just as often convincingly incorrect, and will even defend its wrong answers.
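To make that mechanism concrete, here's a minimal sketch of maximum-likelihood text generation. It's a toy bigram model with an invented corpus, not anything resembling GPT's actual implementation, and real models sample from a distribution rather than always taking the most likely word; but the pressure toward the predictable is the same:

```python
from collections import Counter, defaultdict

# Toy corpus, invented purely for illustration.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the rug",
    "the cat slept on the mat",
]

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def greedy_continuation(word, steps=5):
    """At each step, emit the single most frequent next word."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        # Maximum-likelihood choice: rarer (more interesting)
        # continuations like "slept" or "rug" never surface.
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(greedy_continuation("the"))  # -> "the cat sat on the cat"
```

The less common continuations exist in the training data, but the selection rule guarantees they never appear in the output.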
Use it to think, and your ideas will be disposable. Don't live your life by a model with billions of parameters optimized to help you be as predictable as possible. It's the equivalent of sending your thoughts through a smoothing algorithm that treats interesting ideas as anomalies.
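The smoothing comparison can be made literal. A moving average (a deliberately crude stand-in, not how a language model actually works) replaces each point with the mean of its neighborhood, and whatever stood out gets pulled back toward the ordinary:

```python
def moving_average(values, window=3):
    """Replace each point with the mean of its neighborhood."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        neighborhood = values[max(0, i - half):i + half + 1]
        smoothed.append(sum(neighborhood) / len(neighborhood))
    return smoothed

signal = [1, 1, 1, 9, 1, 1, 1]  # the 9 is the interesting idea
print(moving_average(signal))
# The spike at 9 is flattened to ~3.67 and smeared over its neighbors.
```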
People have long worried that certain trends in investor-backed technology normalize helplessness, creating captive consumers ripe for domestication. Language models are the logical conclusion of that process: they're often developed by and for companies that sell access to the means to change preferences at scale. What happens when people start asking such a model for its preferences?
Use ChatGPT as a cop-out. Sometimes, it's okay to cop out. But don't cop out of what you love to do, or what you want to do well.