title | date | replyURI | replyTitle | replyType | replyAuthor | replyAuthorURI | syndicatedCopies
---|---|---|---|---|---|---|---
“Open Artificial Intelligence” misses the point | 2023-07-31T22:12:48-07:00 | https://blog.opensource.org/towards-a-definition-of-open-artificial-intelligence-first-meeting-recap/ | Towards a definition of “Open Artificial Intelligence”: First meeting recap | BlogPosting | Stefano Maffulli | https://www.maffulli.net/ | 
The Open Source Initiative (OSI) is planning to form a definition of "Open Artificial Intelligence" (not to be confused with OpenAI, a company selling proprietary autocomplete software whose technical details only grow less open with each iteration). Unfortunately, the odds that the definition will require the release of training data are slim: the OSI's executive director isn't keen on the idea himself.
I see libre/open-source software as a means to reduce dependence on a vendor, and mitigate the risk of [user domestication]({{<relref "/posts/whatsapp-and-the-domestication-of-users.md">}}). As long as training data is out of the community's reach, it's impossible for the vendor to be replaced. Yes, it's possible to customize or re-train the model, but the vendor remains in control of its future development.
Recent decades have tested the effectiveness of liberating source code as a defense against user domestication, [as I explain in another blog post]({{<relref "/posts/keeping-platforms-open.md">}}). But redefining Open Source so that the label can apply to a model that's impossible to competitively fork would, in my eyes, miss the whole value of FOSS: letting users own not just their tools, but those tools' futures.