There is no “data ownership”. It’s all made up. If you don’t want people to copy and build off your ideas, don’t share them. That’s not to defend corpos, btw. I posit that any AI models trained on public data must be open-sourced by default.
Autodesk has mandatory cloud saves, and MS got caught training on private GitHub repos. They don’t care whether it’s public or not.
What did the user agreement say? Also, just out of curiosity, do you remember all those privacy nuts back in the day who warned us all about the dangers of closed-source software?
“Private GitHub repos” aren’t really private. You have to self-host for that.
Your heart rate. Your step count. Your location. Your searches. Your browser history. Your call history. Your contacts. Your transactions. Your credit history. Your medical history. This is data that you didn’t choose to create or share, but that you exhaust in the day-to-day things you do.
Surveillance capitalism has grown too unfathomably huge and ingrained to choose not to share this data; that would be akin to checking out of modern life wholesale in a lot of ways. Guarding this data takes not only the realisation that it needs guarding, but changing law and culture such that the parties that have to have all that data to provide you with services cannot take it from you to sell.
There’s a difference between private data and content. Obviously this is not what we’re talking about here.
You were talking about data ownership, not intellectual property.
Context matters
AI is here and it’s here to stay whether we want it or not. Either it’s free and legal for everyone to develop (i.e. training on copyrighted data does not violate copyright), or only the massively rich corporations will be able to afford to pay for (or already happen to have the rights to, as the case may be; see stock photo companies or Reddit for examples) the sheer amounts of data needed to adequately train them.
Amen!