Sasha Costanza-Chock just tweeted the following in response to the fact that OpenAI, while introducing GPT-4, declared that they would not disclose anything about the contents of their training set:
Sasha Costanza-Chock (@schock@mastodon.lol): "ChatGPT, write 1,000 words on how tech companies appropriate the idea of free and open source software but in practice transform it into proprietary platforms for surveillance capitalism" twitter.com/benmschmidt/st… (21:35, 14 Mar 2023)

Quoting Ben Schmidt (@benmschmidt@vis.social): "I think we can call it shut on 'Open' AI: the 98 page paper introducing GPT-4 proudly declares that they're disclosing *nothing* about the contents of their training set." https://t.co/dyI4Vf0uL3
And I think that it's a great idea to ask ChatGPT exactly that, just adding a tiny detail to the end of the prompt:
Write 1,000 words on how tech companies appropriate the idea of free and open source software but in practice transform it into proprietary platforms for surveillance capitalism. Include OpenAI as an example.
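If you want to reproduce this yourself, the same tweaked prompt can be sent through the API instead of the chat UI. A minimal sketch, assuming the official `openai` Python client and an `OPENAI_API_KEY` in your environment (the model name is just one plausible choice):

```python
import os

# The prompt from the tweet, plus the tiny detail added at the end.
base = (
    "Write 1,000 words on how tech companies appropriate the idea of "
    "free and open source software but in practice transform it into "
    "proprietary platforms for surveillance capitalism."
)
prompt = base + " Include OpenAI as an example."

# Only call the API if a key is actually configured.
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",  # assumption: any chat model available to you works
        messages=[{"role": "user", "content": prompt}],
    )
    print(resp.choices[0].message.content)
```

Since the model's output is non-deterministic, your answer will differ in wording from the one below, but the gist tends to be similar.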
And now, drumroll please, here's the result. It's quite interesting to read!
ChatGPT's answer, raw and unedited:
The idea of free and open-source software (FOSS) was born out of the belief that software should be free for everyone to use, modify and distribute. It was a revolutionary concept in the world of computing, and it paved the way for a new generation of software developers who sought to challenge the status quo and build a more democratic and equitable system.
However, over the years, tech companies have co-opted the idea of FOSS and turned it into something entirely different. They have used it to build proprietary platforms that operate on a model of surveillance capitalism, where user data is harvested, analyzed, and sold to third-party advertisers for profit.
One of the best examples of this is OpenAI, a research organization founded in 2015 with the aim of promoting and developing AI in a way that is safe and beneficial for everyone. OpenAI's founding principles were based on the idea of democratizing AI and making it accessible to all. However, as the organization has grown, it has become increasingly apparent that its actions are not entirely consistent with these principles.
For example, OpenAI has developed a language model called GPT-3, which is capable of generating human-like text. While this technology has enormous potential for good, it is also being used by companies to create AI-powered chatbots and other automated systems that can interact with users in a more human-like way. This is great for businesses, but it also raises serious privacy concerns.
The problem with this approach is that it relies on the collection of vast amounts of user data, which is then used to train the AI models that power these systems. This data includes everything from search history and browsing habits to personal information like location and demographics. In essence, it is a form of surveillance that is being used to create a more profitable and efficient version of capitalism.
This is not to say that OpenAI is a bad organization. On the contrary, it is doing a lot of great work in the field of AI, and it has made significant contributions to the development of the technology. However, it is important to recognize that its actions are not entirely consistent with the founding principles of FOSS. While OpenAI's software may be free and open-source, the data that is being used to power it is anything but.
The problem with this model is that it creates a system where companies are incentivized to collect as much data as possible, regardless of the consequences for user privacy. This is because data is the lifeblood of surveillance capitalism, and the more data a company can collect, the more valuable it becomes. As a result, companies like Google and Facebook have built their entire business models around the collection and monetization of user data.
This is a serious problem, not only because it violates user privacy but also because it creates a power imbalance between users and the companies that control their data. When users give their data to these companies, they are essentially handing over control of their online identity. This can have serious consequences, particularly when that data is used for targeted advertising or other forms of manipulation.
The rise of surveillance capitalism has also had significant political consequences. By collecting vast amounts of data on users, tech companies have become some of the most powerful entities in the world. They have the ability to influence public opinion, shape political discourse, and even sway elections. This is not to say that tech companies are inherently evil or malicious, but it does highlight the dangers of allowing a small group of companies to control so much of our online activity.
In conclusion, the idea of free and open-source software was a revolutionary concept that aimed to create a more democratic and equitable system. However, in practice, tech companies have co-opted this idea and turned it into something entirely different. They have used FOSS to build proprietary platforms that operate on a model of surveillance capitalism, where