OpenAI Policy Change: No More Training on Customer Data
OpenAI, the leading artificial intelligence (AI) research company, recently announced a change in its policy regarding customer data. CEO Sam Altman confirmed to CNBC on Friday that OpenAI will no longer train its large-language models, such as GPT, on paying customers' data, as many customers do not want their data used for this purpose.
The Change in OpenAI's Policy
Altman said, “Customers clearly want us not to train on their data, so we’ve changed our plans: We will not do that.” According to records from the Internet Archive’s Wayback Machine, OpenAI’s terms of service were quietly updated on March 1. Altman also confirmed that OpenAI has not trained on any API data for some time. An API, or application programming interface, lets customers plug their own applications directly into OpenAI’s software.
OpenAI’s business customers, which include Microsoft, Salesforce, and Snapchat, are more likely than individual users to take advantage of OpenAI’s API capabilities. However, OpenAI’s new privacy and data protections extend only to customers who use the company’s API services. The updated Terms of Use note that OpenAI may still use content from services other than its API, which could include text that employees enter into the wildly popular chatbot ChatGPT. Amazon has reportedly warned employees not to share confidential information with ChatGPT for fear that it might show up in answers.
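For readers unfamiliar with the distinction, here is a minimal sketch of what “plugging into OpenAI’s software” through the API looks like, using the official openai Python client (v1 interface); the model name and prompt are purely illustrative and not taken from the article. Under the updated terms described above, text sent through calls like this is not used for training, whereas text typed into the consumer ChatGPT interface may be.

```python
# Illustrative sketch of an API call with the official `openai` Python client (v1+).
# The model name and prompt are placeholders, not details from the article.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Summarize our internal product roadmap in one paragraph."}
    ],
)

# Per OpenAI's updated terms, content submitted through the API is not used for
# training; text entered into the ChatGPT web interface may be.
print(response.choices[0].message.content)
```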
The Impact of Large-Language Models
The change in OpenAI’s policy comes as industries grapple with the prospect of large-language models replacing material that humans create. The Writers Guild of America, for example, began striking Tuesday after negotiations between the Guild and movie studios broke down. The Guild had been pushing for limits on the use of OpenAI’s ChatGPT for generating or rewriting scripts.
Executives are equally concerned about the impact of ChatGPT and similar programs on their intellectual property. Entertainment mogul and IAC chairman Barry Diller has suggested that media companies could take their issues to the courts and potentially sue AI companies over the use of their creative content.
Conclusion
OpenAI’s decision to stop training on customer data is a positive move towards protecting the privacy and data of its customers. However, it remains to be seen how this decision will affect the development of large-language models and their impact on various industries. As technology continues to advance, it is crucial for companies to prioritize ethical considerations and data privacy while also continuing to push the boundaries of AI research.
FAQs
- Why did OpenAI change its policy regarding customer data?
- OpenAI changed its policy because many customers do not want their data to be used for training AI large-language models.
- Which companies are OpenAI’s business customers?
- OpenAI’s business customers include Microsoft, Salesforce, and Snapchat.
- What is the impact of large-language models on industries?
- Large-language models could potentially replace material that humans create, leading to concerns about intellectual property and creativity.
- Why did the Writers Guild of America begin striking?
- The Writers Guild of America began striking after negotiations between the Guild and movie studios broke down; the Guild had been pushing for limits on the use of OpenAI’s ChatGPT for generating or rewriting scripts.
- What should companies prioritize when developing AI technologies?
- Companies should prioritize ethical considerations and data privacy while also continuing to push the boundaries of AI research.