Patented.ai Releases LLM Shield to Protect Sensitive Information
With the rise of artificial intelligence (AI) and chatbots, companies are concerned about the risks of employees sharing sensitive or proprietary information with them. San Francisco-based startup Patented.ai has developed a solution to address these concerns: the LLM Shield plug-in. This article will explore what LLM Shield is, how it works, and its potential benefits for companies.
What is LLM Shield?
LLM Shield is a plug-in developed by Patented.ai that warns employees when they’re about to share sensitive or proprietary information with an AI chatbot like OpenAI’s ChatGPT or Google’s Bard. When employees enter company data into a chatbot, that data can then be used to train the large language model (LLM) that powers the chatbot. Patented.ai’s LLM Shield is powered by an AI model that can recognize all types of sensitive data, from trade secrets and personally identifiable information to HIPAA-protected health data and military secrets.
How Does LLM Shield Work?
LLM Shield is designed to integrate with chatbots and can run inside a chatbot window. When an employee types sensitive information into a chatbot’s text field, the plug-in’s AI model recognizes the sensitivity of the data and shows the employee an alert before they even press the send button.
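Patented.ai has not published LLM Shield’s implementation, but the pre-send check described above can be sketched with a simple rule-based filter. This is only an illustration: the pattern list, function names, and blocking behavior below are hypothetical stand-ins for the company’s AI classifier.

```python
import re

# Hypothetical rule-based stand-in for an AI sensitivity classifier:
# each pattern flags one category of sensitive data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def guard_send(text: str) -> bool:
    """Simulate the pre-send alert: return False (block) if anything sensitive is found."""
    hits = check_prompt(text)
    if hits:
        print(f"Warning: prompt appears to contain: {', '.join(hits)}")
        return False
    return True
```

A real product like LLM Shield would replace the regex table with a trained model that also catches free-text trade secrets, but the control flow — scan the prompt, alert before it leaves the browser — is the same.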
The Benefits of LLM Shield
LLM Shield gives companies greater visibility into and control over their employees’ use of LLMs, letting them allow chatbot use with more confidence. Because the underlying AI model recognizes everything from trade secrets and personally identifiable information to HIPAA-protected health data and military secrets, the plug-in is designed to protect the intellectual property (IP) of companies across a variety of industries.
Demand for LLM Shield
Recent stories of leaks at companies like Samsung have prompted increased demand for products like LLM Shield. Patented.ai has accelerated the development of LLM Shield and plans to roll out new types of security products throughout the year. A Patented.ai spokesperson also said that the AI model that powers LLM Shield will be used to develop other products to protect the IP of companies.
Concerns Over AI Models
While generative AI models like ChatGPT and Bard can be useful and time-saving for businesses, many companies are concerned about how sensitive or private information could be used by these models. The inner workings of these models are not yet fully understood, and there is a pressing need for AI trust, risk, and security-management tools to manage data and process flows between users and companies that host generative-AI foundation models.
Companies that Have Banned ChatGPT
Several large companies, including Bank of America, Goldman Sachs, Citigroup, Deutsche Bank, and Wells Fargo, have banned ChatGPT to minimize the risk of leaks. These companies have recognized the potential dangers of using generative AI models and are taking proactive steps to protect their IP.
Patented.ai’s LLM Shield plug-in provides companies with an added layer of security when using AI chatbots. With the ability to recognize sensitive data and alert employees before they share it, LLM Shield can help companies protect their IP and maintain their competitive edge. As demand for AI trust, risk, and security-management tools continues to grow, products like LLM Shield will become increasingly important in ensuring the safe and responsible use of AI.