ChatGPT Plugins Pose Security Risks

ChatGPT plugins pose security risks that could allow threat actors to seize control of an organization’s accounts on third-party platforms and access Personally Identifiable Information (PII).

ChatGPT plugins enable users to access up-to-date information (rather than the relatively old data the chatbot was trained on), as well as to integrate ChatGPT with third-party services. For instance, plugins can allow users to interact with their GitHub and Google Drive accounts. This could mean granting the plugin access to the user’s account on the integrated service, posing potential security risks.

OpenAI has since introduced GPTs, bespoke versions of ChatGPT tailored to specific use cases, which reduce dependence on third-party services. As of March 19, 2024, ChatGPT users will no longer be able to install new plugins or create new conversations with existing plugins.

API security firm Salt Security has released a new study of ChatGPT plugins that identifies three types of vulnerabilities. The first was found in the plugin installation process itself, which could allow attackers to install malicious plugins and potentially intercept user messages containing proprietary information.
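In generic OAuth terms, this class of installation flaw resembles a cross-site request forgery: a victim can be tricked into completing an installation with an attacker-supplied authorization code. The sketch below illustrates the standard defense, binding the callback to the session that initiated it with a one-time state value. The function names and in-memory store are hypothetical and do not reflect OpenAI's actual implementation.

```python
import secrets
from urllib.parse import urlencode

# Illustrative in-memory store; a real service would use server-side sessions.
_pending_states: dict[str, str] = {}

def start_install(user_id: str, plugin_auth_url: str) -> str:
    """Begin a plugin install: generate a one-time state bound to this user."""
    state = secrets.token_urlsafe(32)
    _pending_states[state] = user_id
    # The user is sent to the plugin's authorization page with the state attached.
    return f"{plugin_auth_url}?{urlencode({'state': state})}"

def finish_install(user_id: str, state: str, auth_code: str) -> str:
    """Complete the install only if the callback carries a state we issued to this user."""
    expected_user = _pending_states.pop(state, None)
    if expected_user != user_id:
        # A forged or replayed callback (e.g. an attacker-supplied code) is rejected.
        raise PermissionError("OAuth state mismatch: install was not initiated by this user")
    return exchange_code_for_token(auth_code)

def exchange_code_for_token(auth_code: str) -> str:
    # Placeholder for the plugin's token-endpoint call (hypothetical helper).
    return f"token-for-{auth_code}"
```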

Second, flaws were found in PluginLab, a framework for developing ChatGPT plugins, that could lead to account takeovers on third-party platforms such as GitHub.

Third, OAuth redirection manipulation vulnerabilities were identified in several plugins, enabling attackers to steal user credentials and take over accounts.
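In an OAuth redirection manipulation attack, the authorization flow is pointed at a redirect URI the attacker controls, so the authorization code or token is delivered to the attacker rather than the legitimate plugin. The sketch below shows the usual countermeasure, an exact-match allowlist of pre-registered redirect URIs; the domain and function name are illustrative and are not taken from any of the affected plugins.

```python
from urllib.parse import urlparse

# Illustrative allowlist of redirect URIs registered by a legitimate plugin.
REGISTERED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def validate_redirect_uri(requested_uri: str) -> str:
    """Accept only exact, pre-registered redirect URIs served over HTTPS."""
    parsed = urlparse(requested_uri)
    if parsed.scheme != "https":
        raise ValueError("redirect_uri must use HTTPS")
    if requested_uri not in REGISTERED_REDIRECTS:
        # Without this exact-match check, an attacker can point the flow at a
        # domain they control and capture the authorization code or token.
        raise ValueError("redirect_uri is not registered for this client")
    return requested_uri
```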

At the time of this research, ChatGPT plugins represented the primary means of adding functionality and features to the LLM. In November 2023, OpenAI announced that paying customers would be able to create their own GPTs that can be customized for specific topics or tasks. These GPTs are expected to replace plugins.

ChatGPT can gather PII from user interactions to generate responses. OpenAI’s privacy policy specifies that this encompasses user account details, conversation content, and web interaction data. If unauthorized individuals were to gain access, this information could be exposed, potentially compromising user data, training and prediction data, and details of the model’s architecture and parameters.

In March 2023, ChatGPT experienced a disruption caused by a bug in the redis-py open-source client library, which exposed sensitive user information, including chat titles, chat history, and payment details.

The following month OpenAI announced that it would be partnering with bug bounty platform Bugcrowd to launch a bug bounty program. The program is, according to OpenAI, part of its “commitment to secure AI” and to “recognize and reward the valuable insights of security researchers who contribute to keeping our technology and company secure”.

Through the bug bounty program, individuals can report any security flaws, vulnerabilities, or bugs found in OpenAI’s systems in exchange for monetary rewards, which range from $200 for “low-severity” findings to $20,000 for “exceptional discoveries.” Google also launched an AI bug bounty program in fall 2023.

In a blog post published Oct. 26, 2023, Google noted that generative AI raises concerns that are new and different from those of traditional digital security, such as the potential for unfair bias, model manipulation, or misinterpretation of data.