OpenAI has unveiled a pair of fresh safeguards for ChatGPT designed to counter rising dangers to its artificial intelligence framework, based on details from a new company announcement.

As artificial intelligence platforms expand their links to extensive online areas and third-party software, the potential for prompt injection exploits grows. These exploits occur when individuals create misleading inputs to manipulate large language models into carrying out harmful directives or disclosing confidential details.

A key addition to ChatGPT is Lockdown Mode, a selectable protection option tailored for those needing robust data safeguards. It curbs the chatbot's connections to outside resources by shutting down various capabilities and restricting internet access to pre-stored materials rather than real-time fetches. The feature launches initially for corporate subscribers before reaching individual users in the near future.

Simultaneously, more explicit danger indicators are being implemented, using a consistent "Elevated Risk" marker for elements that heighten vulnerability, such as those granting AI systems online connectivity. These markers will show up in ChatGPT, ChatGPT Atlas, and Codex.

The content debuted in our associated outlet PC för Alla and has been adjusted and rendered into English from the original Swedish text.

Viktor produces articles and updates for our connected platforms M3 and PC för Alla. He holds a strong interest in technological advancements and keeps pace with recent gadget introductions and prominent issues in the everyday electronics sector.