OpenAI’s long-awaited agentic AI model, Operator, requires verification before performing critical tasks, highlighting the tension between power and safety in AI development.
Tension Between Power and Safety
Operator, OpenAI‘s long-awaited agentic AI model, is designed to work on your behalf, following the instructions it’s given like your own little employee. However, before it can perform any significant action, such as submitting an order or sending an email, Operator will need to ask for approval. This measure highlights the tension between keeping stringent guardrails on AI models while allowing them to exercise their powerful capabilities.
The Limits of Operator
A limited preview of Operator is only available to subscribers of the ChatGPT Pro plan, which costs $200 per month. The agentic tool uses its own AI model called Computer-Using Agent to interact with its virtual environment by constantly taking screenshots of your desktop. These screenshots are interpreted by GPT-4o’s image processing capabilities, allowing Operator to use any software it’s looking at.
However, in practice, the experience is not seamless. When the AI gets stuck, as it still often does, it hands control back to the user to remedy the issue. It will also stop working to ask you for your usernames and passwords, entering a “takeover mode.” Users have reported that Operator is “simply too slow” and can be frustrating to use.
Safety Concerns
While safety measures are good, they come at the cost of reliability. If Operator can’t be trusted to work without neutering it, how useful will this tech be? Additionally, if safety and privacy are important to you, then you should already be uneasy with the idea of letting an AI model run rampant on your machine, especially one that relies on constantly screenshotting your desktop.
OpenAI says that it will store your chats and screenshots up to 90 days on its servers, even if you delete them. This raises concerns about data privacy and security. Furthermore, Operator’s ability to browse the web means it will potentially be exposed to all kinds of danger, including attacks called prompt injections that could trick the model into defying its original instructions.
The Future of AI
As AI technology continues to advance, we are faced with the challenge of balancing power and safety. Can we create AI models that can work independently without compromising our safety and security? Only time will tell. For now, Operator is a step in the right direction, but it’s clear that there is still much work to be done.