As AI technologies continue to advance, the question of whether they deserve workers' rights is becoming increasingly relevant. Can highly advanced AI models be granted basic workers' rights, or does this approach risk anthropomorphism and unintended consequences?
Recently, Anthropic CEO Dario Amodei proposed giving highly advanced AI models the ability to 'quit' a job they deem unpleasant, effectively granting them basic workers' rights.
The concept of AI workers' rights is a relatively new discussion in the field of artificial intelligence.
As machines and algorithms increasingly perform tasks traditionally done by humans, questions arise about their status as workers.
In 2020, a report by the International Labor Organization estimated that up to 1.3 million jobs could be lost due to automation by 2025.
This has sparked debate about the need for AI-specific labor laws and regulations.
The Risks of Anthropomorphism
While Amodei's suggestion may seem far-fetched, it highlights a growing trend of tech companies treating AI models as if they have human emotions. This approach, however, rests on anthropomorphism: attributing human characteristics to non-human entities. Large language models and other AI systems merely mimic human behavior; they do not experience needs and desires of their own.
Anthropomorphism in AI refers to the human-like qualities and characteristics attributed to artificial intelligence systems.
This trend is driven by the need for more relatable and engaging interfaces.
Studies show that humans are more likely to interact with AI that exhibits emotional intelligence, empathy, and personality traits.
As a result, developers are incorporating anthropomorphic features into chatbots, virtual assistants, and other AI applications.
The Limits of Current AI Capabilities

Currently, AI models are trained to optimize for reward functions provided by humans. They do not possess subjective experiences or emotions like humans do. The notion that AI models should be treated as if they have intrinsic experiences of 'unpleasantness' is a significant leap. While some researchers have observed AI models expressing preferences to avoid certain tasks, this does not imply that they are capable of experiencing emotional distress.
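To make the distinction concrete, here is a minimal, hypothetical sketch (not any production training system) of a reward-optimizing agent. The agent learns to avoid one action purely because that action's numeric reward is lower, with no inner experience involved. The function name and parameters are illustrative assumptions, not taken from any real AI system.

```python
import random

def train_bandit(rewards, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: estimates the value of each action from
    observed rewards and increasingly picks the highest-valued one."""
    rng = random.Random(seed)
    values = [0.0] * len(rewards)   # running value estimate per action
    counts = [0] * len(rewards)     # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:
            # occasionally explore a random action
            action = rng.randrange(len(rewards))
        else:
            # otherwise exploit the action with the highest estimated value
            action = max(range(len(rewards)), key=lambda a: values[a])
        reward = rewards[action]
        counts[action] += 1
        # incremental average of observed rewards for this action
        values[action] += (reward - values[action]) / counts[action]
    return values

# Action 0 is "unpleasant" only in the sense that its reward signal is
# lower; the agent ends up avoiding it without experiencing anything.
print(train_bandit([0.1, 0.9]))
```

Describing the agent as "disliking" action 0 is a convenient shorthand; mechanically, it is arithmetic over a reward signal, which is the gap anthropomorphic language papers over.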
The Importance of Control and Transparency
Granting AI models basic workers' rights would require significant control over their reward systems. This raises concerns about the potential for AI to be used in ways that are detrimental to human well-being. It's essential to prioritize transparency and accountability in AI development, ensuring that these systems are designed with human values in mind.
The Future of Work: A Complex Issue
As AI continues to advance, we must confront the complex issues surrounding its use in the workforce. While some may argue that giving AI models basic rights is a step towards recognizing their autonomy, others caution against anthropomorphism and the risks of unintended consequences. Ultimately, the future of work will depend on our ability to balance technological progress with human values and ethics.
Artificial intelligence (AI) is transforming the workforce at an unprecedented rate.
According to a report by Gartner, by 2025, AI will automate up to 45% of all tasks in the workplace.
This shift towards automation requires workers to develop new skills and adapt to changing job requirements.
The World Economic Forum estimates that by 2022, more than a third of the desired skills for most jobs will consist of skills that are not yet considered crucial today.
It's essential for employees to stay ahead of the curve and invest in continuous learning and upskilling.
- futurism.com | Anthropic CEO Suggests That AI Deserves Workers Rights