Elon Musk’s Department of Government Efficiency (DOGE) is developing a custom generative AI chatbot, GSAi, for the US General Services Administration. The project is part of President Trump’s ‘AI-first agenda’ to modernize the federal government with advanced technology.
The primary goal of the GSAi chatbot project is to enhance the day-to-day productivity of the GSA’s approximately 12,000 employees, who are responsible for managing office buildings, contracts, and IT infrastructure across the federal government. Additionally, Musk’s team hopes to use the chatbot and other AI tools to analyze vast amounts of contract and procurement data.
The decision to develop a custom chatbot follows discussions between the GSA and Google about its Gemini offering; the agency ultimately determined that Gemini wouldn’t provide the level of data access DOGE desired. The move is part of the Trump administration’s broader effort to reduce costs and modernize the US government.
The White House has not commented on the GSAi chatbot project. However, federal regulations require avoiding even the appearance of a conflict of interest in the choice of suppliers. In this case, DOGE is pushing to install Microsoft’s GitHub Copilot, even though the agency’s IT team had initially approved Anysphere’s Cursor.
Federal regulations also require preliminary security reviews before new tools are deployed. That process can be time-consuming, which may have contributed to Cursor’s inability to win business from DOGE. The government’s interest in AI is not new: a prior presidential directive ordered the General Services Administration to prioritize security reviews for several categories of AI tools, including chatbots and coding assistants.

Artificial intelligence (AI) security refers to the measures taken to protect AI systems from various threats, including data breaches, cyber attacks, and unauthorized access.
As AI becomes increasingly integrated into our daily lives, the risk of AI-related security vulnerabilities grows.
According to a report by Cybersecurity Ventures, the global AI security market is projected to reach $8.4 billion by 2025.
Key areas of concern include model poisoning, where attackers manipulate AI training data to produce biased or incorrect results, and adversarial attacks, which exploit weaknesses in AI decision-making processes.
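To make the adversarial-attack concern concrete, here is a minimal, purely illustrative sketch in Python. It uses a hypothetical toy linear classifier (the weights and inputs are invented for illustration, not drawn from any real system) and shows how a small, gradient-guided nudge to the input can flip the model’s decision:

```python
import numpy as np

# Toy linear classifier: predict 1 if w.x + b > 0.
# Weights and bias are hypothetical, chosen only for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5, 1.0])      # clean input, classified as 1

# FGSM-style adversarial sketch: nudge each feature a small amount
# in the direction that most decreases the decision score. For a
# linear model, the score's gradient with respect to x is just w.
eps = 0.8
x_adv = x - eps * np.sign(w)       # perturbation bounded by eps per feature

print(predict(x), predict(x_adv))  # prints: 1 0 (small change, flipped label)
```

The same idea scales to real neural networks, where attackers estimate the gradient instead of reading it off directly; defenses such as adversarial training exist precisely because these perturbations can be imperceptibly small.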
The federal government’s use of AI is becoming increasingly common, with individual agencies exploring licensing AI software. In transparency reports published during Biden’s term in office, several departments reported they were pursuing the use of AI coding tools. The GSA itself had been exploring three limited-purpose chatbots, including one for handling IT service requests.
According to a report by Gartner, spending on AI is expected to reach $190 billion by 2025.
Organizations are increasingly investing in AI technologies to improve operational efficiency and enhance customer experiences.
The use of cloud-based platforms has simplified the deployment of AI solutions, reducing the complexity and cost associated with traditional on-premise installations.
Furthermore, advancements in machine learning algorithms have improved accuracy and reduced the need for extensive data labeling.
The Trump administration’s approach to adopting emerging technologies has drawn criticism from federal employees, labor unions, Democrats in Congress, and civil society groups, some of whom argue that DOGE’s actions may be unconstitutional. The decision to rapidly develop a custom chatbot also raises concerns about introducing security vulnerabilities, costly errors, or malicious code.
Artificial intelligence (AI) risks refer to potential hazards associated with the development and deployment of intelligent systems.
These risks can be broadly categorized into two types: technical risks and societal risks.
Technical risks include issues such as bias, error propagation, and system instability, while societal risks encompass concerns like job displacement, privacy invasion, and autonomous decision-making.
According to a study by the MIT-IBM Watson AI Lab, 71% of executives believe that AI will displace more jobs than it creates.
As AI continues to advance, understanding these risks is crucial for mitigating their impact and ensuring responsible development.
The GSAi chatbot project is part of the Trump administration’s effort to modernize the federal government with advanced technology. While the project aims to enhance productivity and reduce costs, it also raises concerns about conflicts of interest, security, and the broader risks of adopting new technologies at speed.