Global divisions on AI governance and energy consumption dominated the Paris AI Summit, highlighting differing national outlooks and concerns over global regulation.
The Artificial Intelligence Action Summit in Paris, the third in a series of global AI summits, highlighted the growing disunity among world leaders on how to approach AI. The summit, attended by prominent figures including Emmanuel Macron, Narendra Modi, and JD Vance, showcased the difficulty of achieving consensus on AI governance.
Effective AI governance involves creating frameworks that balance technological advancements with societal values.
This includes establishing clear guidelines for data collection and usage, ensuring transparency in decision-making processes, and implementing mechanisms for accountability and oversight.
Organizations are developing governance models that incorporate human judgment and ethics to mitigate potential biases and risks associated with AI systems.
In a speech that symbolized the fracturing consensus on AI, US Vice-President JD Vance emphasized the need for international regulatory regimes that foster innovation rather than stifle it. He warned against cooperating with ‘authoritarian’ regimes, a clear reference to China’s involvement in AI development. The US declined to sign the diplomatic declaration on ‘inclusive and sustainable’ AI, citing concerns over global governance and national security.
JD Vance is an American politician, author, and former venture capitalist who took office as US Vice-President in January 2025.
Born in 1984 in Middletown, Ohio, he grew up in a low-income household with a single mother.
He served in the US Marine Corps and later graduated from Yale Law School.
Vance gained recognition for his memoir 'Hillbilly Elegy,' which explores his family's struggles with poverty and addiction.
The book became a bestseller and was adapted into a Netflix film.
The failure to reach consensus on a seemingly uncontroversial document makes meaningful global governance of AI look even more distant. The UK, a major player in AI development, also refused to sign the declaration, saying it did not go far enough in addressing global governance and national security concerns.

Emmanuel Macron acknowledged the vast energy consumption required by AI, emphasizing France’s reliance on nuclear power as a more sustainable option. He poked fun at Donald Trump’s focus on fossil fuels, saying there was ‘no need to drill’ in France. The exchange highlighted the differing national outlooks and the competition among countries at the summit.
Despite ongoing concerns over AI safety, the topic was not a top priority at the Paris summit. Yoshua Bengio, a world-renowned computer scientist, expressed concern that the world is not addressing the implications of highly intelligent AIs. Sir Demis Hassabis, head of Google DeepMind, called for unity in dealing with AI, emphasizing the need for focused international cooperation to address global concerns.
As artificial intelligence continues to advance, concerns about its safety have grown.
Researchers and developers are working to address these issues through the development of more robust testing protocols and the creation of guidelines for responsible AI design.
Key areas of focus include bias mitigation, data privacy, and the potential for AI systems to cause harm.
According to a report by the AI Now Institute, 71% of AI developers believe that their work has the potential to cause significant harm if not designed with safety in mind.
Experts at the summit highlighted the accelerating pace of change in AI development. Sam Altman, CEO of OpenAI, flagged the company’s latest product, Deep Research, an AI agent powered by a version of its cutting-edge o3 model. He warned that advanced AIs could represent ‘the largest change to the global labour market in human history’.
DeepSeek’s founder, Liang Wenfeng, did not attend the summit, but the company’s achievements were discussed extensively. Zhang Guoqing, China’s vice-premier, offered to work with other countries to safeguard security and share AI achievements. However, concerns remain over China’s involvement in AI development.
A consortium led by Elon Musk launched a near-$100bn bid for the non-profit that controls OpenAI, sparking questions about the startup’s future. Sam Altman reassured reporters that the company is not for sale and that its non-profit arm will be preserved.