OpenAI CEO Sam Altman raised concerns about the potential impact of new artificial intelligence legislation in the European Union (EU), stating that his company may “cease operating” in the region if it cannot comply with the provisions of the upcoming AI Act. Altman said OpenAI intends to try to comply with the regulations but highlighted several criticisms the company has of the act’s current wording.
One key point of contention for OpenAI is the designation of “high-risk” systems in the EU law. While the law is still being revised, as it stands, large AI models like OpenAI’s ChatGPT and GPT-4 may be classified as “high-risk.” This classification would impose additional safety requirements on the companies behind these models. OpenAI argues that its general-purpose systems are not inherently high-risk.
Altman acknowledged that OpenAI would make an effort to meet the requirements set by the EU AI Act but emphasized that there are technical limitations to what is feasible. He stated that if compliance is not possible, the company would have no choice but to cease operations in the EU.
Altman said the law itself is not inherently flawed but emphasized that the nuanced details are crucial. He expressed a preference for a regulatory approach that falls somewhere between the traditional European and U.S. models.
Altman also expressed concerns about the risks associated with artificial intelligence, particularly the potential for AI-generated disinformation tailored to exploit personal biases, which could have implications for future elections. However, he noted that social media platforms play a more significant role in disseminating disinformation than AI language models.
Despite these concerns, Altman maintained an optimistic view of the technology’s future benefits. He also touched on the need to reconsider wealth distribution in an AI-driven future, suggesting it would require a different approach than previous technological revolutions.
Altman revealed that OpenAI plans to publicly intervene on the topic of wealth redistribution in 2024, similar to its current engagement in AI regulatory policy. The company is currently conducting a five-year study on universal basic income, set to conclude next year, which will inform its future initiatives.
During Altman’s appearance at University College London, a small group of protesters gathered outside the venue. They expressed concerns about OpenAI’s pursuit of artificial general intelligence (AGI) and distributed flyers urging people to challenge Altman’s vision for the future. Altman engaged in a brief conversation with the protesters, acknowledging their concerns but maintaining confidence in OpenAI’s approach to safety and capabilities. The dialogue around AI development, its potential risks, and the need for responsible regulation continues to evolve, highlighting the complex challenges faced by companies like OpenAI and the broader AI community.