OpenAI Considers Exit From Europe as Regulators Plan AI Legislation
OpenAI's CEO, Sam Altman, is currently on a "mini" PR world tour through the UK and Europe, and protesters have been tracking his appearances closely. UK news outlets reported that a demonstration took place yesterday outside a university building in London, where UCL's events team hosted Altman for a fireside discussion on the benefits and risks of advanced AI systems. Attendees noted that Altman expressed optimism about AI's potential to create jobs and reduce inequality, despite prominent calls for a major pause on development. During the British leg of his tour he also visited 10 Downing Street, alongside other AI company leaders, to discuss potential risks originating from his industry with the UK's prime minister. Topics reportedly included national security, existential threats, and disinformation.
At the UCL event, Altman touched upon his recent meetings with European regulators, who are drafting legislation that could impose targeted rules on the AI industry. He said that his company is "gonna try to comply" with these potential new rules and agreed that some form of regulation is necessary, preferring "something between the traditional European approach and the traditional US approach." He took issue with the prospect of large AI models (such as OpenAI's ChatGPT and GPT-4 applications) being classified as "high risk" under the European Union's AI Act provisions: "Either we'll be able to solve those requirements or not...If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."