OpenAI CEO Sam Altman says his company may leave Europe if regulations surrounding AI become too stifling.
The European Union is currently considering what would be the first comprehensive set of rules governing the development of artificial intelligence technology. Companies that deploy generative AI tools like ChatGPT would be required to disclose when copyrighted material was used to develop their systems. Altman says his company “will try to comply” but may be forced to leave.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back,” Altman said in an interview with Reuters. “They are still talking about it. There’s so much they could do, like changing the definition of general-purpose AI systems. There’s a lot of things that could be done.”
The EU AI Act seeks to classify AI into three risk categories. Some applications, such as the social scoring systems used in China, are deemed an ‘unacceptable risk’ for ‘violating fundamental rights.’ Meanwhile, a high-risk AI system must comply with across-the-board standards designed to increase transparency and oversight of AI models. Altman’s concern is that ChatGPT qualifies as high-risk under the current definition.
“If we can comply, we will and if we can’t, we’ll cease operating,” Altman told his audience at a panel discussion hosted by University College London. “We will try. But there are technical limits to what’s possible.” The large language model that powers ChatGPT was trained on datasets scraped from the internet, and researchers have been able to extract verbatim text sequences from that training data.
“These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs,” security researchers disclosed after probing LLMs with prompts designed to elicit exactly this kind of information.
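To give a sense of the shape of such a probe, here is a minimal, illustrative sketch; it is not the researchers’ actual code, and `generate` is a stand-in stub rather than a real model API. The idea is to feed the model a short prefix prompt and flag any output that reappears verbatim in known training text:

```python
def generate(prompt: str) -> str:
    # Stub standing in for a real LLM query (hypothetical, for illustration).
    # Here we simulate a model that regurgitates a memorized string
    # whenever the prompt matches part of it.
    memorized = "Contact: jane.doe@example.com, phone 555-0142"
    return memorized if prompt in memorized else "no match"

def extract_verbatim(prompt: str, training_corpus: set[str]) -> list[str]:
    """Flag model outputs that overlap verbatim with known training text."""
    output = generate(prompt)
    # A verbatim-containment check: any corpus document that contains the
    # output (or is contained in it) counts as a potential leak.
    return [doc for doc in training_corpus if output in doc or doc in output]

corpus = {"Contact: jane.doe@example.com, phone 555-0142 -- internal memo"}
print(extract_verbatim("Contact:", corpus))  # the memorized memo is flagged
```

Real extraction attacks are more elaborate, sampling many continuations and ranking them by the model’s own confidence, but the verbatim-match test above is the core of how a “leak” is detected.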