AI Needs ‘Rules of the Road’: Apple CEO Tim Cook
Apple's Tim Cook spoke with Dua Lipa about AI's potential and the necessity for regulatory frameworks.
Jason Nelson
Apple CEO Tim Cook expressed optimism for the future of artificial intelligence, calling it life-changing with limitless possibilities, but emphasized that the technology needs regulations and guardrails to curb misuse.
“It can be life-changing in a good way,” Cook said during an interview with singer Dua Lipa on the At Your Service podcast. “Because it can do things like in the future, I don't mean necessarily today, it can help diagnose a problem that you're having from a health point of view.”
Cook said that AI is present in all Apple products, even though the company doesn't label it as such.
“If you're composing a message or an email on the phone, you'll see predictive typing, which tries to predict your next word so I can quickly choose the word. That's AI,” Cook said.
Tech companies have invested heavily in generative AI since the launch of OpenAI’s ChatGPT last year. Since then, billions have flowed into the AI industry, including over $10 billion that Apple has reportedly invested in AI development, $10 billion from Microsoft into OpenAI, $4 billion from Amazon to Claude AI developer Anthropic, and another $2 billion from Google, also going to Anthropic.
Despite his optimism, however, Cook expressed caution.
“There's a limitless number of things that AI can do. Unfortunately, it can also do not good things,” he said.
Global leaders from the Vatican to the United Nations have sounded the alarm over the rise of AI-generated deepfakes. In October, the UK-based Internet Watch Foundation warned that AI-generated child abuse material was spreading rapidly on the dark web.
“What is needed with this new form of AI, generative AI, is some rules of the road and some regulation around this,” Cook said. “I think many governments around the world are now focused on this and focused on how to do it, and we're trying to help with that. And we're one of the first ones to say this is needed, that some regulation is needed.”
Earlier this year, CEOs from leading AI developers signed onto a Biden Administration pledge to develop AI responsibly. While Microsoft, Meta, OpenAI, Google, Amazon, and others signed on, Apple was not among them.
In May, citing fears of data leaks and loss of intellectual property, Apple joined rival smartphone developer Samsung in prohibiting the use of ChatGPT in the workplace. In July, Bloomberg reported that Apple was quietly developing an AI chatbot to take on ChatGPT.
Apple takes a deliberate approach to artificial intelligence, Cook said, with the tech giant thinking deeply about how people will use its products and whether they could be used for nefarious purposes.
“I think most governments are a little behind the curve today. I think that's a sort of a fair assessment to make,” Cook said. “But I think they're quickly catching up.”
“I think the US, the UK, the EU, and several countries in Asia are quickly coming up to speed,” Cook said.
Earlier this month, 29 countries and the European Union committed to a unified approach to managing artificial intelligence. The Bletchley Declaration, named after the Bletchley Park location of the UK AI Safety Summit, called for global leaders to work together to ensure safety, transparency, and collaboration regarding generative AI.
“I do think there will be some AI regulation in the next 12 to 18 months, and so I'm pretty confident that will happen,” Cook said.
Edited by Ryan Ozawa.